Mar 19 11:51:09.140381 master-0 systemd[1]: Starting Kubernetes Kubelet...
Mar 19 11:51:09.826253 master-0 kubenswrapper[3958]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 11:51:09.826253 master-0 kubenswrapper[3958]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Mar 19 11:51:09.826253 master-0 kubenswrapper[3958]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 11:51:09.826253 master-0 kubenswrapper[3958]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 11:51:09.826253 master-0 kubenswrapper[3958]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 19 11:51:09.826253 master-0 kubenswrapper[3958]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 11:51:09.828008 master-0 kubenswrapper[3958]: I0319 11:51:09.826388 3958 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 19 11:51:09.832338 master-0 kubenswrapper[3958]: W0319 11:51:09.832269 3958 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 19 11:51:09.832338 master-0 kubenswrapper[3958]: W0319 11:51:09.832305 3958 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 19 11:51:09.832338 master-0 kubenswrapper[3958]: W0319 11:51:09.832317 3958 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 19 11:51:09.832338 master-0 kubenswrapper[3958]: W0319 11:51:09.832328 3958 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 19 11:51:09.832338 master-0 kubenswrapper[3958]: W0319 11:51:09.832337 3958 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 19 11:51:09.832338 master-0 kubenswrapper[3958]: W0319 11:51:09.832346 3958 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 19 11:51:09.832338 master-0 kubenswrapper[3958]: W0319 11:51:09.832355 3958 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 19 11:51:09.832338 master-0 kubenswrapper[3958]: W0319 11:51:09.832363 3958 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 19 11:51:09.832990 master-0 kubenswrapper[3958]: W0319 11:51:09.832373 3958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 19 11:51:09.832990 master-0 kubenswrapper[3958]: W0319 11:51:09.832382 3958 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 19 11:51:09.832990 master-0 kubenswrapper[3958]: W0319 11:51:09.832390 3958 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 19 11:51:09.832990 master-0 kubenswrapper[3958]: W0319 11:51:09.832399 3958 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 19 11:51:09.832990 master-0 kubenswrapper[3958]: W0319 11:51:09.832408 3958 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 19 11:51:09.832990 master-0 kubenswrapper[3958]: W0319 11:51:09.832416 3958 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 19 11:51:09.832990 master-0 kubenswrapper[3958]: W0319 11:51:09.832425 3958 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 19 11:51:09.832990 master-0 kubenswrapper[3958]: W0319 11:51:09.832433 3958 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 19 11:51:09.832990 master-0 kubenswrapper[3958]: W0319 11:51:09.832442 3958 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 19 11:51:09.832990 master-0 kubenswrapper[3958]: W0319 11:51:09.832450 3958 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 19 11:51:09.832990 master-0 kubenswrapper[3958]: W0319 11:51:09.832460 3958 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 19 11:51:09.832990 master-0 kubenswrapper[3958]: W0319 11:51:09.832468 3958 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 19 11:51:09.832990 master-0 kubenswrapper[3958]: W0319 11:51:09.832476 3958 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 19 11:51:09.832990 master-0 kubenswrapper[3958]: W0319 11:51:09.832506 3958 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 19 11:51:09.832990 master-0 kubenswrapper[3958]: W0319 11:51:09.832515 3958 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 19 11:51:09.832990 master-0 kubenswrapper[3958]: W0319 11:51:09.832524 3958 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 19 11:51:09.832990 master-0 kubenswrapper[3958]: W0319 11:51:09.832532 3958 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 19 11:51:09.832990 master-0 kubenswrapper[3958]: W0319 11:51:09.832541 3958 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 19 11:51:09.832990 master-0 kubenswrapper[3958]: W0319 11:51:09.832549 3958 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 19 11:51:09.832990 master-0 kubenswrapper[3958]: W0319 11:51:09.832561 3958 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 19 11:51:09.834207 master-0 kubenswrapper[3958]: W0319 11:51:09.832571 3958 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 19 11:51:09.834207 master-0 kubenswrapper[3958]: W0319 11:51:09.832581 3958 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 19 11:51:09.834207 master-0 kubenswrapper[3958]: W0319 11:51:09.832591 3958 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 19 11:51:09.834207 master-0 kubenswrapper[3958]: W0319 11:51:09.832600 3958 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 19 11:51:09.834207 master-0 kubenswrapper[3958]: W0319 11:51:09.832612 3958 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
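
The deprecation warnings at the top are expected on OpenShift: the kubelet is still launched with a few legacy command-line flags, and upstream wants those values carried in the file passed via --config (here /etc/kubernetes/kubelet.conf, per the FLAG dump further down). Not every flag has a config-file equivalent (--minimum-container-ttl-duration, for instance, is deprecated outright in favor of eviction settings), but the ones these warnings point at do. As a rough sketch, assuming the stock kubelet.config.k8s.io/v1beta1 schema and taking the values from the FLAG dump below, the equivalent stanza would look something like:

    # Illustrative KubeletConfiguration fragment (kubelet config v1beta1).
    # On OpenShift this file is rendered by the Machine Config Operator,
    # so this is a sketch of the mapping, not something to hand-edit on the node.
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: /var/run/crio/crio.sock             # from --container-runtime-endpoint
    volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec  # from --volume-plugin-dir
    systemReserved:                                               # from --system-reserved
      cpu: 500m
      ephemeral-storage: 1Gi
      memory: 1Gi
    registerWithTaints:                                           # from --register-with-taints
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
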
Mar 19 11:51:09.834207 master-0 kubenswrapper[3958]: W0319 11:51:09.832623 3958 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 19 11:51:09.834207 master-0 kubenswrapper[3958]: W0319 11:51:09.832633 3958 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 19 11:51:09.834207 master-0 kubenswrapper[3958]: W0319 11:51:09.832642 3958 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 19 11:51:09.834207 master-0 kubenswrapper[3958]: W0319 11:51:09.832651 3958 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 19 11:51:09.834207 master-0 kubenswrapper[3958]: W0319 11:51:09.832659 3958 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 19 11:51:09.834207 master-0 kubenswrapper[3958]: W0319 11:51:09.832668 3958 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 19 11:51:09.834207 master-0 kubenswrapper[3958]: W0319 11:51:09.832677 3958 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 19 11:51:09.834207 master-0 kubenswrapper[3958]: W0319 11:51:09.832685 3958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 19 11:51:09.834207 master-0 kubenswrapper[3958]: W0319 11:51:09.832695 3958 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 19 11:51:09.834207 master-0 kubenswrapper[3958]: W0319 11:51:09.832703 3958 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 19 11:51:09.834207 master-0 kubenswrapper[3958]: W0319 11:51:09.832711 3958 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 19 11:51:09.834207 master-0 kubenswrapper[3958]: W0319 11:51:09.832720 3958 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 19 11:51:09.834207 master-0 kubenswrapper[3958]: W0319 11:51:09.832730 3958 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 19 11:51:09.834207 master-0 kubenswrapper[3958]: W0319 11:51:09.832741 3958 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 19 11:51:09.835396 master-0 kubenswrapper[3958]: W0319 11:51:09.832754 3958 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 19 11:51:09.835396 master-0 kubenswrapper[3958]: W0319 11:51:09.832764 3958 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 19 11:51:09.835396 master-0 kubenswrapper[3958]: W0319 11:51:09.832775 3958 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 19 11:51:09.835396 master-0 kubenswrapper[3958]: W0319 11:51:09.832785 3958 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 19 11:51:09.835396 master-0 kubenswrapper[3958]: W0319 11:51:09.832825 3958 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 19 11:51:09.835396 master-0 kubenswrapper[3958]: W0319 11:51:09.832838 3958 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 19 11:51:09.835396 master-0 kubenswrapper[3958]: W0319 11:51:09.832849 3958 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 19 11:51:09.835396 master-0 kubenswrapper[3958]: W0319 11:51:09.832860 3958 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 19 11:51:09.835396 master-0 kubenswrapper[3958]: W0319 11:51:09.832871 3958 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 19 11:51:09.835396 master-0 kubenswrapper[3958]: W0319 11:51:09.832883 3958 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 19 11:51:09.835396 master-0 kubenswrapper[3958]: W0319 11:51:09.832914 3958 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 19 11:51:09.835396 master-0 kubenswrapper[3958]: W0319 11:51:09.832927 3958 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 19 11:51:09.835396 master-0 kubenswrapper[3958]: W0319 11:51:09.832938 3958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 19 11:51:09.835396 master-0 kubenswrapper[3958]: W0319 11:51:09.832954 3958 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 19 11:51:09.835396 master-0 kubenswrapper[3958]: W0319 11:51:09.832965 3958 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 19 11:51:09.835396 master-0 kubenswrapper[3958]: W0319 11:51:09.832976 3958 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 19 11:51:09.835396 master-0 kubenswrapper[3958]: W0319 11:51:09.832987 3958 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 19 11:51:09.835396 master-0 kubenswrapper[3958]: W0319 11:51:09.832999 3958 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 19 11:51:09.835396 master-0 kubenswrapper[3958]: W0319 11:51:09.833011 3958 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 19 11:51:09.835396 master-0 kubenswrapper[3958]: W0319 11:51:09.833022 3958 feature_gate.go:330] unrecognized feature gate: Example
Mar 19 11:51:09.835396 master-0 kubenswrapper[3958]: W0319 11:51:09.833033 3958 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 19 11:51:09.836734 master-0 kubenswrapper[3958]: W0319 11:51:09.833048 3958 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 19 11:51:09.836734 master-0 kubenswrapper[3958]: W0319 11:51:09.833063 3958 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 19 11:51:09.836734 master-0 kubenswrapper[3958]: W0319 11:51:09.833078 3958 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
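
The long runs of "unrecognized feature gate" warnings are OpenShift-specific gate names (IngressControllerLBSubnetsAWS, ManagedBootImages, GatewayAPI, and so on) that ride along in the cluster's feature-gate set but mean nothing to the upstream kubelet, whose gate registry only knows upstream Kubernetes gates; it warns once per unknown name and otherwise ignores it. The whole block repeats below each time the gate set is re-parsed during startup, so the volume looks worse than it is. When scanning a node's journal, a one-liner like this (assuming the kubelet runs as kubelet.service and you have a shell on the node) collapses the repeats:

    # Count each distinct unrecognized gate once instead of reading four repeats.
    journalctl -b -u kubelet | grep -o 'unrecognized feature gate: [A-Za-z0-9]*' \
      | sort | uniq -c | sort -rn
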
Mar 19 11:51:09.836734 master-0 kubenswrapper[3958]: W0319 11:51:09.833092 3958 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 19 11:51:09.836734 master-0 kubenswrapper[3958]: I0319 11:51:09.834564 3958 flags.go:64] FLAG: --address="0.0.0.0"
Mar 19 11:51:09.836734 master-0 kubenswrapper[3958]: I0319 11:51:09.834593 3958 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 19 11:51:09.836734 master-0 kubenswrapper[3958]: I0319 11:51:09.834622 3958 flags.go:64] FLAG: --anonymous-auth="true"
Mar 19 11:51:09.836734 master-0 kubenswrapper[3958]: I0319 11:51:09.834635 3958 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 19 11:51:09.836734 master-0 kubenswrapper[3958]: I0319 11:51:09.834648 3958 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 19 11:51:09.836734 master-0 kubenswrapper[3958]: I0319 11:51:09.834658 3958 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 19 11:51:09.836734 master-0 kubenswrapper[3958]: I0319 11:51:09.834671 3958 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 19 11:51:09.836734 master-0 kubenswrapper[3958]: I0319 11:51:09.834683 3958 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 19 11:51:09.836734 master-0 kubenswrapper[3958]: I0319 11:51:09.834693 3958 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 19 11:51:09.836734 master-0 kubenswrapper[3958]: I0319 11:51:09.834703 3958 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 19 11:51:09.836734 master-0 kubenswrapper[3958]: I0319 11:51:09.834713 3958 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 19 11:51:09.836734 master-0 kubenswrapper[3958]: I0319 11:51:09.834724 3958 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 19 11:51:09.836734 master-0 kubenswrapper[3958]: I0319 11:51:09.834734 3958 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 19 11:51:09.836734 master-0 kubenswrapper[3958]: I0319 11:51:09.834743 3958 flags.go:64] FLAG: --cgroup-root=""
Mar 19 11:51:09.836734 master-0 kubenswrapper[3958]: I0319 11:51:09.834753 3958 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 19 11:51:09.836734 master-0 kubenswrapper[3958]: I0319 11:51:09.834764 3958 flags.go:64] FLAG: --client-ca-file=""
Mar 19 11:51:09.836734 master-0 kubenswrapper[3958]: I0319 11:51:09.834773 3958 flags.go:64] FLAG: --cloud-config=""
Mar 19 11:51:09.836734 master-0 kubenswrapper[3958]: I0319 11:51:09.834783 3958 flags.go:64] FLAG: --cloud-provider=""
Mar 19 11:51:09.838324 master-0 kubenswrapper[3958]: I0319 11:51:09.834793 3958 flags.go:64] FLAG: --cluster-dns="[]"
Mar 19 11:51:09.838324 master-0 kubenswrapper[3958]: I0319 11:51:09.834850 3958 flags.go:64] FLAG: --cluster-domain=""
Mar 19 11:51:09.838324 master-0 kubenswrapper[3958]: I0319 11:51:09.834860 3958 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 19 11:51:09.838324 master-0 kubenswrapper[3958]: I0319 11:51:09.834882 3958 flags.go:64] FLAG: --config-dir=""
Mar 19 11:51:09.838324 master-0 kubenswrapper[3958]: I0319 11:51:09.834892 3958 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 19 11:51:09.838324 master-0 kubenswrapper[3958]: I0319 11:51:09.834902 3958 flags.go:64] FLAG: --container-log-max-files="5"
Mar 19 11:51:09.838324 master-0 kubenswrapper[3958]: I0319 11:51:09.834915 3958 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 19 11:51:09.838324 master-0 kubenswrapper[3958]: I0319 11:51:09.834925 3958 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 19 11:51:09.838324 master-0 kubenswrapper[3958]: I0319 11:51:09.834936 3958 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 19 11:51:09.838324 master-0 kubenswrapper[3958]: I0319 11:51:09.834946 3958 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 19 11:51:09.838324 master-0 kubenswrapper[3958]: I0319 11:51:09.834956 3958 flags.go:64] FLAG: --contention-profiling="false"
Mar 19 11:51:09.838324 master-0 kubenswrapper[3958]: I0319 11:51:09.834966 3958 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 19 11:51:09.838324 master-0 kubenswrapper[3958]: I0319 11:51:09.834976 3958 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 19 11:51:09.838324 master-0 kubenswrapper[3958]: I0319 11:51:09.834989 3958 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 19 11:51:09.838324 master-0 kubenswrapper[3958]: I0319 11:51:09.834999 3958 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 19 11:51:09.838324 master-0 kubenswrapper[3958]: I0319 11:51:09.835010 3958 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 19 11:51:09.838324 master-0 kubenswrapper[3958]: I0319 11:51:09.835020 3958 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 19 11:51:09.838324 master-0 kubenswrapper[3958]: I0319 11:51:09.835031 3958 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 19 11:51:09.838324 master-0 kubenswrapper[3958]: I0319 11:51:09.835040 3958 flags.go:64] FLAG: --enable-load-reader="false"
Mar 19 11:51:09.838324 master-0 kubenswrapper[3958]: I0319 11:51:09.835050 3958 flags.go:64] FLAG: --enable-server="true"
Mar 19 11:51:09.838324 master-0 kubenswrapper[3958]: I0319 11:51:09.835060 3958 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 19 11:51:09.838324 master-0 kubenswrapper[3958]: I0319 11:51:09.835078 3958 flags.go:64] FLAG: --event-burst="100"
Mar 19 11:51:09.838324 master-0 kubenswrapper[3958]: I0319 11:51:09.835089 3958 flags.go:64] FLAG: --event-qps="50"
Mar 19 11:51:09.838324 master-0 kubenswrapper[3958]: I0319 11:51:09.835098 3958 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 19 11:51:09.838324 master-0 kubenswrapper[3958]: I0319 11:51:09.835108 3958 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 19 11:51:09.840151 master-0 kubenswrapper[3958]: I0319 11:51:09.835118 3958 flags.go:64] FLAG: --eviction-hard=""
Mar 19 11:51:09.840151 master-0 kubenswrapper[3958]: I0319 11:51:09.835130 3958 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 19 11:51:09.840151 master-0 kubenswrapper[3958]: I0319 11:51:09.835139 3958 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 19 11:51:09.840151 master-0 kubenswrapper[3958]: I0319 11:51:09.835149 3958 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 19 11:51:09.840151 master-0 kubenswrapper[3958]: I0319 11:51:09.835159 3958 flags.go:64] FLAG: --eviction-soft=""
Mar 19 11:51:09.840151 master-0 kubenswrapper[3958]: I0319 11:51:09.835170 3958 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 19 11:51:09.840151 master-0 kubenswrapper[3958]: I0319 11:51:09.835179 3958 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 19 11:51:09.840151 master-0 kubenswrapper[3958]: I0319 11:51:09.835189 3958 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 19 11:51:09.840151 master-0 kubenswrapper[3958]: I0319 11:51:09.835199 3958 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 19 11:51:09.840151 master-0 kubenswrapper[3958]: I0319 11:51:09.835208 3958 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 19 11:51:09.840151 master-0 kubenswrapper[3958]: I0319 11:51:09.835218 3958 flags.go:64] FLAG: --fail-swap-on="true"
Mar 19 11:51:09.840151 master-0 kubenswrapper[3958]: I0319 11:51:09.835228 3958 flags.go:64] FLAG: --feature-gates=""
Mar 19 11:51:09.840151 master-0 kubenswrapper[3958]: I0319 11:51:09.835240 3958 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 19 11:51:09.840151 master-0 kubenswrapper[3958]: I0319 11:51:09.835250 3958 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 19 11:51:09.840151 master-0 kubenswrapper[3958]: I0319 11:51:09.835273 3958 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 19 11:51:09.840151 master-0 kubenswrapper[3958]: I0319 11:51:09.835284 3958 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 19 11:51:09.840151 master-0 kubenswrapper[3958]: I0319 11:51:09.835294 3958 flags.go:64] FLAG: --healthz-port="10248"
Mar 19 11:51:09.840151 master-0 kubenswrapper[3958]: I0319 11:51:09.835304 3958 flags.go:64] FLAG: --help="false"
Mar 19 11:51:09.840151 master-0 kubenswrapper[3958]: I0319 11:51:09.835314 3958 flags.go:64] FLAG: --hostname-override=""
Mar 19 11:51:09.840151 master-0 kubenswrapper[3958]: I0319 11:51:09.835324 3958 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 19 11:51:09.840151 master-0 kubenswrapper[3958]: I0319 11:51:09.835333 3958 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 19 11:51:09.840151 master-0 kubenswrapper[3958]: I0319 11:51:09.835344 3958 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 19 11:51:09.840151 master-0 kubenswrapper[3958]: I0319 11:51:09.835353 3958 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 19 11:51:09.840151 master-0 kubenswrapper[3958]: I0319 11:51:09.835362 3958 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 19 11:51:09.840151 master-0 kubenswrapper[3958]: I0319 11:51:09.835372 3958 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 19 11:51:09.841741 master-0 kubenswrapper[3958]: I0319 11:51:09.835382 3958 flags.go:64] FLAG: --image-service-endpoint=""
Mar 19 11:51:09.841741 master-0 kubenswrapper[3958]: I0319 11:51:09.835391 3958 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 19 11:51:09.841741 master-0 kubenswrapper[3958]: I0319 11:51:09.835401 3958 flags.go:64] FLAG: --kube-api-burst="100"
Mar 19 11:51:09.841741 master-0 kubenswrapper[3958]: I0319 11:51:09.835411 3958 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 19 11:51:09.841741 master-0 kubenswrapper[3958]: I0319 11:51:09.835422 3958 flags.go:64] FLAG: --kube-api-qps="50"
Mar 19 11:51:09.841741 master-0 kubenswrapper[3958]: I0319 11:51:09.835432 3958 flags.go:64] FLAG: --kube-reserved=""
Mar 19 11:51:09.841741 master-0 kubenswrapper[3958]: I0319 11:51:09.835441 3958 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 19 11:51:09.841741 master-0 kubenswrapper[3958]: I0319 11:51:09.835451 3958 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 19 11:51:09.841741 master-0 kubenswrapper[3958]: I0319 11:51:09.835461 3958 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 19 11:51:09.841741 master-0 kubenswrapper[3958]: I0319 11:51:09.835471 3958 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 19 11:51:09.841741 master-0 kubenswrapper[3958]: I0319 11:51:09.835480 3958 flags.go:64] FLAG: --lock-file=""
Mar 19 11:51:09.841741 master-0 kubenswrapper[3958]: I0319 11:51:09.835490 3958 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 19 11:51:09.841741 master-0 kubenswrapper[3958]: I0319 11:51:09.835501 3958 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 19 11:51:09.841741 master-0 kubenswrapper[3958]: I0319 11:51:09.835512 3958 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 19 11:51:09.841741 master-0 kubenswrapper[3958]: I0319 11:51:09.835526 3958 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 19 11:51:09.841741 master-0 kubenswrapper[3958]: I0319 11:51:09.835536 3958 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 19 11:51:09.841741 master-0 kubenswrapper[3958]: I0319 11:51:09.835546 3958 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 19 11:51:09.841741 master-0 kubenswrapper[3958]: I0319 11:51:09.835556 3958 flags.go:64] FLAG: --logging-format="text"
Mar 19 11:51:09.841741 master-0 kubenswrapper[3958]: I0319 11:51:09.835565 3958 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 19 11:51:09.841741 master-0 kubenswrapper[3958]: I0319 11:51:09.835577 3958 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 19 11:51:09.841741 master-0 kubenswrapper[3958]: I0319 11:51:09.835586 3958 flags.go:64] FLAG: --manifest-url=""
Mar 19 11:51:09.841741 master-0 kubenswrapper[3958]: I0319 11:51:09.835596 3958 flags.go:64] FLAG: --manifest-url-header=""
Mar 19 11:51:09.841741 master-0 kubenswrapper[3958]: I0319 11:51:09.835608 3958 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 19 11:51:09.841741 master-0 kubenswrapper[3958]: I0319 11:51:09.835618 3958 flags.go:64] FLAG: --max-open-files="1000000"
Mar 19 11:51:09.841741 master-0 kubenswrapper[3958]: I0319 11:51:09.835630 3958 flags.go:64] FLAG: --max-pods="110"
Mar 19 11:51:09.843442 master-0 kubenswrapper[3958]: I0319 11:51:09.835653 3958 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 19 11:51:09.843442 master-0 kubenswrapper[3958]: I0319 11:51:09.835663 3958 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 19 11:51:09.843442 master-0 kubenswrapper[3958]: I0319 11:51:09.835673 3958 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 19 11:51:09.843442 master-0 kubenswrapper[3958]: I0319 11:51:09.835683 3958 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 19 11:51:09.843442 master-0 kubenswrapper[3958]: I0319 11:51:09.835693 3958 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 19 11:51:09.843442 master-0 kubenswrapper[3958]: I0319 11:51:09.835703 3958 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 19 11:51:09.843442 master-0 kubenswrapper[3958]: I0319 11:51:09.835713 3958 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 19 11:51:09.843442 master-0 kubenswrapper[3958]: I0319 11:51:09.835734 3958 flags.go:64] FLAG: --node-status-max-images="50"
Mar 19 11:51:09.843442 master-0 kubenswrapper[3958]: I0319 11:51:09.835744 3958 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 19 11:51:09.843442 master-0 kubenswrapper[3958]: I0319 11:51:09.835754 3958 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 19 11:51:09.843442 master-0 kubenswrapper[3958]: I0319 11:51:09.835765 3958 flags.go:64] FLAG: --pod-cidr=""
Mar 19 11:51:09.843442 master-0 kubenswrapper[3958]: I0319 11:51:09.835774 3958 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422"
Mar 19 11:51:09.843442 master-0 kubenswrapper[3958]: I0319 11:51:09.835789 3958 flags.go:64] FLAG: --pod-manifest-path=""
Mar 19 11:51:09.843442 master-0 kubenswrapper[3958]: I0319 11:51:09.835826 3958 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 19 11:51:09.843442 master-0 kubenswrapper[3958]: I0319 11:51:09.835836 3958 flags.go:64] FLAG: --pods-per-core="0"
Mar 19 11:51:09.843442 master-0 kubenswrapper[3958]: I0319 11:51:09.835847 3958 flags.go:64] FLAG: --port="10250"
Mar 19 11:51:09.843442 master-0 kubenswrapper[3958]: I0319 11:51:09.835856 3958 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 19 11:51:09.843442 master-0 kubenswrapper[3958]: I0319 11:51:09.835866 3958 flags.go:64] FLAG: --provider-id=""
Mar 19 11:51:09.843442 master-0 kubenswrapper[3958]: I0319 11:51:09.835875 3958 flags.go:64] FLAG: --qos-reserved=""
Mar 19 11:51:09.843442 master-0 kubenswrapper[3958]: I0319 11:51:09.835885 3958 flags.go:64] FLAG: --read-only-port="10255"
Mar 19 11:51:09.843442 master-0 kubenswrapper[3958]: I0319 11:51:09.835896 3958 flags.go:64] FLAG: --register-node="true"
Mar 19 11:51:09.843442 master-0 kubenswrapper[3958]: I0319 11:51:09.835905 3958 flags.go:64] FLAG: --register-schedulable="true"
Mar 19 11:51:09.843442 master-0 kubenswrapper[3958]: I0319 11:51:09.835915 3958 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 19 11:51:09.845183 master-0 kubenswrapper[3958]: I0319 11:51:09.835931 3958 flags.go:64] FLAG: --registry-burst="10"
Mar 19 11:51:09.845183 master-0 kubenswrapper[3958]: I0319 11:51:09.835940 3958 flags.go:64] FLAG: --registry-qps="5"
Mar 19 11:51:09.845183 master-0 kubenswrapper[3958]: I0319 11:51:09.835950 3958 flags.go:64] FLAG: --reserved-cpus=""
Mar 19 11:51:09.845183 master-0 kubenswrapper[3958]: I0319 11:51:09.835960 3958 flags.go:64] FLAG: --reserved-memory=""
Mar 19 11:51:09.845183 master-0 kubenswrapper[3958]: I0319 11:51:09.835972 3958 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 19 11:51:09.845183 master-0 kubenswrapper[3958]: I0319 11:51:09.835982 3958 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 19 11:51:09.845183 master-0 kubenswrapper[3958]: I0319 11:51:09.835992 3958 flags.go:64] FLAG: --rotate-certificates="false"
Mar 19 11:51:09.845183 master-0 kubenswrapper[3958]: I0319 11:51:09.836002 3958 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 19 11:51:09.845183 master-0 kubenswrapper[3958]: I0319 11:51:09.836012 3958 flags.go:64] FLAG: --runonce="false"
Mar 19 11:51:09.845183 master-0 kubenswrapper[3958]: I0319 11:51:09.836021 3958 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 19 11:51:09.845183 master-0 kubenswrapper[3958]: I0319 11:51:09.836031 3958 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 19 11:51:09.845183 master-0 kubenswrapper[3958]: I0319 11:51:09.836041 3958 flags.go:64] FLAG: --seccomp-default="false"
Mar 19 11:51:09.845183 master-0 kubenswrapper[3958]: I0319 11:51:09.836051 3958 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 19 11:51:09.845183 master-0 kubenswrapper[3958]: I0319 11:51:09.836120 3958 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 19 11:51:09.845183 master-0 kubenswrapper[3958]: I0319 11:51:09.836131 3958 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 19 11:51:09.845183 master-0 kubenswrapper[3958]: I0319 11:51:09.836141 3958 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 19 11:51:09.845183 master-0 kubenswrapper[3958]: I0319 11:51:09.836151 3958 flags.go:64] FLAG: --storage-driver-password="root"
Mar 19 11:51:09.845183 master-0 kubenswrapper[3958]: I0319 11:51:09.836161 3958 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 19 11:51:09.845183 master-0 kubenswrapper[3958]: I0319 11:51:09.836171 3958 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 19 11:51:09.845183 master-0 kubenswrapper[3958]: I0319 11:51:09.836180 3958 flags.go:64] FLAG: --storage-driver-user="root"
Mar 19 11:51:09.845183 master-0 kubenswrapper[3958]: I0319 11:51:09.836190 3958 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 19 11:51:09.845183 master-0 kubenswrapper[3958]: I0319 11:51:09.836200 3958 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 19 11:51:09.845183 master-0 kubenswrapper[3958]: I0319 11:51:09.836210 3958 flags.go:64] FLAG: --system-cgroups=""
Mar 19 11:51:09.845183 master-0 kubenswrapper[3958]: I0319 11:51:09.836220 3958 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 19 11:51:09.845183 master-0 kubenswrapper[3958]: I0319 11:51:09.836234 3958 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 19 11:51:09.846890 master-0 kubenswrapper[3958]: I0319 11:51:09.836244 3958 flags.go:64] FLAG: --tls-cert-file=""
Mar 19 11:51:09.846890 master-0 kubenswrapper[3958]: I0319 11:51:09.836253 3958 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 19 11:51:09.846890 master-0 kubenswrapper[3958]: I0319 11:51:09.836267 3958 flags.go:64] FLAG: --tls-min-version=""
Mar 19 11:51:09.846890 master-0 kubenswrapper[3958]: I0319 11:51:09.836277 3958 flags.go:64] FLAG: --tls-private-key-file=""
Mar 19 11:51:09.846890 master-0 kubenswrapper[3958]: I0319 11:51:09.836287 3958 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 19 11:51:09.846890 master-0 kubenswrapper[3958]: I0319 11:51:09.836297 3958 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 19 11:51:09.846890 master-0 kubenswrapper[3958]: I0319 11:51:09.836306 3958 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 19 11:51:09.846890 master-0 kubenswrapper[3958]: I0319 11:51:09.836316 3958 flags.go:64] FLAG: --v="2"
Mar 19 11:51:09.846890 master-0 kubenswrapper[3958]: I0319 11:51:09.836328 3958 flags.go:64] FLAG: --version="false"
Mar 19 11:51:09.846890 master-0 kubenswrapper[3958]: I0319 11:51:09.836340 3958 flags.go:64] FLAG: --vmodule=""
Mar 19 11:51:09.846890 master-0 kubenswrapper[3958]: I0319 11:51:09.836352 3958 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 19 11:51:09.846890 master-0 kubenswrapper[3958]: I0319 11:51:09.836362 3958 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 19 11:51:09.846890 master-0 kubenswrapper[3958]: W0319 11:51:09.836686 3958 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 19 11:51:09.846890 master-0 kubenswrapper[3958]: W0319 11:51:09.836698 3958 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 19 11:51:09.846890 master-0 kubenswrapper[3958]: W0319 11:51:09.836708 3958 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 19 11:51:09.846890 master-0 kubenswrapper[3958]: W0319 11:51:09.836718 3958 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 19 11:51:09.846890 master-0 kubenswrapper[3958]: W0319 11:51:09.836727 3958 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 19 11:51:09.846890 master-0 kubenswrapper[3958]: W0319 11:51:09.836736 3958 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 19 11:51:09.846890 master-0 kubenswrapper[3958]: W0319 11:51:09.836753 3958 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 19 11:51:09.846890 master-0 kubenswrapper[3958]: W0319 11:51:09.836763 3958 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 19 11:51:09.846890 master-0 kubenswrapper[3958]: W0319 11:51:09.836771 3958 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 19 11:51:09.846890 master-0 kubenswrapper[3958]: W0319 11:51:09.836783 3958 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 19 11:51:09.848531 master-0 kubenswrapper[3958]: W0319 11:51:09.836821 3958 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 19 11:51:09.848531 master-0 kubenswrapper[3958]: W0319 11:51:09.836832 3958 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 19 11:51:09.848531 master-0 kubenswrapper[3958]: W0319 11:51:09.836855 3958 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 19 11:51:09.848531 master-0 kubenswrapper[3958]: W0319 11:51:09.836865 3958 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 19 11:51:09.848531 master-0 kubenswrapper[3958]: W0319 11:51:09.836875 3958 feature_gate.go:330] unrecognized feature gate: Example
Mar 19 11:51:09.848531 master-0 kubenswrapper[3958]: W0319 11:51:09.836886 3958 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 19 11:51:09.848531 master-0 kubenswrapper[3958]: W0319 11:51:09.836897 3958 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 19 11:51:09.848531 master-0 kubenswrapper[3958]: W0319 11:51:09.836905 3958 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 19 11:51:09.848531 master-0 kubenswrapper[3958]: W0319 11:51:09.836914 3958 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 19 11:51:09.848531 master-0 kubenswrapper[3958]: W0319 11:51:09.836923 3958 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 19 11:51:09.848531 master-0 kubenswrapper[3958]: W0319 11:51:09.836932 3958 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 19 11:51:09.848531 master-0 kubenswrapper[3958]: W0319 11:51:09.836940 3958 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 19 11:51:09.848531 master-0 kubenswrapper[3958]: W0319 11:51:09.836949 3958 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 19 11:51:09.848531 master-0 kubenswrapper[3958]: W0319 11:51:09.836958 3958 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 19 11:51:09.848531 master-0 kubenswrapper[3958]: W0319 11:51:09.836966 3958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 19 11:51:09.848531 master-0 kubenswrapper[3958]: W0319 11:51:09.836974 3958 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 19 11:51:09.848531 master-0 kubenswrapper[3958]: W0319 11:51:09.836983 3958 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 19 11:51:09.848531 master-0 kubenswrapper[3958]: W0319 11:51:09.836991 3958 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 19 11:51:09.848531 master-0 kubenswrapper[3958]: W0319 11:51:09.837003 3958 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
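
The flags.go:64 FLAG dump above records the effective value of every kubelet flag at startup (it shows up here because the kubelet runs with --v="2"). The reservation-related values are the ones worth reading closely: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi", while --kube-reserved and --eviction-hard are empty on the command line (eviction thresholds may still come from the config file). Node allocatable is derived as capacity minus kube-reserved minus system-reserved minus hard-eviction thresholds, so, with hypothetical numbers purely for illustration:

    # Hypothetical 8-CPU / 16Gi node, using the reservations from the FLAG dump
    # and an assumed 100Mi hard-eviction memory threshold from the config file:
    #   allocatable cpu    = 8000m - 0 (kube) - 500m (system)  = 7500m
    #   allocatable memory = 16384Mi - 0 - 1024Mi - 100Mi      = 15260Mi
    # Compare against what the node actually reports:
    oc describe node master-0 | sed -n '/Capacity:/,/System Info:/p'
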
Mar 19 11:51:09.848531 master-0 kubenswrapper[3958]: W0319 11:51:09.837014 3958 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 19 11:51:09.849930 master-0 kubenswrapper[3958]: W0319 11:51:09.837024 3958 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 19 11:51:09.849930 master-0 kubenswrapper[3958]: W0319 11:51:09.837033 3958 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 19 11:51:09.849930 master-0 kubenswrapper[3958]: W0319 11:51:09.837043 3958 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 19 11:51:09.849930 master-0 kubenswrapper[3958]: W0319 11:51:09.837053 3958 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 19 11:51:09.849930 master-0 kubenswrapper[3958]: W0319 11:51:09.837062 3958 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 19 11:51:09.849930 master-0 kubenswrapper[3958]: W0319 11:51:09.837071 3958 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 19 11:51:09.849930 master-0 kubenswrapper[3958]: W0319 11:51:09.837080 3958 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 19 11:51:09.849930 master-0 kubenswrapper[3958]: W0319 11:51:09.837088 3958 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 19 11:51:09.849930 master-0 kubenswrapper[3958]: W0319 11:51:09.837101 3958 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 19 11:51:09.849930 master-0 kubenswrapper[3958]: W0319 11:51:09.837110 3958 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 19 11:51:09.849930 master-0 kubenswrapper[3958]: W0319 11:51:09.837121 3958 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 19 11:51:09.849930 master-0 kubenswrapper[3958]: W0319 11:51:09.837132 3958 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 19 11:51:09.849930 master-0 kubenswrapper[3958]: W0319 11:51:09.837141 3958 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 19 11:51:09.849930 master-0 kubenswrapper[3958]: W0319 11:51:09.837150 3958 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 19 11:51:09.849930 master-0 kubenswrapper[3958]: W0319 11:51:09.837160 3958 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 19 11:51:09.849930 master-0 kubenswrapper[3958]: W0319 11:51:09.837168 3958 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 19 11:51:09.849930 master-0 kubenswrapper[3958]: W0319 11:51:09.837178 3958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 19 11:51:09.849930 master-0 kubenswrapper[3958]: W0319 11:51:09.837187 3958 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 19 11:51:09.849930 master-0 kubenswrapper[3958]: W0319 11:51:09.837214 3958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 19 11:51:09.851674 master-0 kubenswrapper[3958]: W0319 11:51:09.837224 3958 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 19 11:51:09.851674 master-0 kubenswrapper[3958]: W0319 11:51:09.837233 3958 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 19 11:51:09.851674 master-0 kubenswrapper[3958]: W0319 11:51:09.837241 3958 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 19 11:51:09.851674 master-0 kubenswrapper[3958]: W0319 11:51:09.837249 3958 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 19 11:51:09.851674 master-0 kubenswrapper[3958]: W0319 11:51:09.837258 3958 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 19 11:51:09.851674 master-0 kubenswrapper[3958]: W0319 11:51:09.837267 3958 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 19 11:51:09.851674 master-0 kubenswrapper[3958]: W0319 11:51:09.837276 3958 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 19 11:51:09.851674 master-0 kubenswrapper[3958]: W0319 11:51:09.837284 3958 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 19 11:51:09.851674 master-0 kubenswrapper[3958]: W0319 11:51:09.837293 3958 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 19 11:51:09.851674 master-0 kubenswrapper[3958]: W0319 11:51:09.837301 3958 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 19 11:51:09.851674 master-0 kubenswrapper[3958]: W0319 11:51:09.837310 3958 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 19 11:51:09.851674 master-0 kubenswrapper[3958]: W0319 11:51:09.837318 3958 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 19 11:51:09.851674 master-0 kubenswrapper[3958]: W0319 11:51:09.837327 3958 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 19 11:51:09.851674 master-0 kubenswrapper[3958]: W0319 11:51:09.837335 3958 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 19 11:51:09.851674 master-0 kubenswrapper[3958]: W0319 11:51:09.837343 3958 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 19 11:51:09.851674 master-0 kubenswrapper[3958]: W0319 11:51:09.837351 3958 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 19 11:51:09.851674 master-0 kubenswrapper[3958]: W0319 11:51:09.837360 3958 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 19 11:51:09.851674 master-0 kubenswrapper[3958]: W0319 11:51:09.837368 3958 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 19 11:51:09.851674 master-0 kubenswrapper[3958]: W0319 11:51:09.837377 3958 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 19 11:51:09.851674 master-0 kubenswrapper[3958]: W0319 11:51:09.837385 3958 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 19 11:51:09.852442 master-0 kubenswrapper[3958]: W0319 11:51:09.837395 3958 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 19 11:51:09.852442 master-0 kubenswrapper[3958]: W0319 11:51:09.837409 3958 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 19 11:51:09.852442 master-0 kubenswrapper[3958]: W0319 11:51:09.837418 3958 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 19 11:51:09.852442 master-0 kubenswrapper[3958]: I0319 11:51:09.838216 3958 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 19 11:51:09.852618 master-0 kubenswrapper[3958]: I0319 11:51:09.852562 3958 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 19 11:51:09.852618 master-0 kubenswrapper[3958]: I0319 11:51:09.852596 3958 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 19 11:51:09.852728 master-0 kubenswrapper[3958]: W0319 11:51:09.852694 3958 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 19 11:51:09.852728 master-0 kubenswrapper[3958]: W0319 11:51:09.852712 3958 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 19 11:51:09.852728 master-0 kubenswrapper[3958]: W0319 11:51:09.852718 3958 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 19 11:51:09.852728 master-0 kubenswrapper[3958]: W0319 11:51:09.852724 3958 feature_gate.go:330] unrecognized feature gate: Example
Mar 19 11:51:09.852728 master-0 kubenswrapper[3958]: W0319 11:51:09.852731 3958 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 19 11:51:09.852907 master-0 kubenswrapper[3958]: W0319 11:51:09.852737 3958 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 19 11:51:09.852907 master-0 kubenswrapper[3958]: W0319 11:51:09.852743 3958 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 19 11:51:09.852907 master-0 kubenswrapper[3958]: W0319 11:51:09.852748 3958 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 19 11:51:09.852907 master-0 kubenswrapper[3958]: W0319 11:51:09.852754 3958 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 19 11:51:09.852907 master-0 kubenswrapper[3958]: W0319 11:51:09.852761 3958 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
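
The feature_gate.go:386 line is the summary that matters: after all the warnings, the effective gate map contains only gates this kubelet actually knows, with the four explicitly-set ones (CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders, KMSv1, ValidatingAdmissionPolicy) at true, and the "Kubelet version" line (v1.31.14) marks the point where server startup proper begins. In the rendered config file those settings would come from a featureGates stanza, roughly like the sketch below (illustrative; on OpenShift the cluster-scoped FeatureGate resource drives what gets rendered, not hand edits):

    # Illustrative featureGates fragment of the rendered KubeletConfiguration;
    # the effective map logged at feature_gate.go:386 should agree with it.
    featureGates:
      CloudDualStackNodeIPs: true
      DisableKubeletCloudCredentialProviders: true
      KMSv1: true
      ValidatingAdmissionPolicy: true
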
Mar 19 11:51:09.852907 master-0 kubenswrapper[3958]: W0319 11:51:09.852770 3958 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 19 11:51:09.852907 master-0 kubenswrapper[3958]: W0319 11:51:09.852776 3958 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 19 11:51:09.852907 master-0 kubenswrapper[3958]: W0319 11:51:09.852782 3958 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 19 11:51:09.852907 master-0 kubenswrapper[3958]: W0319 11:51:09.852787 3958 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 19 11:51:09.852907 master-0 kubenswrapper[3958]: W0319 11:51:09.852812 3958 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 19 11:51:09.852907 master-0 kubenswrapper[3958]: W0319 11:51:09.852820 3958 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 19 11:51:09.852907 master-0 kubenswrapper[3958]: W0319 11:51:09.852826 3958 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 19 11:51:09.852907 master-0 kubenswrapper[3958]: W0319 11:51:09.852832 3958 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 19 11:51:09.852907 master-0 kubenswrapper[3958]: W0319 11:51:09.852837 3958 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 19 11:51:09.852907 master-0 kubenswrapper[3958]: W0319 11:51:09.852842 3958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 19 11:51:09.852907 master-0 kubenswrapper[3958]: W0319 11:51:09.852847 3958 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 19 11:51:09.852907 master-0 kubenswrapper[3958]: W0319 11:51:09.852854 3958 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 19 11:51:09.852907 master-0 kubenswrapper[3958]: W0319 11:51:09.852859 3958 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 19 11:51:09.852907 master-0 kubenswrapper[3958]: W0319 11:51:09.852864 3958 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 19 11:51:09.852907 master-0 kubenswrapper[3958]: W0319 11:51:09.852869 3958 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 19 11:51:09.853475 master-0 kubenswrapper[3958]: W0319 11:51:09.852876 3958 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 19 11:51:09.853475 master-0 kubenswrapper[3958]: W0319 11:51:09.852884 3958 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 19 11:51:09.853475 master-0 kubenswrapper[3958]: W0319 11:51:09.852890 3958 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 19 11:51:09.853475 master-0 kubenswrapper[3958]: W0319 11:51:09.852895 3958 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 19 11:51:09.853475 master-0 kubenswrapper[3958]: W0319 11:51:09.852901 3958 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 19 11:51:09.853475 master-0 kubenswrapper[3958]: W0319 11:51:09.852906 3958 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 19 11:51:09.853475 master-0 kubenswrapper[3958]: W0319 11:51:09.852912 3958 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 19 11:51:09.853475 master-0 kubenswrapper[3958]: W0319 11:51:09.852918 3958 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 19 11:51:09.853475 master-0 kubenswrapper[3958]: W0319 11:51:09.852923 3958 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 19 11:51:09.853475 master-0 kubenswrapper[3958]: W0319 11:51:09.852929 3958 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 19 11:51:09.853475 master-0 kubenswrapper[3958]: W0319 11:51:09.852935 3958 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 19 11:51:09.853475 master-0 kubenswrapper[3958]: W0319 11:51:09.852941 3958 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 19 11:51:09.853475 master-0 kubenswrapper[3958]: W0319 11:51:09.852946 3958 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 19 11:51:09.853475 master-0 kubenswrapper[3958]: W0319 11:51:09.852951 3958 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 19 11:51:09.853475 master-0 kubenswrapper[3958]: W0319 11:51:09.852956 3958 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 19 11:51:09.853475 master-0 kubenswrapper[3958]: W0319 11:51:09.852962 3958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 19 11:51:09.853475 master-0 kubenswrapper[3958]: W0319 11:51:09.852966 3958 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 19 11:51:09.853475 master-0 kubenswrapper[3958]: W0319 11:51:09.852971 3958 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 19 11:51:09.853475 master-0 kubenswrapper[3958]: W0319 11:51:09.852978 3958 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 19 11:51:09.854045 master-0 kubenswrapper[3958]: W0319 11:51:09.852983 3958 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 19 11:51:09.854045 master-0 kubenswrapper[3958]: W0319 11:51:09.852990 3958 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 19 11:51:09.854045 master-0 kubenswrapper[3958]: W0319 11:51:09.852996 3958 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 19 11:51:09.854045 master-0 kubenswrapper[3958]: W0319 11:51:09.853002 3958 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 19 11:51:09.854045 master-0 kubenswrapper[3958]: W0319 11:51:09.853007 3958 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 19 11:51:09.854045 master-0 kubenswrapper[3958]: W0319 11:51:09.853012 3958 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 19 11:51:09.854045 master-0 kubenswrapper[3958]: W0319 11:51:09.853018 3958 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 19 11:51:09.854045 master-0 kubenswrapper[3958]: W0319 11:51:09.853023 3958 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 19 11:51:09.854045 master-0 kubenswrapper[3958]: W0319 11:51:09.853028 3958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 19 11:51:09.854045 master-0 kubenswrapper[3958]: W0319 11:51:09.853033 3958 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 19 11:51:09.854045 master-0 kubenswrapper[3958]: W0319 11:51:09.853038 3958 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 19 11:51:09.854045 master-0 kubenswrapper[3958]: W0319 11:51:09.853044 3958 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 19 11:51:09.854045 master-0 kubenswrapper[3958]: W0319 11:51:09.853049 3958 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 19 11:51:09.854045 master-0 kubenswrapper[3958]: W0319 11:51:09.853055 3958 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 19 11:51:09.854045 master-0 kubenswrapper[3958]: W0319 11:51:09.853062 3958 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 19 11:51:09.854045 master-0 kubenswrapper[3958]: W0319 11:51:09.853067 3958 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 19 11:51:09.854045 master-0 kubenswrapper[3958]: W0319 11:51:09.853073 3958 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 19 11:51:09.854045 master-0 kubenswrapper[3958]: W0319 11:51:09.853078 3958 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 19 11:51:09.854045 master-0 kubenswrapper[3958]: W0319 11:51:09.853083 3958 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 19 11:51:09.854871 master-0 kubenswrapper[3958]: W0319 11:51:09.853088 3958 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 19 11:51:09.854871 master-0 kubenswrapper[3958]: W0319 11:51:09.853093 3958 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 19 11:51:09.854871 master-0 kubenswrapper[3958]: W0319 11:51:09.853098 3958 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 19 11:51:09.854871 master-0 kubenswrapper[3958]: W0319 11:51:09.853103 3958 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 19 11:51:09.854871 master-0 kubenswrapper[3958]: W0319 11:51:09.853108 3958 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 19 11:51:09.854871 master-0 kubenswrapper[3958]: W0319 11:51:09.853113 3958 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 19 11:51:09.854871 master-0 kubenswrapper[3958]: W0319 11:51:09.853117 3958 
feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 19 11:51:09.854871 master-0 kubenswrapper[3958]: W0319 11:51:09.853123 3958 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 19 11:51:09.854871 master-0 kubenswrapper[3958]: W0319 11:51:09.853128 3958 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 19 11:51:09.854871 master-0 kubenswrapper[3958]: I0319 11:51:09.853136 3958 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 19 11:51:09.854871 master-0 kubenswrapper[3958]: W0319 11:51:09.853291 3958 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 19 11:51:09.854871 master-0 kubenswrapper[3958]: W0319 11:51:09.853300 3958 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 19 11:51:09.854871 master-0 kubenswrapper[3958]: W0319 11:51:09.853306 3958 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 19 11:51:09.854871 master-0 kubenswrapper[3958]: W0319 11:51:09.853312 3958 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 19 11:51:09.854871 master-0 kubenswrapper[3958]: W0319 11:51:09.853318 3958 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 19 11:51:09.855337 master-0 kubenswrapper[3958]: W0319 11:51:09.853323 3958 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 19 11:51:09.855337 master-0 kubenswrapper[3958]: W0319 11:51:09.853328 3958 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 19 11:51:09.855337 master-0 kubenswrapper[3958]: W0319 11:51:09.853333 3958 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 19 11:51:09.855337 master-0 kubenswrapper[3958]: W0319 11:51:09.853338 3958 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 19 11:51:09.855337 master-0 kubenswrapper[3958]: W0319 11:51:09.853343 3958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 19 11:51:09.855337 master-0 kubenswrapper[3958]: W0319 11:51:09.853348 3958 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 19 11:51:09.855337 master-0 kubenswrapper[3958]: W0319 11:51:09.853352 3958 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 19 11:51:09.855337 master-0 kubenswrapper[3958]: W0319 11:51:09.853357 3958 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 19 11:51:09.855337 master-0 kubenswrapper[3958]: W0319 11:51:09.853363 3958 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 19 11:51:09.855337 master-0 kubenswrapper[3958]: W0319 11:51:09.853368 3958 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 19 11:51:09.855337 master-0 kubenswrapper[3958]: W0319 11:51:09.853373 3958 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 19 11:51:09.855337 master-0 kubenswrapper[3958]: W0319 11:51:09.853378 3958 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 19 11:51:09.855337 
master-0 kubenswrapper[3958]: W0319 11:51:09.853385 3958 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 19 11:51:09.855337 master-0 kubenswrapper[3958]: W0319 11:51:09.853391 3958 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 19 11:51:09.855337 master-0 kubenswrapper[3958]: W0319 11:51:09.853396 3958 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 19 11:51:09.855337 master-0 kubenswrapper[3958]: W0319 11:51:09.853401 3958 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 19 11:51:09.855337 master-0 kubenswrapper[3958]: W0319 11:51:09.853406 3958 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 19 11:51:09.855337 master-0 kubenswrapper[3958]: W0319 11:51:09.853411 3958 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 19 11:51:09.855337 master-0 kubenswrapper[3958]: W0319 11:51:09.853415 3958 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 19 11:51:09.855337 master-0 kubenswrapper[3958]: W0319 11:51:09.853420 3958 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 19 11:51:09.856030 master-0 kubenswrapper[3958]: W0319 11:51:09.853425 3958 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 19 11:51:09.856030 master-0 kubenswrapper[3958]: W0319 11:51:09.853430 3958 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 19 11:51:09.856030 master-0 kubenswrapper[3958]: W0319 11:51:09.853435 3958 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 19 11:51:09.856030 master-0 kubenswrapper[3958]: W0319 11:51:09.853440 3958 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 19 11:51:09.856030 master-0 kubenswrapper[3958]: W0319 11:51:09.853446 3958 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 19 11:51:09.856030 master-0 kubenswrapper[3958]: W0319 11:51:09.853451 3958 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 19 11:51:09.856030 master-0 kubenswrapper[3958]: W0319 11:51:09.853456 3958 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 19 11:51:09.856030 master-0 kubenswrapper[3958]: W0319 11:51:09.853461 3958 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 19 11:51:09.856030 master-0 kubenswrapper[3958]: W0319 11:51:09.853465 3958 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 19 11:51:09.856030 master-0 kubenswrapper[3958]: W0319 11:51:09.853471 3958 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 19 11:51:09.856030 master-0 kubenswrapper[3958]: W0319 11:51:09.853476 3958 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 19 11:51:09.856030 master-0 kubenswrapper[3958]: W0319 11:51:09.853481 3958 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 19 11:51:09.856030 master-0 kubenswrapper[3958]: W0319 11:51:09.853486 3958 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 19 11:51:09.856030 master-0 kubenswrapper[3958]: W0319 11:51:09.853492 3958 feature_gate.go:330] unrecognized feature gate: Example Mar 19 11:51:09.856030 master-0 kubenswrapper[3958]: W0319 11:51:09.853498 3958 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 19 11:51:09.856030 master-0 kubenswrapper[3958]: W0319 11:51:09.853503 3958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 19 
11:51:09.856030 master-0 kubenswrapper[3958]: W0319 11:51:09.853508 3958 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 19 11:51:09.856030 master-0 kubenswrapper[3958]: W0319 11:51:09.853513 3958 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 19 11:51:09.856030 master-0 kubenswrapper[3958]: W0319 11:51:09.853518 3958 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 19 11:51:09.856030 master-0 kubenswrapper[3958]: W0319 11:51:09.853523 3958 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 19 11:51:09.856673 master-0 kubenswrapper[3958]: W0319 11:51:09.853528 3958 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 19 11:51:09.856673 master-0 kubenswrapper[3958]: W0319 11:51:09.853534 3958 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 19 11:51:09.856673 master-0 kubenswrapper[3958]: W0319 11:51:09.853541 3958 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 19 11:51:09.856673 master-0 kubenswrapper[3958]: W0319 11:51:09.853548 3958 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 19 11:51:09.856673 master-0 kubenswrapper[3958]: W0319 11:51:09.853555 3958 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 19 11:51:09.856673 master-0 kubenswrapper[3958]: W0319 11:51:09.853563 3958 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 19 11:51:09.856673 master-0 kubenswrapper[3958]: W0319 11:51:09.853569 3958 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 19 11:51:09.856673 master-0 kubenswrapper[3958]: W0319 11:51:09.853575 3958 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 19 11:51:09.856673 master-0 kubenswrapper[3958]: W0319 11:51:09.853581 3958 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
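The long runs of "unrecognized feature gate" warnings above and below are expected on OpenShift: the rendered kubelet configuration carries the cluster's full feature-gate list, while the kubelet itself only knows the upstream Kubernetes gates, so it warns on the OpenShift-specific names (ManagedBootImages, DNSNameResolver, and so on) and then logs the effective map at feature_gate.go:386. A minimal sketch of that warn-and-skip pattern, with illustrative gate names and defaults rather than the real registry:

```go
package main

import "fmt"

// knownGates stands in for the kubelet's registry of upstream gates; the
// OpenShift-specific names in the log are simply absent, which is what
// produces the warnings. Defaults here are illustrative.
var knownGates = map[string]bool{
	"CloudDualStackNodeIPs":                  true, // GA per the log, locked on
	"DisableKubeletCloudCredentialProviders": true, // GA per the log
	"KMSv1":                                  false, // deprecated per the log
	"ValidatingAdmissionPolicy":              true,
}

// applyGates mirrors the warn-and-skip behaviour seen at feature_gate.go:330
// and the effective-map dump at feature_gate.go:386.
func applyGates(requested map[string]bool) map[string]bool {
	effective := map[string]bool{}
	for name, def := range knownGates {
		effective[name] = def
	}
	for name, enabled := range requested {
		if _, ok := knownGates[name]; !ok {
			fmt.Printf("W: unrecognized feature gate: %s\n", name)
			continue
		}
		effective[name] = enabled
	}
	return effective
}

func main() {
	fmt.Println("feature gates:", applyGates(map[string]bool{
		"ManagedBootImages": true, // OpenShift-only: warned about, then ignored
		"KMSv1":             true, // known but deprecated: still applied
	}))
}
```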
Mar 19 11:51:09.856673 master-0 kubenswrapper[3958]: W0319 11:51:09.853587 3958 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 19 11:51:09.856673 master-0 kubenswrapper[3958]: W0319 11:51:09.853593 3958 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 19 11:51:09.856673 master-0 kubenswrapper[3958]: W0319 11:51:09.853598 3958 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 19 11:51:09.856673 master-0 kubenswrapper[3958]: W0319 11:51:09.853604 3958 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 19 11:51:09.856673 master-0 kubenswrapper[3958]: W0319 11:51:09.853609 3958 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 19 11:51:09.856673 master-0 kubenswrapper[3958]: W0319 11:51:09.853614 3958 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 19 11:51:09.856673 master-0 kubenswrapper[3958]: W0319 11:51:09.853619 3958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 19 11:51:09.856673 master-0 kubenswrapper[3958]: W0319 11:51:09.853624 3958 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 19 11:51:09.856673 master-0 kubenswrapper[3958]: W0319 11:51:09.853629 3958 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 19 11:51:09.857240 master-0 kubenswrapper[3958]: W0319 11:51:09.853634 3958 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 19 11:51:09.857240 master-0 kubenswrapper[3958]: W0319 11:51:09.853638 3958 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 19 11:51:09.857240 master-0 kubenswrapper[3958]: W0319 11:51:09.853643 3958 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 19 11:51:09.857240 master-0 kubenswrapper[3958]: W0319 11:51:09.853648 3958 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 19 11:51:09.857240 master-0 kubenswrapper[3958]: W0319 11:51:09.853653 3958 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 19 11:51:09.857240 master-0 kubenswrapper[3958]: W0319 11:51:09.853658 3958 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 19 11:51:09.857240 master-0 kubenswrapper[3958]: W0319 11:51:09.853663 3958 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 19 11:51:09.857240 master-0 kubenswrapper[3958]: W0319 11:51:09.853668 3958 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 19 11:51:09.857240 master-0 kubenswrapper[3958]: W0319 11:51:09.853673 3958 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 19 11:51:09.857240 master-0 kubenswrapper[3958]: I0319 11:51:09.853682 3958 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 19 11:51:09.857240 master-0 kubenswrapper[3958]: I0319 11:51:09.855451 3958 server.go:940] "Client rotation is on, will bootstrap in background" Mar 19 11:51:09.859606 master-0 
kubenswrapper[3958]: I0319 11:51:09.859564 3958 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Mar 19 11:51:09.863953 master-0 kubenswrapper[3958]: I0319 11:51:09.863910 3958 server.go:997] "Starting client certificate rotation"
Mar 19 11:51:09.863953 master-0 kubenswrapper[3958]: I0319 11:51:09.863950 3958 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 19 11:51:09.864170 master-0 kubenswrapper[3958]: I0319 11:51:09.864127 3958 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 19 11:51:09.888765 master-0 kubenswrapper[3958]: I0319 11:51:09.888699 3958 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 19 11:51:09.892365 master-0 kubenswrapper[3958]: I0319 11:51:09.892297 3958 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 19 11:51:09.894778 master-0 kubenswrapper[3958]: E0319 11:51:09.894681 3958 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
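The error above is the kubelet's client-certificate bootstrap failing because api-int.sno.openstack.lab:6443 is not answering yet: it wants to POST a CertificateSigningRequest and will keep retrying. The payload it builds is an ordinary X.509 request with the node identity in the subject (CN=system:node:<name>, O=system:nodes is the standard Kubernetes convention). A self-contained sketch of that artifact, stdlib only, with the node name taken from this host and key handling omitted:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
)

func main() {
	// Private key the kubelet would keep under its certificate dir
	// (/var/lib/kubelet/pki in a default layout; illustrative here).
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	// Node client certs use CN=system:node:<name>, O=system:nodes, which is
	// what the CSR approver checks before signing a kubelet client cert.
	tmpl := &x509.CertificateRequest{
		Subject: pkix.Name{
			CommonName:   "system:node:master-0",
			Organization: []string{"system:nodes"},
		},
	}
	der, err := x509.CreateCertificateRequest(rand.Reader, tmpl, key)
	if err != nil {
		panic(err)
	}
	// This PEM block is the kind of payload that gets wrapped in a
	// CertificateSigningRequest object and POSTed to certificates.k8s.io/v1
	// once the API server starts accepting connections.
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE REQUEST", Bytes: der})))
}
```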
Mar 19 11:51:09.910106 master-0 kubenswrapper[3958]: I0319 11:51:09.910029 3958 log.go:25] "Validated CRI v1 runtime API"
Mar 19 11:51:09.916663 master-0 kubenswrapper[3958]: I0319 11:51:09.916619 3958 log.go:25] "Validated CRI v1 image API"
Mar 19 11:51:09.920129 master-0 kubenswrapper[3958]: I0319 11:51:09.920079 3958 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 19 11:51:09.926277 master-0 kubenswrapper[3958]: I0319 11:51:09.926210 3958 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 f39678f0-0749-4469-b061-899c5a9052e6:/dev/vda3]
Mar 19 11:51:09.926277 master-0 kubenswrapper[3958]: I0319 11:51:09.926250 3958 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}]
Mar 19 11:51:09.950131 master-0 kubenswrapper[3958]: I0319 11:51:09.949571 3958 manager.go:217] Machine: {Timestamp:2026-03-19 11:51:09.947236801 +0000 UTC m=+0.620958013 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:42c922df40e540ac85bfc55dec643ba0 SystemUUID:42c922df-40e5-40ac-85bf-c55dec643ba0 BootID:56867831-7a09-49d8-8c88-5a315bbf793a Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:0b:8e:2e Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:be:b5:64:e8:21:b9 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Mar 19 11:51:09.950131 master-0 kubenswrapper[3958]: I0319 11:51:09.950062 3958 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Mar 19 11:51:09.950423 master-0 kubenswrapper[3958]: I0319 11:51:09.950270 3958 manager.go:233] Version: {KernelVersion:5.14.0-427.113.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202603021444-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Mar 19 11:51:09.950716 master-0 kubenswrapper[3958]: I0319 11:51:09.950675 3958 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 19 11:51:09.951012 master-0 kubenswrapper[3958]: I0319 11:51:09.950956 3958 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 19 11:51:09.951348 master-0 kubenswrapper[3958]: I0319 11:51:09.951002 3958 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
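The nodeConfig blob above is worth reading: SystemReserved carves out 500m CPU, 1Gi memory, and 1Gi ephemeral storage for the host, and HardEvictionThresholds is the parsed form of the hard eviction settings (memory.available<100Mi, nodefs.available<10%, nodefs.inodesFree<5%, imagefs.available<15%, imagefs.inodesFree<5%). A sketch of how such thresholds are evaluated, with made-up sample stats; this illustrates the rule, not the eviction manager's actual code:

```go
package main

import "fmt"

// threshold mirrors one HardEvictionThresholds entry above: a signal
// compared against either an absolute quantity (bytes) or a percentage
// of capacity.
type threshold struct {
	signal   string
	quantity int64   // absolute bytes; 0 if percentage-based
	percent  float64 // fraction of capacity; 0 if quantity-based
}

// breached reports whether available falls under the threshold, resolving
// percentages against capacity the way the eviction signals are defined.
func breached(t threshold, available, capacity int64) bool {
	min := t.quantity
	if t.percent > 0 {
		min = int64(t.percent * float64(capacity))
	}
	return available < min
}

func main() {
	ts := []threshold{
		{signal: "memory.available", quantity: 100 << 20}, // 100Mi
		{signal: "nodefs.available", percent: 0.10},
		{signal: "nodefs.inodesFree", percent: 0.05},
		{signal: "imagefs.available", percent: 0.15},
		{signal: "imagefs.inodesFree", percent: 0.05},
	}
	// Sample numbers, not measurements from this node: 200Mi of the 32Gi
	// RAM free, 30GiB of the ~200GiB /var filesystem free.
	stats := map[string][2]int64{
		"memory.available": {200 << 20, 33654124544},
		"nodefs.available": {30 << 30, 214143315968},
	}
	for _, t := range ts {
		if s, ok := stats[t.signal]; ok {
			fmt.Printf("%s breached=%v\n", t.signal, breached(t, s[0], s[1]))
		}
	}
}
```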
server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 19 11:51:09.951621 master-0 kubenswrapper[3958]: I0319 11:51:09.951585 3958 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:51:09.951729 master-0 kubenswrapper[3958]: I0319 11:51:09.951696 3958 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 19 11:51:09.958312 master-0 kubenswrapper[3958]: I0319 11:51:09.958271 3958 kubelet.go:418] "Attempting to sync node with API server" Mar 19 11:51:09.958312 master-0 kubenswrapper[3958]: I0319 11:51:09.958302 3958 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 19 11:51:09.958428 master-0 kubenswrapper[3958]: I0319 11:51:09.958375 3958 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 19 11:51:09.958428 master-0 kubenswrapper[3958]: I0319 11:51:09.958395 3958 kubelet.go:324] "Adding apiserver pod source" Mar 19 11:51:09.958428 master-0 kubenswrapper[3958]: I0319 11:51:09.958410 3958 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 19 11:51:09.963654 master-0 kubenswrapper[3958]: I0319 11:51:09.963598 3958 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 19 11:51:09.966220 master-0 kubenswrapper[3958]: I0319 11:51:09.966178 3958 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 19 11:51:09.966540 master-0 kubenswrapper[3958]: I0319 11:51:09.966501 3958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 19 11:51:09.966540 master-0 kubenswrapper[3958]: I0319 11:51:09.966539 3958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 19 11:51:09.966646 master-0 kubenswrapper[3958]: I0319 11:51:09.966554 3958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 19 11:51:09.966646 master-0 kubenswrapper[3958]: I0319 11:51:09.966576 3958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 19 11:51:09.966713 master-0 kubenswrapper[3958]: I0319 11:51:09.966623 3958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 19 11:51:09.966713 master-0 kubenswrapper[3958]: I0319 11:51:09.966668 3958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 19 11:51:09.966713 master-0 kubenswrapper[3958]: I0319 11:51:09.966687 3958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 19 11:51:09.966713 master-0 kubenswrapper[3958]: I0319 11:51:09.966700 3958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 19 11:51:09.966713 master-0 kubenswrapper[3958]: I0319 11:51:09.966714 3958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 19 11:51:09.966973 master-0 kubenswrapper[3958]: I0319 11:51:09.966736 3958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 19 11:51:09.966973 master-0 kubenswrapper[3958]: I0319 11:51:09.966780 3958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 19 11:51:09.966973 master-0 kubenswrapper[3958]: I0319 11:51:09.966840 3958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 19 11:51:09.967453 master-0 kubenswrapper[3958]: W0319 11:51:09.967374 3958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 11:51:09.967524 master-0 kubenswrapper[3958]: E0319 11:51:09.967486 3958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:51:09.967720 master-0 kubenswrapper[3958]: W0319 11:51:09.967621 3958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 11:51:09.967791 master-0 kubenswrapper[3958]: E0319 11:51:09.967745 3958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:51:09.968261 master-0 kubenswrapper[3958]: I0319 11:51:09.968218 3958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 19 11:51:09.968993 master-0 kubenswrapper[3958]: I0319 11:51:09.968951 3958 server.go:1280] "Started kubelet" Mar 19 11:51:09.970211 master-0 kubenswrapper[3958]: I0319 11:51:09.970131 3958 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 19 11:51:09.970412 master-0 kubenswrapper[3958]: I0319 11:51:09.970169 3958 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 19 11:51:09.970412 master-0 kubenswrapper[3958]: I0319 11:51:09.970306 3958 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 19 11:51:09.970856 master-0 systemd[1]: Started Kubernetes Kubelet. 
Mar 19 11:51:09.971307 master-0 kubenswrapper[3958]: I0319 11:51:09.971261 3958 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 19 11:51:09.971394 master-0 kubenswrapper[3958]: I0319 11:51:09.971297 3958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 11:51:09.973292 master-0 kubenswrapper[3958]: I0319 11:51:09.973231 3958 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 19 11:51:09.973505 master-0 kubenswrapper[3958]: I0319 11:51:09.973458 3958 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 19 11:51:09.974288 master-0 kubenswrapper[3958]: E0319 11:51:09.974265 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:09.974360 master-0 kubenswrapper[3958]: I0319 11:51:09.974297 3958 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 19 11:51:09.974360 master-0 kubenswrapper[3958]: I0319 11:51:09.974304 3958 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 19 11:51:09.974443 master-0 kubenswrapper[3958]: I0319 11:51:09.974409 3958 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 19 11:51:09.975100 master-0 kubenswrapper[3958]: I0319 11:51:09.975060 3958 server.go:449] "Adding debug handlers to kubelet server" Mar 19 11:51:09.981244 master-0 kubenswrapper[3958]: I0319 11:51:09.981183 3958 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 19 11:51:09.981244 master-0 kubenswrapper[3958]: I0319 11:51:09.981229 3958 factory.go:55] Registering systemd factory Mar 19 11:51:09.981244 master-0 kubenswrapper[3958]: I0319 11:51:09.981251 3958 factory.go:221] Registration of the systemd container factory successfully Mar 19 11:51:09.982081 master-0 kubenswrapper[3958]: I0319 11:51:09.981870 3958 reconstruct.go:97] "Volume reconstruction finished" Mar 19 11:51:09.982081 master-0 kubenswrapper[3958]: I0319 11:51:09.981897 3958 reconciler.go:26] "Reconciler: start to sync state" Mar 19 11:51:09.982190 master-0 kubenswrapper[3958]: I0319 11:51:09.982078 3958 factory.go:153] Registering CRI-O factory Mar 19 11:51:09.982190 master-0 kubenswrapper[3958]: I0319 11:51:09.982100 3958 factory.go:221] Registration of the crio container factory successfully Mar 19 11:51:09.982190 master-0 kubenswrapper[3958]: I0319 11:51:09.982124 3958 factory.go:103] Registering Raw factory Mar 19 11:51:09.982190 master-0 kubenswrapper[3958]: I0319 11:51:09.982140 3958 manager.go:1196] Started watching for new ooms in manager Mar 19 11:51:09.982305 master-0 kubenswrapper[3958]: W0319 11:51:09.982172 3958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 11:51:09.982305 master-0 kubenswrapper[3958]: E0319 11:51:09.982245 3958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:51:09.982386 master-0 kubenswrapper[3958]: E0319 11:51:09.982297 3958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Mar 19 11:51:09.982931 master-0 kubenswrapper[3958]: I0319 11:51:09.982905 3958 manager.go:319] Starting recovery of all containers Mar 19 11:51:09.986756 master-0 kubenswrapper[3958]: E0319 11:51:09.986717 3958 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Mar 19 11:51:09.987533 master-0 kubenswrapper[3958]: E0319 11:51:09.982229 3958 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189e3bd08313a03b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:09.968891963 +0000 UTC m=+0.642613175,LastTimestamp:2026-03-19 11:51:09.968891963 +0000 UTC m=+0.642613175,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:09.998102 master-0 kubenswrapper[3958]: I0319 11:51:09.998061 3958 manager.go:324] Recovery completed Mar 19 11:51:10.006540 master-0 kubenswrapper[3958]: I0319 11:51:10.006494 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:10.008347 master-0 kubenswrapper[3958]: I0319 11:51:10.008275 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:10.008347 master-0 kubenswrapper[3958]: I0319 11:51:10.008336 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:10.008347 master-0 kubenswrapper[3958]: I0319 11:51:10.008353 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:10.009573 master-0 kubenswrapper[3958]: I0319 11:51:10.009534 3958 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 19 11:51:10.009573 master-0 kubenswrapper[3958]: I0319 11:51:10.009549 3958 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 19 11:51:10.009761 master-0 kubenswrapper[3958]: I0319 11:51:10.009610 3958 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:51:10.012959 master-0 kubenswrapper[3958]: I0319 11:51:10.012916 3958 policy_none.go:49] "None policy: Start" Mar 19 11:51:10.013678 master-0 kubenswrapper[3958]: I0319 11:51:10.013617 3958 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 19 11:51:10.013678 master-0 kubenswrapper[3958]: I0319 11:51:10.013667 3958 state_mem.go:35] "Initializing new in-memory state store" Mar 19 11:51:10.074984 
master-0 kubenswrapper[3958]: E0319 11:51:10.074909 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:10.075286 master-0 kubenswrapper[3958]: I0319 11:51:10.075243 3958 manager.go:334] "Starting Device Plugin manager" Mar 19 11:51:10.075418 master-0 kubenswrapper[3958]: I0319 11:51:10.075369 3958 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 19 11:51:10.075418 master-0 kubenswrapper[3958]: I0319 11:51:10.075404 3958 server.go:79] "Starting device plugin registration server" Mar 19 11:51:10.076561 master-0 kubenswrapper[3958]: I0319 11:51:10.076247 3958 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 19 11:51:10.076561 master-0 kubenswrapper[3958]: I0319 11:51:10.076283 3958 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 19 11:51:10.076561 master-0 kubenswrapper[3958]: I0319 11:51:10.076493 3958 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 19 11:51:10.076704 master-0 kubenswrapper[3958]: I0319 11:51:10.076612 3958 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 19 11:51:10.076704 master-0 kubenswrapper[3958]: I0319 11:51:10.076624 3958 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 19 11:51:10.078080 master-0 kubenswrapper[3958]: E0319 11:51:10.078052 3958 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 19 11:51:10.117433 master-0 kubenswrapper[3958]: I0319 11:51:10.117338 3958 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 19 11:51:10.122654 master-0 kubenswrapper[3958]: I0319 11:51:10.120813 3958 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 19 11:51:10.122654 master-0 kubenswrapper[3958]: I0319 11:51:10.120857 3958 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 19 11:51:10.122654 master-0 kubenswrapper[3958]: I0319 11:51:10.120877 3958 kubelet.go:2335] "Starting kubelet main sync loop" Mar 19 11:51:10.122654 master-0 kubenswrapper[3958]: E0319 11:51:10.121018 3958 kubelet.go:2359] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Mar 19 11:51:10.122654 master-0 kubenswrapper[3958]: W0319 11:51:10.121774 3958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 11:51:10.122654 master-0 kubenswrapper[3958]: E0319 11:51:10.121844 3958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:51:10.135693 master-0 kubenswrapper[3958]: E0319 11:51:10.135529 3958 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189e3bd08313a03b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:09.968891963 +0000 UTC m=+0.642613175,LastTimestamp:2026-03-19 11:51:09.968891963 +0000 UTC m=+0.642613175,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:10.177126 master-0 kubenswrapper[3958]: I0319 11:51:10.177053 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:10.178276 master-0 kubenswrapper[3958]: I0319 11:51:10.178235 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:10.178276 master-0 kubenswrapper[3958]: I0319 11:51:10.178273 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:10.178380 master-0 kubenswrapper[3958]: I0319 11:51:10.178282 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:10.178380 master-0 kubenswrapper[3958]: I0319 11:51:10.178316 3958 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 19 11:51:10.179278 master-0 kubenswrapper[3958]: E0319 11:51:10.179221 3958 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 19 11:51:10.184001 master-0 kubenswrapper[3958]: E0319 11:51:10.183954 3958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Mar 19 11:51:10.221212 master-0 kubenswrapper[3958]: I0319 11:51:10.221105 3958 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0"] Mar 19 11:51:10.221484 master-0 kubenswrapper[3958]: I0319 11:51:10.221234 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:10.222623 master-0 kubenswrapper[3958]: I0319 11:51:10.222525 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:10.222965 master-0 kubenswrapper[3958]: I0319 11:51:10.222631 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:10.222965 master-0 kubenswrapper[3958]: I0319 11:51:10.222652 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:10.222965 master-0 kubenswrapper[3958]: I0319 11:51:10.222932 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:10.223212 master-0 kubenswrapper[3958]: I0319 11:51:10.223177 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 11:51:10.223302 master-0 kubenswrapper[3958]: I0319 11:51:10.223246 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:10.224029 master-0 kubenswrapper[3958]: I0319 11:51:10.223975 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:10.224029 master-0 kubenswrapper[3958]: I0319 11:51:10.224025 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:10.224260 master-0 kubenswrapper[3958]: I0319 11:51:10.224047 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:10.224260 master-0 kubenswrapper[3958]: I0319 11:51:10.224132 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:10.224260 master-0 kubenswrapper[3958]: I0319 11:51:10.224156 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:10.224260 master-0 kubenswrapper[3958]: I0319 11:51:10.224170 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:10.224260 master-0 kubenswrapper[3958]: I0319 11:51:10.224185 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:10.224626 master-0 kubenswrapper[3958]: I0319 11:51:10.224541 3958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:51:10.224626 master-0 kubenswrapper[3958]: I0319 11:51:10.224565 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:10.225229 master-0 kubenswrapper[3958]: I0319 11:51:10.225177 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:10.225229 master-0 kubenswrapper[3958]: I0319 11:51:10.225213 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:10.225229 master-0 kubenswrapper[3958]: I0319 11:51:10.225226 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:10.227526 master-0 kubenswrapper[3958]: I0319 11:51:10.227469 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:10.227682 master-0 kubenswrapper[3958]: I0319 11:51:10.227535 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:10.227682 master-0 kubenswrapper[3958]: I0319 11:51:10.227550 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:10.227682 master-0 kubenswrapper[3958]: I0319 11:51:10.227676 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:10.228203 master-0 kubenswrapper[3958]: I0319 11:51:10.228137 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 19 11:51:10.228203 master-0 kubenswrapper[3958]: I0319 11:51:10.228192 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:10.229058 master-0 kubenswrapper[3958]: I0319 11:51:10.229027 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:10.229111 master-0 kubenswrapper[3958]: I0319 11:51:10.229091 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:10.229111 master-0 kubenswrapper[3958]: I0319 11:51:10.229106 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:10.229261 master-0 kubenswrapper[3958]: I0319 11:51:10.229229 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:10.229292 master-0 kubenswrapper[3958]: I0319 11:51:10.229266 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:10.229292 master-0 kubenswrapper[3958]: I0319 11:51:10.229280 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:10.229438 master-0 kubenswrapper[3958]: I0319 11:51:10.229415 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:10.229572 master-0 kubenswrapper[3958]: I0319 11:51:10.229546 3958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 19 11:51:10.229602 master-0 kubenswrapper[3958]: I0319 11:51:10.229589 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:10.230218 master-0 kubenswrapper[3958]: I0319 11:51:10.230192 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:10.230255 master-0 kubenswrapper[3958]: I0319 11:51:10.230228 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:10.230255 master-0 kubenswrapper[3958]: I0319 11:51:10.230242 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:10.230411 master-0 kubenswrapper[3958]: I0319 11:51:10.230388 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 19 11:51:10.230442 master-0 kubenswrapper[3958]: I0319 11:51:10.230425 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:10.230442 master-0 kubenswrapper[3958]: I0319 11:51:10.230426 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:10.230495 master-0 kubenswrapper[3958]: I0319 11:51:10.230451 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:10.230495 master-0 kubenswrapper[3958]: I0319 11:51:10.230468 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:10.231199 master-0 kubenswrapper[3958]: I0319 11:51:10.231166 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:10.231250 master-0 kubenswrapper[3958]: I0319 11:51:10.231224 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:10.231250 master-0 kubenswrapper[3958]: I0319 11:51:10.231242 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:10.284845 master-0 kubenswrapper[3958]: I0319 11:51:10.284464 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:51:10.284845 master-0 kubenswrapper[3958]: I0319 11:51:10.284846 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:51:10.285130 master-0 kubenswrapper[3958]: I0319 11:51:10.284874 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: 
\"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 11:51:10.285130 master-0 kubenswrapper[3958]: I0319 11:51:10.284908 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:51:10.285130 master-0 kubenswrapper[3958]: I0319 11:51:10.284933 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:51:10.285130 master-0 kubenswrapper[3958]: I0319 11:51:10.284955 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:51:10.285130 master-0 kubenswrapper[3958]: I0319 11:51:10.284980 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 19 11:51:10.285130 master-0 kubenswrapper[3958]: I0319 11:51:10.285034 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 11:51:10.285130 master-0 kubenswrapper[3958]: I0319 11:51:10.285056 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 11:51:10.285130 master-0 kubenswrapper[3958]: I0319 11:51:10.285072 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 19 11:51:10.285130 master-0 kubenswrapper[3958]: I0319 11:51:10.285090 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 19 11:51:10.285130 master-0 
kubenswrapper[3958]: I0319 11:51:10.285107 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 19 11:51:10.285528 master-0 kubenswrapper[3958]: I0319 11:51:10.285191 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 11:51:10.285528 master-0 kubenswrapper[3958]: I0319 11:51:10.285249 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 11:51:10.285528 master-0 kubenswrapper[3958]: I0319 11:51:10.285300 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 19 11:51:10.285528 master-0 kubenswrapper[3958]: I0319 11:51:10.285336 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 19 11:51:10.285528 master-0 kubenswrapper[3958]: I0319 11:51:10.285366 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 11:51:10.380089 master-0 kubenswrapper[3958]: I0319 11:51:10.379939 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 19 11:51:10.381559 master-0 kubenswrapper[3958]: I0319 11:51:10.381502 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 19 11:51:10.381638 master-0 kubenswrapper[3958]: I0319 11:51:10.381571 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 19 11:51:10.381638 master-0 kubenswrapper[3958]: I0319 11:51:10.381597 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 19 11:51:10.381723 master-0 kubenswrapper[3958]: I0319 11:51:10.381679 3958 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 19 11:51:10.383028 master-0 kubenswrapper[3958]: E0319 11:51:10.382943 3958 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
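Each reconciler line above carries a UniqueName such as kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets: for a pod-scoped plugin like hostPath this reads as plugin name, pod UID, and the volume's name from the static pod manifest, which is how the desired state of world keys volumes before the MountVolume.SetUp calls that follow. A sketch reproducing those strings; the format here is read off the log lines themselves, not taken from the volume manager's source:

```go
package main

import "fmt"

// uniqueVolumeName reproduces the UniqueName strings in the reconciler
// entries above: plugin name, then pod UID and the volume name from the
// manifest, joined with a hyphen.
func uniqueVolumeName(plugin, podUID, volume string) string {
	return fmt.Sprintf("%s/%s-%s", plugin, podUID, volume)
}

func main() {
	// UID and volume names taken from the bootstrap-kube-apiserver-master-0
	// entries in this log.
	for _, v := range []string{"secrets", "logs", "ssl-certs-host", "audit-dir", "config"} {
		fmt.Println(uniqueVolumeName("kubernetes.io/host-path",
			"49fac1b46a11e49501805e891baae4a9", v))
	}
}
```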
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 19 11:51:10.386242 master-0 kubenswrapper[3958]: I0319 11:51:10.386146 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:51:10.386242 master-0 kubenswrapper[3958]: I0319 11:51:10.386230 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 19 11:51:10.386567 master-0 kubenswrapper[3958]: I0319 11:51:10.386286 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 11:51:10.386567 master-0 kubenswrapper[3958]: I0319 11:51:10.386331 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 11:51:10.386567 master-0 kubenswrapper[3958]: I0319 11:51:10.386357 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:51:10.386567 master-0 kubenswrapper[3958]: I0319 11:51:10.386424 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 11:51:10.386567 master-0 kubenswrapper[3958]: I0319 11:51:10.386488 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:51:10.386857 master-0 kubenswrapper[3958]: I0319 11:51:10.386578 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:51:10.386857 master-0 kubenswrapper[3958]: I0319 11:51:10.386586 3958 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 11:51:10.386857 master-0 kubenswrapper[3958]: I0319 11:51:10.386650 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 19 11:51:10.386857 master-0 kubenswrapper[3958]: I0319 11:51:10.386697 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:51:10.386857 master-0 kubenswrapper[3958]: I0319 11:51:10.386747 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 19 11:51:10.386857 master-0 kubenswrapper[3958]: I0319 11:51:10.386822 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 11:51:10.387279 master-0 kubenswrapper[3958]: I0319 11:51:10.386872 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:51:10.387279 master-0 kubenswrapper[3958]: I0319 11:51:10.387067 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 11:51:10.387279 master-0 kubenswrapper[3958]: I0319 11:51:10.387085 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 19 11:51:10.387279 master-0 kubenswrapper[3958]: I0319 11:51:10.387159 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 11:51:10.387279 master-0 kubenswrapper[3958]: I0319 11:51:10.387204 3958 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 11:51:10.387279 master-0 kubenswrapper[3958]: I0319 11:51:10.387209 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 19 11:51:10.387279 master-0 kubenswrapper[3958]: I0319 11:51:10.387263 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 19 11:51:10.387279 master-0 kubenswrapper[3958]: I0319 11:51:10.387271 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 19 11:51:10.387643 master-0 kubenswrapper[3958]: I0319 11:51:10.387300 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 11:51:10.387643 master-0 kubenswrapper[3958]: I0319 11:51:10.387352 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 11:51:10.387643 master-0 kubenswrapper[3958]: I0319 11:51:10.387412 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 19 11:51:10.387643 master-0 kubenswrapper[3958]: I0319 11:51:10.387473 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 19 11:51:10.387643 master-0 kubenswrapper[3958]: I0319 11:51:10.387505 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 19 11:51:10.387643 master-0 kubenswrapper[3958]: I0319 11:51:10.387549 3958 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 19 11:51:10.387643 master-0 kubenswrapper[3958]: I0319 11:51:10.387536 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 11:51:10.387643 master-0 kubenswrapper[3958]: I0319 11:51:10.387639 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 11:51:10.387963 master-0 kubenswrapper[3958]: I0319 11:51:10.387643 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 19 11:51:10.387963 master-0 kubenswrapper[3958]: I0319 11:51:10.387660 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:51:10.387963 master-0 kubenswrapper[3958]: I0319 11:51:10.387699 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:51:10.387963 master-0 kubenswrapper[3958]: I0319 11:51:10.387709 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:51:10.387963 master-0 kubenswrapper[3958]: I0319 11:51:10.387773 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:51:10.560732 master-0 kubenswrapper[3958]: I0319 11:51:10.560557 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 11:51:10.571329 master-0 kubenswrapper[3958]: I0319 11:51:10.571278 3958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:51:10.585508 master-0 kubenswrapper[3958]: E0319 11:51:10.585432 3958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Mar 19 11:51:10.592885 master-0 kubenswrapper[3958]: I0319 11:51:10.592815 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 19 11:51:10.619078 master-0 kubenswrapper[3958]: I0319 11:51:10.619013 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 19 11:51:10.628374 master-0 kubenswrapper[3958]: I0319 11:51:10.628309 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 19 11:51:10.783281 master-0 kubenswrapper[3958]: I0319 11:51:10.783183 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:10.784842 master-0 kubenswrapper[3958]: I0319 11:51:10.784775 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:10.784909 master-0 kubenswrapper[3958]: I0319 11:51:10.784871 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:10.784909 master-0 kubenswrapper[3958]: I0319 11:51:10.784892 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:10.784982 master-0 kubenswrapper[3958]: I0319 11:51:10.784972 3958 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 19 11:51:10.786203 master-0 kubenswrapper[3958]: E0319 11:51:10.786137 3958 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 19 11:51:10.973415 master-0 kubenswrapper[3958]: I0319 11:51:10.973325 3958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 11:51:11.047026 master-0 kubenswrapper[3958]: W0319 11:51:11.046876 3958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 11:51:11.047026 master-0 kubenswrapper[3958]: E0319 11:51:11.047018 3958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:51:11.176130 master-0 kubenswrapper[3958]: W0319 11:51:11.176064 3958 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46f265536aba6292ead501bc9b49f327.slice/crio-b08462654300221b81e734b82711f8871d4674a9fca01ad1cc20011ae2d1abfa WatchSource:0}: Error finding container b08462654300221b81e734b82711f8871d4674a9fca01ad1cc20011ae2d1abfa: Status 404 returned error can't find the container with id b08462654300221b81e734b82711f8871d4674a9fca01ad1cc20011ae2d1abfa Mar 19 11:51:11.182727 master-0 kubenswrapper[3958]: I0319 11:51:11.182693 3958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 19 11:51:11.242202 master-0 kubenswrapper[3958]: W0319 11:51:11.242073 3958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 11:51:11.242380 master-0 kubenswrapper[3958]: E0319 11:51:11.242217 3958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:51:11.371563 master-0 kubenswrapper[3958]: W0319 11:51:11.371364 3958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 11:51:11.371563 master-0 kubenswrapper[3958]: E0319 11:51:11.371459 3958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:51:11.387475 master-0 kubenswrapper[3958]: E0319 11:51:11.387373 3958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Mar 19 11:51:11.417040 master-0 kubenswrapper[3958]: W0319 11:51:11.416943 3958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1249822f86f23526277d165c0d5d3c19.slice/crio-e95d63ff24c648e1680e38f824e92cec0fcea7bc20cdade312b98b0468aad916 WatchSource:0}: Error finding container e95d63ff24c648e1680e38f824e92cec0fcea7bc20cdade312b98b0468aad916: Status 404 returned error can't find the container with id e95d63ff24c648e1680e38f824e92cec0fcea7bc20cdade312b98b0468aad916 Mar 19 11:51:11.502024 master-0 kubenswrapper[3958]: W0319 11:51:11.501868 3958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 11:51:11.502024 master-0 kubenswrapper[3958]: E0319 11:51:11.502010 3958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list 
*v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:51:11.586842 master-0 kubenswrapper[3958]: I0319 11:51:11.586750 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:11.588237 master-0 kubenswrapper[3958]: I0319 11:51:11.588187 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:11.588303 master-0 kubenswrapper[3958]: I0319 11:51:11.588248 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:11.588303 master-0 kubenswrapper[3958]: I0319 11:51:11.588261 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:11.588367 master-0 kubenswrapper[3958]: I0319 11:51:11.588324 3958 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 19 11:51:11.589291 master-0 kubenswrapper[3958]: E0319 11:51:11.589240 3958 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 19 11:51:11.608589 master-0 kubenswrapper[3958]: W0319 11:51:11.608519 3958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49fac1b46a11e49501805e891baae4a9.slice/crio-a1783b50c5a08e2a42241bed3f2df9ef9e7315549e4393a5e98fdcdce6ecef6e WatchSource:0}: Error finding container a1783b50c5a08e2a42241bed3f2df9ef9e7315549e4393a5e98fdcdce6ecef6e: Status 404 returned error can't find the container with id a1783b50c5a08e2a42241bed3f2df9ef9e7315549e4393a5e98fdcdce6ecef6e Mar 19 11:51:11.753894 master-0 kubenswrapper[3958]: W0319 11:51:11.753815 3958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc83737980b9ee109184b1d78e942cf36.slice/crio-48efbe72c10829dd5908b740a4651763088ff7358d327f0b015844979a99b5dd WatchSource:0}: Error finding container 48efbe72c10829dd5908b740a4651763088ff7358d327f0b015844979a99b5dd: Status 404 returned error can't find the container with id 48efbe72c10829dd5908b740a4651763088ff7358d327f0b015844979a99b5dd Mar 19 11:51:11.919388 master-0 kubenswrapper[3958]: I0319 11:51:11.919270 3958 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 19 11:51:11.920595 master-0 kubenswrapper[3958]: E0319 11:51:11.920524 3958 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:51:11.973175 master-0 kubenswrapper[3958]: I0319 11:51:11.973063 3958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 11:51:12.129704 master-0 kubenswrapper[3958]: I0319 11:51:12.129410 
3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"48efbe72c10829dd5908b740a4651763088ff7358d327f0b015844979a99b5dd"} Mar 19 11:51:12.130633 master-0 kubenswrapper[3958]: I0319 11:51:12.130556 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"a1783b50c5a08e2a42241bed3f2df9ef9e7315549e4393a5e98fdcdce6ecef6e"} Mar 19 11:51:12.131465 master-0 kubenswrapper[3958]: I0319 11:51:12.131382 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"e95d63ff24c648e1680e38f824e92cec0fcea7bc20cdade312b98b0468aad916"} Mar 19 11:51:12.132877 master-0 kubenswrapper[3958]: I0319 11:51:12.132832 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"b08462654300221b81e734b82711f8871d4674a9fca01ad1cc20011ae2d1abfa"} Mar 19 11:51:12.677227 master-0 kubenswrapper[3958]: W0319 11:51:12.677154 3958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 11:51:12.677227 master-0 kubenswrapper[3958]: E0319 11:51:12.677213 3958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:51:12.746816 master-0 kubenswrapper[3958]: W0319 11:51:12.746758 3958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd664a6d0d2a24360dee10612610f1b59.slice/crio-842d46230cd4097ecd49786313f777a88243300f4db6d95963150d13dc2d40af WatchSource:0}: Error finding container 842d46230cd4097ecd49786313f777a88243300f4db6d95963150d13dc2d40af: Status 404 returned error can't find the container with id 842d46230cd4097ecd49786313f777a88243300f4db6d95963150d13dc2d40af Mar 19 11:51:12.972663 master-0 kubenswrapper[3958]: I0319 11:51:12.972560 3958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 11:51:12.988996 master-0 kubenswrapper[3958]: E0319 11:51:12.988919 3958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Mar 19 11:51:13.136243 master-0 kubenswrapper[3958]: I0319 11:51:13.136155 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" 
event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"842d46230cd4097ecd49786313f777a88243300f4db6d95963150d13dc2d40af"} Mar 19 11:51:13.189664 master-0 kubenswrapper[3958]: I0319 11:51:13.189562 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:13.190578 master-0 kubenswrapper[3958]: I0319 11:51:13.190542 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:13.190578 master-0 kubenswrapper[3958]: I0319 11:51:13.190574 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:13.190578 master-0 kubenswrapper[3958]: I0319 11:51:13.190584 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:13.190751 master-0 kubenswrapper[3958]: I0319 11:51:13.190634 3958 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 19 11:51:13.191467 master-0 kubenswrapper[3958]: E0319 11:51:13.191402 3958 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 19 11:51:13.509506 master-0 kubenswrapper[3958]: W0319 11:51:13.509440 3958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 11:51:13.509506 master-0 kubenswrapper[3958]: E0319 11:51:13.509508 3958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:51:13.585641 master-0 kubenswrapper[3958]: W0319 11:51:13.585600 3958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 11:51:13.585740 master-0 kubenswrapper[3958]: E0319 11:51:13.585656 3958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:51:13.863755 master-0 kubenswrapper[3958]: W0319 11:51:13.863687 3958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 11:51:13.863944 master-0 kubenswrapper[3958]: E0319 11:51:13.863781 3958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 
192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:51:13.973372 master-0 kubenswrapper[3958]: I0319 11:51:13.973314 3958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 11:51:14.140524 master-0 kubenswrapper[3958]: I0319 11:51:14.140403 3958 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="b1a54e1d5a4e1d27db12da7c6949a0237da9f713c6a17f5af4237b1c8b03cbfa" exitCode=0 Mar 19 11:51:14.140524 master-0 kubenswrapper[3958]: I0319 11:51:14.140465 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"b1a54e1d5a4e1d27db12da7c6949a0237da9f713c6a17f5af4237b1c8b03cbfa"} Mar 19 11:51:14.141551 master-0 kubenswrapper[3958]: I0319 11:51:14.140535 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:14.142773 master-0 kubenswrapper[3958]: I0319 11:51:14.142715 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:14.142872 master-0 kubenswrapper[3958]: I0319 11:51:14.142788 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:14.142872 master-0 kubenswrapper[3958]: I0319 11:51:14.142822 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:14.972786 master-0 kubenswrapper[3958]: I0319 11:51:14.972720 3958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 11:51:15.145447 master-0 kubenswrapper[3958]: I0319 11:51:15.145390 3958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/0.log" Mar 19 11:51:15.146220 master-0 kubenswrapper[3958]: I0319 11:51:15.146159 3958 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="021729fe020174efd3381f70fa278842613e6cdf62de5a45f0ec6aa0c5f31ae9" exitCode=1 Mar 19 11:51:15.146299 master-0 kubenswrapper[3958]: I0319 11:51:15.146238 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"021729fe020174efd3381f70fa278842613e6cdf62de5a45f0ec6aa0c5f31ae9"} Mar 19 11:51:15.146299 master-0 kubenswrapper[3958]: I0319 11:51:15.146256 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:15.147180 master-0 kubenswrapper[3958]: I0319 11:51:15.147153 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:15.147231 master-0 kubenswrapper[3958]: I0319 11:51:15.147190 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:15.147231 master-0 
kubenswrapper[3958]: I0319 11:51:15.147200 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:15.147682 master-0 kubenswrapper[3958]: I0319 11:51:15.147661 3958 scope.go:117] "RemoveContainer" containerID="021729fe020174efd3381f70fa278842613e6cdf62de5a45f0ec6aa0c5f31ae9" Mar 19 11:51:15.973124 master-0 kubenswrapper[3958]: I0319 11:51:15.973046 3958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 11:51:16.175734 master-0 kubenswrapper[3958]: I0319 11:51:16.175158 3958 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 19 11:51:16.177887 master-0 kubenswrapper[3958]: E0319 11:51:16.177664 3958 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:51:16.190520 master-0 kubenswrapper[3958]: E0319 11:51:16.190440 3958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Mar 19 11:51:16.266907 master-0 kubenswrapper[3958]: W0319 11:51:16.266834 3958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 11:51:16.266976 master-0 kubenswrapper[3958]: E0319 11:51:16.266926 3958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:51:16.391908 master-0 kubenswrapper[3958]: I0319 11:51:16.391861 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:16.393667 master-0 kubenswrapper[3958]: I0319 11:51:16.393628 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:16.393731 master-0 kubenswrapper[3958]: I0319 11:51:16.393709 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:16.393731 master-0 kubenswrapper[3958]: I0319 11:51:16.393722 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:16.393835 master-0 kubenswrapper[3958]: I0319 11:51:16.393815 3958 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 19 11:51:16.395014 master-0 kubenswrapper[3958]: E0319 11:51:16.394977 3958 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 19 11:51:16.973215 master-0 kubenswrapper[3958]: I0319 11:51:16.973147 3958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 11:51:17.973056 master-0 kubenswrapper[3958]: I0319 11:51:17.973004 3958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 11:51:18.128431 master-0 kubenswrapper[3958]: W0319 11:51:18.128338 3958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 11:51:18.128644 master-0 kubenswrapper[3958]: E0319 11:51:18.128449 3958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:51:18.973556 master-0 kubenswrapper[3958]: I0319 11:51:18.973480 3958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 11:51:19.006188 master-0 kubenswrapper[3958]: W0319 11:51:19.006066 3958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 11:51:19.006188 master-0 kubenswrapper[3958]: E0319 11:51:19.006154 3958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:51:19.279946 master-0 kubenswrapper[3958]: W0319 11:51:19.279642 3958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 11:51:19.279946 master-0 kubenswrapper[3958]: E0319 11:51:19.279791 3958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:51:19.973996 master-0 kubenswrapper[3958]: I0319 11:51:19.973894 3958 csi_plugin.go:884] Failed to contact API server when waiting for 
CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 11:51:20.079757 master-0 kubenswrapper[3958]: E0319 11:51:20.078311 3958 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 19 11:51:20.137676 master-0 kubenswrapper[3958]: E0319 11:51:20.137494 3958 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189e3bd08313a03b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:09.968891963 +0000 UTC m=+0.642613175,LastTimestamp:2026-03-19 11:51:09.968891963 +0000 UTC m=+0.642613175,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:20.161293 master-0 kubenswrapper[3958]: I0319 11:51:20.161250 3958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/0.log" Mar 19 11:51:20.162986 master-0 kubenswrapper[3958]: I0319 11:51:20.162939 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"d8d6463706a002922b6bf91885e1b00e6557f01fc64e8ab28d2403acb657b68f"} Mar 19 11:51:20.163116 master-0 kubenswrapper[3958]: I0319 11:51:20.163093 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:20.163941 master-0 kubenswrapper[3958]: I0319 11:51:20.163914 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:20.163992 master-0 kubenswrapper[3958]: I0319 11:51:20.163947 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:20.163992 master-0 kubenswrapper[3958]: I0319 11:51:20.163960 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:20.973877 master-0 kubenswrapper[3958]: I0319 11:51:20.973771 3958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 11:51:21.167515 master-0 kubenswrapper[3958]: I0319 11:51:21.167462 3958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/1.log" Mar 19 11:51:21.168193 master-0 kubenswrapper[3958]: I0319 11:51:21.167955 3958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/0.log" Mar 19 11:51:21.168670 master-0 
kubenswrapper[3958]: I0319 11:51:21.168628 3958 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="d8d6463706a002922b6bf91885e1b00e6557f01fc64e8ab28d2403acb657b68f" exitCode=1 Mar 19 11:51:21.168704 master-0 kubenswrapper[3958]: I0319 11:51:21.168680 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"d8d6463706a002922b6bf91885e1b00e6557f01fc64e8ab28d2403acb657b68f"} Mar 19 11:51:21.168815 master-0 kubenswrapper[3958]: I0319 11:51:21.168768 3958 scope.go:117] "RemoveContainer" containerID="021729fe020174efd3381f70fa278842613e6cdf62de5a45f0ec6aa0c5f31ae9" Mar 19 11:51:21.168862 master-0 kubenswrapper[3958]: I0319 11:51:21.168838 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:21.170117 master-0 kubenswrapper[3958]: I0319 11:51:21.170091 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:21.170169 master-0 kubenswrapper[3958]: I0319 11:51:21.170122 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:21.170169 master-0 kubenswrapper[3958]: I0319 11:51:21.170134 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:21.170618 master-0 kubenswrapper[3958]: I0319 11:51:21.170595 3958 scope.go:117] "RemoveContainer" containerID="d8d6463706a002922b6bf91885e1b00e6557f01fc64e8ab28d2403acb657b68f" Mar 19 11:51:21.171189 master-0 kubenswrapper[3958]: E0319 11:51:21.170870 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19" Mar 19 11:51:21.171304 master-0 kubenswrapper[3958]: I0319 11:51:21.171276 3958 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="f7123f20a535bea151420277445f140ddc0e3200c0d15a65bcdb6b9d86c90ca9" exitCode=1 Mar 19 11:51:21.171355 master-0 kubenswrapper[3958]: I0319 11:51:21.171336 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"f7123f20a535bea151420277445f140ddc0e3200c0d15a65bcdb6b9d86c90ca9"} Mar 19 11:51:21.173678 master-0 kubenswrapper[3958]: I0319 11:51:21.173628 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"1f3d2affaef2ec02f4c8910d24c25e42512158f294dc5c7de4fd47923e7552fe"} Mar 19 11:51:21.173678 master-0 kubenswrapper[3958]: I0319 11:51:21.173651 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:21.173678 master-0 kubenswrapper[3958]: I0319 11:51:21.173662 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" 
event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"5966cfc73c8dc098af4cd51014c727f29c95ad4372f9a6d75305e51f28be76da"} Mar 19 11:51:21.174872 master-0 kubenswrapper[3958]: I0319 11:51:21.174816 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:21.174872 master-0 kubenswrapper[3958]: I0319 11:51:21.174839 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:21.174872 master-0 kubenswrapper[3958]: I0319 11:51:21.174848 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:21.175460 master-0 kubenswrapper[3958]: I0319 11:51:21.175416 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:21.175460 master-0 kubenswrapper[3958]: I0319 11:51:21.175432 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"6606dc49963e1cc0f10c3000efffd7cbb91c76beb712be6d1c6cb91c1b4a7c79"} Mar 19 11:51:21.176242 master-0 kubenswrapper[3958]: I0319 11:51:21.176206 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:21.176242 master-0 kubenswrapper[3958]: I0319 11:51:21.176233 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:21.176242 master-0 kubenswrapper[3958]: I0319 11:51:21.176243 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:21.177089 master-0 kubenswrapper[3958]: I0319 11:51:21.177061 3958 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="95a5e59caf12dcb834fa10b5b5af9755159f99a81152a1ebbfb9f9785ea5edff" exitCode=0 Mar 19 11:51:21.177130 master-0 kubenswrapper[3958]: I0319 11:51:21.177090 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerDied","Data":"95a5e59caf12dcb834fa10b5b5af9755159f99a81152a1ebbfb9f9785ea5edff"} Mar 19 11:51:21.177162 master-0 kubenswrapper[3958]: I0319 11:51:21.177130 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:21.177726 master-0 kubenswrapper[3958]: I0319 11:51:21.177698 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:21.177772 master-0 kubenswrapper[3958]: I0319 11:51:21.177727 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:21.177772 master-0 kubenswrapper[3958]: I0319 11:51:21.177738 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:21.182910 master-0 kubenswrapper[3958]: I0319 11:51:21.182887 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:21.183819 master-0 kubenswrapper[3958]: I0319 11:51:21.183768 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:21.183819 master-0 
kubenswrapper[3958]: I0319 11:51:21.183819 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:21.183898 master-0 kubenswrapper[3958]: I0319 11:51:21.183832 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:22.188632 master-0 kubenswrapper[3958]: I0319 11:51:22.188585 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"f347ebf4af2e430c7010deb32f74eaaa375be42bd1cb0fd78e647b0e4fd96480"} Mar 19 11:51:22.190163 master-0 kubenswrapper[3958]: I0319 11:51:22.190141 3958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/1.log" Mar 19 11:51:22.191389 master-0 kubenswrapper[3958]: I0319 11:51:22.191362 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:22.191445 master-0 kubenswrapper[3958]: I0319 11:51:22.191410 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:22.191492 master-0 kubenswrapper[3958]: I0319 11:51:22.191447 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:22.194600 master-0 kubenswrapper[3958]: I0319 11:51:22.194561 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:22.194600 master-0 kubenswrapper[3958]: I0319 11:51:22.194593 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:22.194600 master-0 kubenswrapper[3958]: I0319 11:51:22.194589 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:22.194741 master-0 kubenswrapper[3958]: I0319 11:51:22.194620 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:22.194741 master-0 kubenswrapper[3958]: I0319 11:51:22.194628 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:22.194741 master-0 kubenswrapper[3958]: I0319 11:51:22.194602 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:22.194741 master-0 kubenswrapper[3958]: I0319 11:51:22.194727 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:22.194894 master-0 kubenswrapper[3958]: I0319 11:51:22.194746 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:22.194894 master-0 kubenswrapper[3958]: I0319 11:51:22.194756 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:22.195091 master-0 kubenswrapper[3958]: I0319 11:51:22.195073 3958 scope.go:117] "RemoveContainer" containerID="d8d6463706a002922b6bf91885e1b00e6557f01fc64e8ab28d2403acb657b68f" Mar 19 11:51:22.195222 master-0 kubenswrapper[3958]: E0319 11:51:22.195200 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19" Mar 19 11:51:22.795950 master-0 kubenswrapper[3958]: I0319 11:51:22.795365 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:22.796500 master-0 kubenswrapper[3958]: I0319 11:51:22.796444 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:22.796579 master-0 kubenswrapper[3958]: I0319 11:51:22.796510 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:22.796579 master-0 kubenswrapper[3958]: I0319 11:51:22.796536 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:22.796669 master-0 kubenswrapper[3958]: I0319 11:51:22.796618 3958 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 19 11:51:22.871178 master-0 kubenswrapper[3958]: I0319 11:51:22.871102 3958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 11:51:22.871178 master-0 kubenswrapper[3958]: E0319 11:51:22.871104 3958 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 19 11:51:22.871494 master-0 kubenswrapper[3958]: E0319 11:51:22.871192 3958 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Mar 19 11:51:22.975256 master-0 kubenswrapper[3958]: I0319 11:51:22.975107 3958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 11:51:23.976733 master-0 kubenswrapper[3958]: I0319 11:51:23.976692 3958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 11:51:24.198225 master-0 kubenswrapper[3958]: I0319 11:51:24.198181 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"37d48d754396bd1bc2527510939d9c6fc67f46e6581c207179d0cd71cd638e7d"} Mar 19 11:51:24.198398 master-0 kubenswrapper[3958]: I0319 11:51:24.198276 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:24.198892 master-0 kubenswrapper[3958]: I0319 11:51:24.198878 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:24.198959 
master-0 kubenswrapper[3958]: I0319 11:51:24.198902 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:24.198959 master-0 kubenswrapper[3958]: I0319 11:51:24.198911 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:24.199132 master-0 kubenswrapper[3958]: I0319 11:51:24.199121 3958 scope.go:117] "RemoveContainer" containerID="f7123f20a535bea151420277445f140ddc0e3200c0d15a65bcdb6b9d86c90ca9" Mar 19 11:51:24.216592 master-0 kubenswrapper[3958]: I0319 11:51:24.216551 3958 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 19 11:51:24.234865 master-0 kubenswrapper[3958]: I0319 11:51:24.234776 3958 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 19 11:51:24.943290 master-0 kubenswrapper[3958]: I0319 11:51:24.943170 3958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:51:24.978243 master-0 kubenswrapper[3958]: I0319 11:51:24.978185 3958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 11:51:25.432086 master-0 kubenswrapper[3958]: I0319 11:51:25.432026 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"570446cbe4fe51c612e56ccc1c781b010d9f51a4701a23ab3e0e9c3afd18acfd"} Mar 19 11:51:25.432302 master-0 kubenswrapper[3958]: I0319 11:51:25.432127 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:25.433178 master-0 kubenswrapper[3958]: I0319 11:51:25.433112 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:25.433178 master-0 kubenswrapper[3958]: I0319 11:51:25.433138 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:25.433178 master-0 kubenswrapper[3958]: I0319 11:51:25.433148 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:25.977065 master-0 kubenswrapper[3958]: I0319 11:51:25.977025 3958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 11:51:26.177267 master-0 kubenswrapper[3958]: W0319 11:51:26.177197 3958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 19 11:51:26.177267 master-0 kubenswrapper[3958]: E0319 11:51:26.177272 3958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" 
logger="UnhandledError" Mar 19 11:51:26.434905 master-0 kubenswrapper[3958]: I0319 11:51:26.434727 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:26.435513 master-0 kubenswrapper[3958]: I0319 11:51:26.435489 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:26.435570 master-0 kubenswrapper[3958]: I0319 11:51:26.435527 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:26.435570 master-0 kubenswrapper[3958]: I0319 11:51:26.435541 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:26.979637 master-0 kubenswrapper[3958]: I0319 11:51:26.979243 3958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 11:51:26.979637 master-0 kubenswrapper[3958]: W0319 11:51:26.979388 3958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 19 11:51:26.979933 master-0 kubenswrapper[3958]: E0319 11:51:26.979670 3958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 19 11:51:27.439586 master-0 kubenswrapper[3958]: I0319 11:51:27.439393 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"4eb7482c86a1b5f9e745f031e830bded6c37fd855abcbff4d6d73294bfadb247"} Mar 19 11:51:27.439586 master-0 kubenswrapper[3958]: I0319 11:51:27.439553 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:27.440637 master-0 kubenswrapper[3958]: I0319 11:51:27.440595 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:27.440681 master-0 kubenswrapper[3958]: I0319 11:51:27.440638 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:27.440681 master-0 kubenswrapper[3958]: I0319 11:51:27.440657 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:27.444329 master-0 kubenswrapper[3958]: I0319 11:51:27.444281 3958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 11:51:27.451348 master-0 kubenswrapper[3958]: I0319 11:51:27.451283 3958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 11:51:27.883410 master-0 kubenswrapper[3958]: W0319 11:51:27.883354 3958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group 
"node.k8s.io" at the cluster scope Mar 19 11:51:27.883709 master-0 kubenswrapper[3958]: E0319 11:51:27.883426 3958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 19 11:51:27.976791 master-0 kubenswrapper[3958]: I0319 11:51:27.976751 3958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 11:51:28.357748 master-0 kubenswrapper[3958]: I0319 11:51:28.357694 3958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:51:28.358133 master-0 kubenswrapper[3958]: I0319 11:51:28.358092 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:28.359245 master-0 kubenswrapper[3958]: I0319 11:51:28.359206 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:28.359245 master-0 kubenswrapper[3958]: I0319 11:51:28.359241 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:28.359245 master-0 kubenswrapper[3958]: I0319 11:51:28.359286 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:28.510115 master-0 kubenswrapper[3958]: I0319 11:51:28.510053 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:28.511859 master-0 kubenswrapper[3958]: I0319 11:51:28.510473 3958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 11:51:28.511859 master-0 kubenswrapper[3958]: I0319 11:51:28.511013 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:28.511859 master-0 kubenswrapper[3958]: I0319 11:51:28.511110 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:28.511859 master-0 kubenswrapper[3958]: I0319 11:51:28.511129 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:28.512449 master-0 kubenswrapper[3958]: I0319 11:51:28.512408 3958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:51:28.512548 master-0 kubenswrapper[3958]: I0319 11:51:28.512533 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:28.512713 master-0 kubenswrapper[3958]: I0319 11:51:28.512665 3958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:51:28.513456 master-0 kubenswrapper[3958]: I0319 11:51:28.513395 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:28.513531 master-0 kubenswrapper[3958]: I0319 11:51:28.513467 3958 
kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:28.513531 master-0 kubenswrapper[3958]: I0319 11:51:28.513494 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:28.515000 master-0 kubenswrapper[3958]: I0319 11:51:28.514754 3958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 11:51:28.851640 master-0 kubenswrapper[3958]: W0319 11:51:28.851589 3958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 19 11:51:28.851640 master-0 kubenswrapper[3958]: E0319 11:51:28.851637 3958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 19 11:51:28.976701 master-0 kubenswrapper[3958]: I0319 11:51:28.976654 3958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 11:51:29.512231 master-0 kubenswrapper[3958]: I0319 11:51:29.512185 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:29.512698 master-0 kubenswrapper[3958]: I0319 11:51:29.512558 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:29.513231 master-0 kubenswrapper[3958]: I0319 11:51:29.513209 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:29.513331 master-0 kubenswrapper[3958]: I0319 11:51:29.513318 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:29.513411 master-0 kubenswrapper[3958]: I0319 11:51:29.513400 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:29.516140 master-0 kubenswrapper[3958]: I0319 11:51:29.516080 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:29.516140 master-0 kubenswrapper[3958]: I0319 11:51:29.516122 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:29.516140 master-0 kubenswrapper[3958]: I0319 11:51:29.516131 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:29.872189 master-0 kubenswrapper[3958]: I0319 11:51:29.871879 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:29.873230 master-0 kubenswrapper[3958]: I0319 11:51:29.873133 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:29.873230 master-0 kubenswrapper[3958]: I0319 11:51:29.873197 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:29.873230 
master-0 kubenswrapper[3958]: I0319 11:51:29.873217 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:29.873521 master-0 kubenswrapper[3958]: I0319 11:51:29.873285 3958 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 19 11:51:29.879742 master-0 kubenswrapper[3958]: E0319 11:51:29.879656 3958 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Mar 19 11:51:29.879742 master-0 kubenswrapper[3958]: E0319 11:51:29.879660 3958 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 19 11:51:29.974384 master-0 kubenswrapper[3958]: I0319 11:51:29.974282 3958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 11:51:30.079581 master-0 kubenswrapper[3958]: E0319 11:51:30.079483 3958 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 19 11:51:30.147078 master-0 kubenswrapper[3958]: E0319 11:51:30.146864 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e3bd08313a03b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:09.968891963 +0000 UTC m=+0.642613175,LastTimestamp:2026-03-19 11:51:09.968891963 +0000 UTC m=+0.642613175,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.154304 master-0 kubenswrapper[3958]: E0319 11:51:30.154074 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e3bd0856d3a02 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:10.008318466 +0000 UTC m=+0.682039668,LastTimestamp:2026-03-19 11:51:10.008318466 +0000 UTC m=+0.682039668,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.162312 master-0 kubenswrapper[3958]: E0319 11:51:30.162110 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" 
in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e3bd0856da23c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:10.008345148 +0000 UTC m=+0.682066330,LastTimestamp:2026-03-19 11:51:10.008345148 +0000 UTC m=+0.682066330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.169442 master-0 kubenswrapper[3958]: E0319 11:51:30.169221 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e3bd0856ddcf2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:10.008360178 +0000 UTC m=+0.682081360,LastTimestamp:2026-03-19 11:51:10.008360178 +0000 UTC m=+0.682081360,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.178715 master-0 kubenswrapper[3958]: E0319 11:51:30.178552 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e3bd089c495b8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:10.08115244 +0000 UTC m=+0.754873622,LastTimestamp:2026-03-19 11:51:10.08115244 +0000 UTC m=+0.754873622,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.185391 master-0 kubenswrapper[3958]: E0319 11:51:30.185246 3958 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e3bd0856d3a02\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e3bd0856d3a02 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:10.008318466 +0000 UTC m=+0.682039668,LastTimestamp:2026-03-19 11:51:10.178263756 +0000 UTC m=+0.851984938,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.193035 master-0 
kubenswrapper[3958]: E0319 11:51:30.192858 3958 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e3bd0856da23c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e3bd0856da23c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:10.008345148 +0000 UTC m=+0.682066330,LastTimestamp:2026-03-19 11:51:10.178279266 +0000 UTC m=+0.852000438,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.200706 master-0 kubenswrapper[3958]: E0319 11:51:30.200510 3958 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e3bd0856ddcf2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e3bd0856ddcf2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:10.008360178 +0000 UTC m=+0.682081360,LastTimestamp:2026-03-19 11:51:10.178288207 +0000 UTC m=+0.852009389,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.207833 master-0 kubenswrapper[3958]: E0319 11:51:30.207620 3958 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e3bd0856d3a02\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e3bd0856d3a02 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:10.008318466 +0000 UTC m=+0.682039668,LastTimestamp:2026-03-19 11:51:10.222602005 +0000 UTC m=+0.896323227,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.215188 master-0 kubenswrapper[3958]: E0319 11:51:30.214927 3958 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e3bd0856da23c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e3bd0856da23c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: 
NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:10.008345148 +0000 UTC m=+0.682066330,LastTimestamp:2026-03-19 11:51:10.222644607 +0000 UTC m=+0.896365829,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.222490 master-0 kubenswrapper[3958]: E0319 11:51:30.222311 3958 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e3bd0856ddcf2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e3bd0856ddcf2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:10.008360178 +0000 UTC m=+0.682081360,LastTimestamp:2026-03-19 11:51:10.222662668 +0000 UTC m=+0.896383880,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.230043 master-0 kubenswrapper[3958]: E0319 11:51:30.229876 3958 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e3bd0856d3a02\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e3bd0856d3a02 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:10.008318466 +0000 UTC m=+0.682039668,LastTimestamp:2026-03-19 11:51:10.224003349 +0000 UTC m=+0.897724571,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.236757 master-0 kubenswrapper[3958]: E0319 11:51:30.236583 3958 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e3bd0856da23c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e3bd0856da23c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:10.008345148 +0000 UTC m=+0.682066330,LastTimestamp:2026-03-19 11:51:10.22403917 +0000 UTC m=+0.897760392,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.243755 master-0 kubenswrapper[3958]: E0319 11:51:30.243633 3958 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e3bd0856ddcf2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{master-0.189e3bd0856ddcf2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:10.008360178 +0000 UTC m=+0.682081360,LastTimestamp:2026-03-19 11:51:10.224062851 +0000 UTC m=+0.897784073,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.250709 master-0 kubenswrapper[3958]: E0319 11:51:30.250525 3958 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e3bd0856d3a02\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e3bd0856d3a02 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:10.008318466 +0000 UTC m=+0.682039668,LastTimestamp:2026-03-19 11:51:10.224148035 +0000 UTC m=+0.897869237,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.258847 master-0 kubenswrapper[3958]: E0319 11:51:30.258684 3958 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e3bd0856da23c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e3bd0856da23c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:10.008345148 +0000 UTC m=+0.682066330,LastTimestamp:2026-03-19 11:51:10.224162776 +0000 UTC m=+0.897883978,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.269757 master-0 kubenswrapper[3958]: E0319 11:51:30.269624 3958 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e3bd0856ddcf2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e3bd0856ddcf2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:10.008360178 +0000 UTC m=+0.682081360,LastTimestamp:2026-03-19 11:51:10.224176597 +0000 UTC m=+0.897897789,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" 
Mar 19 11:51:30.276446 master-0 kubenswrapper[3958]: E0319 11:51:30.276294 3958 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e3bd0856d3a02\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e3bd0856d3a02 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:10.008318466 +0000 UTC m=+0.682039668,LastTimestamp:2026-03-19 11:51:10.225199964 +0000 UTC m=+0.898921166,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.283954 master-0 kubenswrapper[3958]: E0319 11:51:30.283781 3958 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e3bd0856da23c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e3bd0856da23c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:10.008345148 +0000 UTC m=+0.682066330,LastTimestamp:2026-03-19 11:51:10.225220865 +0000 UTC m=+0.898942057,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.289481 master-0 kubenswrapper[3958]: E0319 11:51:30.289326 3958 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e3bd0856ddcf2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e3bd0856ddcf2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:10.008360178 +0000 UTC m=+0.682081360,LastTimestamp:2026-03-19 11:51:10.225233275 +0000 UTC m=+0.898954467,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.293516 master-0 kubenswrapper[3958]: E0319 11:51:30.293391 3958 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e3bd0856d3a02\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e3bd0856d3a02 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: 
NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:10.008318466 +0000 UTC m=+0.682039668,LastTimestamp:2026-03-19 11:51:10.227494149 +0000 UTC m=+0.901215341,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.298762 master-0 kubenswrapper[3958]: E0319 11:51:30.298651 3958 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e3bd0856da23c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e3bd0856da23c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:10.008345148 +0000 UTC m=+0.682066330,LastTimestamp:2026-03-19 11:51:10.227544691 +0000 UTC m=+0.901265893,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.303491 master-0 kubenswrapper[3958]: E0319 11:51:30.303360 3958 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e3bd0856ddcf2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e3bd0856ddcf2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:10.008360178 +0000 UTC m=+0.682081360,LastTimestamp:2026-03-19 11:51:10.227557722 +0000 UTC m=+0.901278924,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.308398 master-0 kubenswrapper[3958]: E0319 11:51:30.308285 3958 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e3bd0856d3a02\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e3bd0856d3a02 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:10.008318466 +0000 UTC m=+0.682039668,LastTimestamp:2026-03-19 11:51:10.229074542 +0000 UTC m=+0.902795734,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.312867 master-0 kubenswrapper[3958]: E0319 11:51:30.312727 3958 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e3bd0856da23c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{master-0.189e3bd0856da23c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:10.008345148 +0000 UTC m=+0.682066330,LastTimestamp:2026-03-19 11:51:10.229101363 +0000 UTC m=+0.902822565,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.319921 master-0 kubenswrapper[3958]: E0319 11:51:30.319745 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e3bd0cb6c0e32 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:11.182646834 +0000 UTC m=+1.856368016,LastTimestamp:2026-03-19 11:51:11.182646834 +0000 UTC m=+1.856368016,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.324754 master-0 kubenswrapper[3958]: E0319 11:51:30.324630 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e3bd0d99290c1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:11.420051649 +0000 UTC m=+2.093772871,LastTimestamp:2026-03-19 11:51:11.420051649 +0000 UTC m=+2.093772871,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.329656 master-0 kubenswrapper[3958]: E0319 11:51:30.329521 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e3bd0e4eb5fab openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:11.610421163 +0000 UTC m=+2.284142345,LastTimestamp:2026-03-19 11:51:11.610421163 +0000 UTC m=+2.284142345,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.334192 master-0 kubenswrapper[3958]: E0319 11:51:30.334012 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189e3bd0edb11bd2 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:11.75759765 +0000 UTC m=+2.431318832,LastTimestamp:2026-03-19 11:51:11.75759765 +0000 UTC m=+2.431318832,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.339288 master-0 kubenswrapper[3958]: E0319 11:51:30.339131 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189e3bd128c8785e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:12.748984414 +0000 UTC m=+3.422705636,LastTimestamp:2026-03-19 11:51:12.748984414 +0000 UTC m=+3.422705636,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.345289 master-0 kubenswrapper[3958]: E0319 11:51:30.345136 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e3bd15a0afbc8 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" in 2.155s (2.155s including waiting). Image size: 465090934 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:13.575427016 +0000 UTC m=+4.249148198,LastTimestamp:2026-03-19 11:51:13.575427016 +0000 UTC m=+4.249148198,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.352596 master-0 kubenswrapper[3958]: E0319 11:51:30.352446 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e3bd1673c5f83 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:13.796767619 +0000 UTC m=+4.470488791,LastTimestamp:2026-03-19 11:51:13.796767619 +0000 UTC m=+4.470488791,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.357586 master-0 kubenswrapper[3958]: E0319 11:51:30.357470 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e3bd16833d2ad openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:13.812984493 +0000 UTC m=+4.486705675,LastTimestamp:2026-03-19 11:51:13.812984493 +0000 UTC m=+4.486705675,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.362676 master-0 kubenswrapper[3958]: E0319 11:51:30.362499 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e3bd17c1dbc9f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:14.147081375 +0000 UTC m=+4.820802557,LastTimestamp:2026-03-19 11:51:14.147081375 +0000 UTC m=+4.820802557,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.367987 master-0 kubenswrapper[3958]: E0319 11:51:30.367825 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e3bd18763f638 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:14.336233016 +0000 UTC m=+5.009954198,LastTimestamp:2026-03-19 11:51:14.336233016 +0000 UTC m=+5.009954198,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.373352 master-0 kubenswrapper[3958]: E0319 11:51:30.373195 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e3bd187f93988 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:14.346015112 +0000 UTC m=+5.019736294,LastTimestamp:2026-03-19 11:51:14.346015112 +0000 UTC m=+5.019736294,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.379605 master-0 kubenswrapper[3958]: E0319 11:51:30.379452 3958 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189e3bd17c1dbc9f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e3bd17c1dbc9f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:14.147081375 +0000 UTC m=+4.820802557,LastTimestamp:2026-03-19 11:51:19.891483107 +0000 UTC m=+10.565204289,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.384463 master-0 kubenswrapper[3958]: E0319 11:51:30.384326 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189e3bd2d46b08a1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\" in 7.174s (7.174s including waiting). Image size: 529326739 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:19.923509409 +0000 UTC m=+10.597230591,LastTimestamp:2026-03-19 11:51:19.923509409 +0000 UTC m=+10.597230591,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.389386 master-0 kubenswrapper[3958]: E0319 11:51:30.389268 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e3bd2dbcb9cd9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" in 8.436s (8.436s including waiting). 
Image size: 943841779 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:20.047279321 +0000 UTC m=+10.721000503,LastTimestamp:2026-03-19 11:51:20.047279321 +0000 UTC m=+10.721000503,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.393557 master-0 kubenswrapper[3958]: E0319 11:51:30.393455 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e3bd2dbf2e528 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" in 8.867s (8.867s including waiting). Image size: 943841779 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:20.049853736 +0000 UTC m=+10.723574918,LastTimestamp:2026-03-19 11:51:20.049853736 +0000 UTC m=+10.723574918,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.398756 master-0 kubenswrapper[3958]: E0319 11:51:30.398641 3958 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189e3bd18763f638\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e3bd18763f638 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:14.336233016 +0000 UTC m=+5.009954198,LastTimestamp:2026-03-19 11:51:20.11998264 +0000 UTC m=+10.793703822,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.403668 master-0 kubenswrapper[3958]: E0319 11:51:30.403452 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189e3bd2e0b9a62d kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Successfully pulled image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" in 8.372s (8.372s including waiting). Image size: 943841779 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:20.129988141 +0000 UTC m=+10.803709323,LastTimestamp:2026-03-19 11:51:20.129988141 +0000 UTC m=+10.803709323,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.409026 master-0 kubenswrapper[3958]: E0319 11:51:30.408881 3958 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189e3bd187f93988\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e3bd187f93988 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:14.346015112 +0000 UTC m=+5.019736294,LastTimestamp:2026-03-19 11:51:20.139373483 +0000 UTC m=+10.813094685,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.413743 master-0 kubenswrapper[3958]: E0319 11:51:30.413632 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189e3bd2e24f75c3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:20.156583363 +0000 UTC m=+10.830304535,LastTimestamp:2026-03-19 11:51:20.156583363 +0000 UTC m=+10.830304535,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.419145 master-0 kubenswrapper[3958]: E0319 11:51:30.418977 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189e3bd2e3302d2c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:20.17131038 +0000 UTC m=+10.845031562,LastTimestamp:2026-03-19 11:51:20.17131038 +0000 UTC 
m=+10.845031562,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.424421 master-0 kubenswrapper[3958]: E0319 11:51:30.424227 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189e3bd2e39e3682 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:20.17852173 +0000 UTC m=+10.852242902,LastTimestamp:2026-03-19 11:51:20.17852173 +0000 UTC m=+10.852242902,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.429236 master-0 kubenswrapper[3958]: E0319 11:51:30.429092 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e3bd2ea0f2a77 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:20.286587511 +0000 UTC m=+10.960308693,LastTimestamp:2026-03-19 11:51:20.286587511 +0000 UTC m=+10.960308693,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.434197 master-0 kubenswrapper[3958]: E0319 11:51:30.433775 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189e3bd2ea957a75 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:20.295389813 +0000 UTC m=+10.969110995,LastTimestamp:2026-03-19 11:51:20.295389813 +0000 UTC m=+10.969110995,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.441128 master-0 kubenswrapper[3958]: E0319 11:51:30.440915 3958 event.go:359] "Server rejected event (will not retry!)" err="events is 
forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e3bd2eaa1f72a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:20.29620817 +0000 UTC m=+10.969929352,LastTimestamp:2026-03-19 11:51:20.29620817 +0000 UTC m=+10.969929352,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.448214 master-0 kubenswrapper[3958]: E0319 11:51:30.448030 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e3bd2eac51a58 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:20.298510936 +0000 UTC m=+10.972232118,LastTimestamp:2026-03-19 11:51:20.298510936 +0000 UTC m=+10.972232118,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.456899 master-0 kubenswrapper[3958]: E0319 11:51:30.456683 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e3bd2ead080a5 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:20.299258021 +0000 UTC m=+10.972979203,LastTimestamp:2026-03-19 11:51:20.299258021 +0000 UTC m=+10.972979203,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.461691 master-0 kubenswrapper[3958]: E0319 11:51:30.461604 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e3bd2eb99f5c2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:20.312460738 +0000 UTC m=+10.986181930,LastTimestamp:2026-03-19 11:51:20.312460738 +0000 UTC m=+10.986181930,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.466428 master-0 kubenswrapper[3958]: E0319 11:51:30.466319 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189e3bd2ebfd8c33 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:20.318987315 +0000 UTC m=+10.992708497,LastTimestamp:2026-03-19 11:51:20.318987315 +0000 UTC m=+10.992708497,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.473109 master-0 kubenswrapper[3958]: E0319 11:51:30.472956 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189e3bd2f3ad4dc2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:20.447946178 +0000 UTC m=+11.121667360,LastTimestamp:2026-03-19 11:51:20.447946178 +0000 UTC m=+11.121667360,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.477654 master-0 kubenswrapper[3958]: E0319 11:51:30.477448 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189e3bd2f47fa8c3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:20.461732035 +0000 UTC m=+11.135453207,LastTimestamp:2026-03-19 11:51:20.461732035 +0000 UTC m=+11.135453207,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.482369 master-0 kubenswrapper[3958]: E0319 11:51:30.482201 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e3bd31ec3b497 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:21.170834583 +0000 UTC m=+11.844555785,LastTimestamp:2026-03-19 11:51:21.170834583 +0000 UTC m=+11.844555785,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.487051 master-0 kubenswrapper[3958]: E0319 11:51:30.486904 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e3bd31f7abb45 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:21.182829381 +0000 UTC m=+11.856550563,LastTimestamp:2026-03-19 11:51:21.182829381 +0000 UTC m=+11.856550563,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.495690 master-0 kubenswrapper[3958]: E0319 11:51:30.495578 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e3bd3295a2a8c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:21.34846734 +0000 UTC m=+12.022188532,LastTimestamp:2026-03-19 11:51:21.34846734 +0000 UTC m=+12.022188532,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.500200 master-0 kubenswrapper[3958]: E0319 11:51:30.500003 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e3bd32a2df5ec openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:21.3623475 +0000 UTC m=+12.036068682,LastTimestamp:2026-03-19 11:51:21.3623475 +0000 UTC m=+12.036068682,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.504632 master-0 kubenswrapper[3958]: E0319 11:51:30.504137 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e3bd32a447b58 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:21.363823448 +0000 UTC m=+12.037544630,LastTimestamp:2026-03-19 11:51:21.363823448 +0000 UTC m=+12.037544630,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.509551 master-0 kubenswrapper[3958]: E0319 11:51:30.509407 3958 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189e3bd31ec3b497\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e3bd31ec3b497 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:21.170834583 +0000 UTC m=+11.844555785,LastTimestamp:2026-03-19 11:51:22.195179888 +0000 UTC m=+12.868901070,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.514415 master-0 kubenswrapper[3958]: E0319 11:51:30.514226 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e3bd39fea4e27 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\" in 3.038s (3.038s including waiting). Image size: 505246690 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:23.337625127 +0000 UTC m=+14.011346309,LastTimestamp:2026-03-19 11:51:23.337625127 +0000 UTC m=+14.011346309,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.514977 master-0 kubenswrapper[3958]: I0319 11:51:30.514668 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:30.515867 master-0 kubenswrapper[3958]: I0319 11:51:30.515822 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:30.515931 master-0 kubenswrapper[3958]: I0319 11:51:30.515873 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:30.515931 master-0 kubenswrapper[3958]: I0319 11:51:30.515886 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:30.519299 master-0 kubenswrapper[3958]: E0319 11:51:30.519104 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e3bd3ace6d02d kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:23.555500077 +0000 UTC m=+14.229221299,LastTimestamp:2026-03-19 11:51:23.555500077 +0000 UTC m=+14.229221299,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.523893 master-0 kubenswrapper[3958]: E0319 11:51:30.523722 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e3bd3ad9b9452 kube-system 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:23.56734677 +0000 UTC m=+14.241067992,LastTimestamp:2026-03-19 11:51:23.56734677 +0000 UTC m=+14.241067992,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.528398 master-0 kubenswrapper[3958]: E0319 11:51:30.528279 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e3bd3d35ef931 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:24.200909105 +0000 UTC m=+14.874630287,LastTimestamp:2026-03-19 11:51:24.200909105 +0000 UTC m=+14.874630287,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.533029 master-0 kubenswrapper[3958]: E0319 11:51:30.532828 3958 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.189e3bd2ea0f2a77\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e3bd2ea0f2a77 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:20.286587511 +0000 UTC m=+10.960308693,LastTimestamp:2026-03-19 11:51:24.355606472 +0000 UTC m=+15.029327654,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.537313 master-0 kubenswrapper[3958]: E0319 11:51:30.537178 3958 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.189e3bd2eac51a58\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e3bd2eac51a58 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:20.298510936 +0000 UTC m=+10.972232118,LastTimestamp:2026-03-19 11:51:24.363357288 +0000 UTC m=+15.037078470,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.542233 master-0 kubenswrapper[3958]: E0319 11:51:30.542029 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e3bd45fe20457 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" in 5.194s (5.194s including waiting). Image size: 514984269 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:26.558307415 +0000 UTC m=+17.232028597,LastTimestamp:2026-03-19 11:51:26.558307415 +0000 UTC m=+17.232028597,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.548893 master-0 kubenswrapper[3958]: E0319 11:51:30.548709 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e3bd47db98a0a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:27.058971146 +0000 UTC m=+17.732692328,LastTimestamp:2026-03-19 11:51:27.058971146 +0000 UTC m=+17.732692328,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.554241 master-0 kubenswrapper[3958]: E0319 11:51:30.554124 3958 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e3bd47e42c26f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:27.067964015 +0000 UTC m=+17.741685207,LastTimestamp:2026-03-19 11:51:27.067964015 +0000 UTC m=+17.741685207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:30.977505 master-0 kubenswrapper[3958]: I0319 11:51:30.977425 3958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 11:51:31.390985 master-0 kubenswrapper[3958]: I0319 11:51:31.390783 3958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:51:31.391199 master-0 kubenswrapper[3958]: I0319 11:51:31.391031 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:31.392397 master-0 kubenswrapper[3958]: I0319 11:51:31.392318 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:31.392397 master-0 kubenswrapper[3958]: I0319 11:51:31.392390 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:31.392639 master-0 kubenswrapper[3958]: I0319 11:51:31.392415 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:31.396830 master-0 kubenswrapper[3958]: I0319 11:51:31.396718 3958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:51:31.516835 master-0 kubenswrapper[3958]: I0319 11:51:31.516753 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:31.516835 master-0 kubenswrapper[3958]: I0319 11:51:31.516782 3958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:51:31.517706 master-0 kubenswrapper[3958]: I0319 11:51:31.517658 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:31.517785 master-0 kubenswrapper[3958]: I0319 11:51:31.517714 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:31.517785 master-0 kubenswrapper[3958]: I0319 11:51:31.517731 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:31.976719 master-0 kubenswrapper[3958]: I0319 11:51:31.976587 3958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 11:51:32.518668 master-0 kubenswrapper[3958]: I0319 11:51:32.518616 3958 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Mar 19 11:51:32.519713 master-0 kubenswrapper[3958]: I0319 11:51:32.519671 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:32.519758 master-0 kubenswrapper[3958]: I0319 11:51:32.519721 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:32.519758 master-0 kubenswrapper[3958]: I0319 11:51:32.519736 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:32.976407 master-0 kubenswrapper[3958]: I0319 11:51:32.976362 3958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 11:51:33.977691 master-0 kubenswrapper[3958]: I0319 11:51:33.977623 3958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 11:51:34.180067 master-0 kubenswrapper[3958]: I0319 11:51:34.180003 3958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:51:34.180067 master-0 kubenswrapper[3958]: I0319 11:51:34.180033 3958 csr.go:261] certificate signing request csr-57hck is approved, waiting to be issued Mar 19 11:51:34.180354 master-0 kubenswrapper[3958]: I0319 11:51:34.180141 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:34.181234 master-0 kubenswrapper[3958]: I0319 11:51:34.181072 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:34.181234 master-0 kubenswrapper[3958]: I0319 11:51:34.181126 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:34.181234 master-0 kubenswrapper[3958]: I0319 11:51:34.181153 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:34.184652 master-0 kubenswrapper[3958]: I0319 11:51:34.184616 3958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:51:34.522109 master-0 kubenswrapper[3958]: I0319 11:51:34.522040 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:34.522657 master-0 kubenswrapper[3958]: I0319 11:51:34.522618 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:34.522657 master-0 kubenswrapper[3958]: I0319 11:51:34.522656 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:34.522740 master-0 kubenswrapper[3958]: I0319 11:51:34.522671 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:34.980147 master-0 kubenswrapper[3958]: I0319 11:51:34.980055 3958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User 
"system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 11:51:35.122249 master-0 kubenswrapper[3958]: I0319 11:51:35.122133 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:35.123679 master-0 kubenswrapper[3958]: I0319 11:51:35.123606 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:35.123765 master-0 kubenswrapper[3958]: I0319 11:51:35.123699 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:35.123765 master-0 kubenswrapper[3958]: I0319 11:51:35.123723 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:35.124540 master-0 kubenswrapper[3958]: I0319 11:51:35.124500 3958 scope.go:117] "RemoveContainer" containerID="d8d6463706a002922b6bf91885e1b00e6557f01fc64e8ab28d2403acb657b68f" Mar 19 11:51:35.138061 master-0 kubenswrapper[3958]: E0319 11:51:35.137865 3958 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189e3bd17c1dbc9f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e3bd17c1dbc9f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:14.147081375 +0000 UTC m=+4.820802557,LastTimestamp:2026-03-19 11:51:35.12886478 +0000 UTC m=+25.802586012,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:35.307449 master-0 kubenswrapper[3958]: E0319 11:51:35.307285 3958 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189e3bd18763f638\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e3bd18763f638 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:14.336233016 +0000 UTC m=+5.009954198,LastTimestamp:2026-03-19 11:51:35.303181126 +0000 UTC m=+25.976902308,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:35.319312 master-0 kubenswrapper[3958]: E0319 11:51:35.319166 3958 event.go:359] 
"Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189e3bd187f93988\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e3bd187f93988 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:14.346015112 +0000 UTC m=+5.019736294,LastTimestamp:2026-03-19 11:51:35.314588524 +0000 UTC m=+25.988309716,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:35.526309 master-0 kubenswrapper[3958]: I0319 11:51:35.526171 3958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log" Mar 19 11:51:35.526937 master-0 kubenswrapper[3958]: I0319 11:51:35.526910 3958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/1.log" Mar 19 11:51:35.527405 master-0 kubenswrapper[3958]: I0319 11:51:35.527372 3958 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="6b554ade444a2218312faf004411e7ca5ff136f234fd5270edc3b29df56f6e17" exitCode=1 Mar 19 11:51:35.527475 master-0 kubenswrapper[3958]: I0319 11:51:35.527419 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"6b554ade444a2218312faf004411e7ca5ff136f234fd5270edc3b29df56f6e17"} Mar 19 11:51:35.527475 master-0 kubenswrapper[3958]: I0319 11:51:35.527468 3958 scope.go:117] "RemoveContainer" containerID="d8d6463706a002922b6bf91885e1b00e6557f01fc64e8ab28d2403acb657b68f" Mar 19 11:51:35.527612 master-0 kubenswrapper[3958]: I0319 11:51:35.527589 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:35.528406 master-0 kubenswrapper[3958]: I0319 11:51:35.528378 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:35.528494 master-0 kubenswrapper[3958]: I0319 11:51:35.528419 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:35.528494 master-0 kubenswrapper[3958]: I0319 11:51:35.528434 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:35.529130 master-0 kubenswrapper[3958]: I0319 11:51:35.528925 3958 scope.go:117] "RemoveContainer" containerID="6b554ade444a2218312faf004411e7ca5ff136f234fd5270edc3b29df56f6e17" Mar 19 11:51:35.529193 master-0 kubenswrapper[3958]: E0319 11:51:35.529122 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s 
restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19" Mar 19 11:51:35.534131 master-0 kubenswrapper[3958]: E0319 11:51:35.533997 3958 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189e3bd31ec3b497\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e3bd31ec3b497 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:51:21.170834583 +0000 UTC m=+11.844555785,LastTimestamp:2026-03-19 11:51:35.529084243 +0000 UTC m=+26.202805445,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:51:35.978145 master-0 kubenswrapper[3958]: I0319 11:51:35.978061 3958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 11:51:36.531233 master-0 kubenswrapper[3958]: I0319 11:51:36.531180 3958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log" Mar 19 11:51:36.881035 master-0 kubenswrapper[3958]: I0319 11:51:36.880856 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:36.882080 master-0 kubenswrapper[3958]: I0319 11:51:36.882047 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:36.882080 master-0 kubenswrapper[3958]: I0319 11:51:36.882082 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:36.882206 master-0 kubenswrapper[3958]: I0319 11:51:36.882092 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:36.882206 master-0 kubenswrapper[3958]: I0319 11:51:36.882135 3958 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 19 11:51:36.886484 master-0 kubenswrapper[3958]: E0319 11:51:36.886438 3958 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Mar 19 11:51:36.886708 master-0 kubenswrapper[3958]: E0319 11:51:36.886664 3958 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get 
resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 19 11:51:36.974524 master-0 kubenswrapper[3958]: I0319 11:51:36.974409 3958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 11:51:37.979630 master-0 kubenswrapper[3958]: I0319 11:51:37.979541 3958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 11:51:38.978023 master-0 kubenswrapper[3958]: I0319 11:51:38.977943 3958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 11:51:39.977426 master-0 kubenswrapper[3958]: I0319 11:51:39.977343 3958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 11:51:40.031820 master-0 kubenswrapper[3958]: W0319 11:51:40.031729 3958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 19 11:51:40.032058 master-0 kubenswrapper[3958]: E0319 11:51:40.031830 3958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 19 11:51:40.080878 master-0 kubenswrapper[3958]: E0319 11:51:40.080716 3958 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 19 11:51:40.978649 master-0 kubenswrapper[3958]: I0319 11:51:40.978612 3958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 11:51:41.228750 master-0 kubenswrapper[3958]: I0319 11:51:41.228636 3958 csr.go:257] certificate signing request csr-57hck is issued Mar 19 11:51:41.866300 master-0 kubenswrapper[3958]: I0319 11:51:41.866258 3958 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 19 11:51:41.982089 master-0 kubenswrapper[3958]: I0319 11:51:41.982045 3958 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 19 11:51:42.000459 master-0 kubenswrapper[3958]: I0319 11:51:42.000401 3958 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 19 11:51:42.058728 master-0 kubenswrapper[3958]: I0319 11:51:42.058693 3958 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 19 11:51:42.230422 master-0 kubenswrapper[3958]: I0319 11:51:42.230333 3958 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate 
expiration is 2026-03-20 11:43:21 +0000 UTC, rotation deadline is 2026-03-20 04:49:47.35463877 +0000 UTC Mar 19 11:51:42.230887 master-0 kubenswrapper[3958]: I0319 11:51:42.230871 3958 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 16h58m5.123773821s for next certificate rotation Mar 19 11:51:42.350070 master-0 kubenswrapper[3958]: I0319 11:51:42.350012 3958 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 19 11:51:42.350531 master-0 kubenswrapper[3958]: E0319 11:51:42.350515 3958 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Mar 19 11:51:42.375053 master-0 kubenswrapper[3958]: I0319 11:51:42.375006 3958 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 19 11:51:42.394948 master-0 kubenswrapper[3958]: I0319 11:51:42.394903 3958 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 19 11:51:42.450035 master-0 kubenswrapper[3958]: I0319 11:51:42.450001 3958 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 19 11:51:42.719580 master-0 kubenswrapper[3958]: I0319 11:51:42.719523 3958 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 19 11:51:42.719580 master-0 kubenswrapper[3958]: E0319 11:51:42.719558 3958 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Mar 19 11:51:42.814885 master-0 kubenswrapper[3958]: I0319 11:51:42.814834 3958 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 19 11:51:42.829933 master-0 kubenswrapper[3958]: I0319 11:51:42.829898 3958 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 19 11:51:42.889998 master-0 kubenswrapper[3958]: I0319 11:51:42.889963 3958 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 19 11:51:43.150438 master-0 kubenswrapper[3958]: I0319 11:51:43.150309 3958 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 19 11:51:43.150438 master-0 kubenswrapper[3958]: E0319 11:51:43.150368 3958 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Mar 19 11:51:43.745533 master-0 kubenswrapper[3958]: I0319 11:51:43.745489 3958 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 19 11:51:43.761308 master-0 kubenswrapper[3958]: I0319 11:51:43.761246 3958 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 19 11:51:43.819847 master-0 kubenswrapper[3958]: I0319 11:51:43.819734 3958 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 19 11:51:43.887601 master-0 kubenswrapper[3958]: I0319 11:51:43.887486 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:43.889542 master-0 kubenswrapper[3958]: I0319 11:51:43.889452 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:43.889542 master-0 kubenswrapper[3958]: I0319 11:51:43.889524 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:43.889542 master-0 kubenswrapper[3958]: I0319 
11:51:43.889541 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:43.889938 master-0 kubenswrapper[3958]: I0319 11:51:43.889603 3958 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 19 11:51:43.894403 master-0 kubenswrapper[3958]: E0319 11:51:43.894351 3958 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"master-0\" not found" node="master-0" Mar 19 11:51:43.903243 master-0 kubenswrapper[3958]: I0319 11:51:43.903188 3958 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Mar 19 11:51:43.903243 master-0 kubenswrapper[3958]: E0319 11:51:43.903240 3958 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found" Mar 19 11:51:43.913928 master-0 kubenswrapper[3958]: E0319 11:51:43.913865 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:44.014179 master-0 kubenswrapper[3958]: E0319 11:51:44.014034 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:44.115184 master-0 kubenswrapper[3958]: E0319 11:51:44.115124 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:44.216338 master-0 kubenswrapper[3958]: E0319 11:51:44.216024 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:44.317069 master-0 kubenswrapper[3958]: E0319 11:51:44.316727 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:44.417930 master-0 kubenswrapper[3958]: E0319 11:51:44.417878 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:44.518493 master-0 kubenswrapper[3958]: E0319 11:51:44.518440 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:44.519614 master-0 kubenswrapper[3958]: I0319 11:51:44.519589 3958 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Mar 19 11:51:44.596330 master-0 kubenswrapper[3958]: I0319 11:51:44.596181 3958 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 19 11:51:44.619353 master-0 kubenswrapper[3958]: E0319 11:51:44.619310 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:44.720529 master-0 kubenswrapper[3958]: E0319 11:51:44.720390 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:44.821198 master-0 kubenswrapper[3958]: E0319 11:51:44.821101 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:44.921480 master-0 kubenswrapper[3958]: E0319 11:51:44.921362 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:45.022197 master-0 kubenswrapper[3958]: E0319 11:51:45.022124 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:45.123358 master-0 kubenswrapper[3958]: E0319 11:51:45.123257 
3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:45.224327 master-0 kubenswrapper[3958]: E0319 11:51:45.224204 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:45.325414 master-0 kubenswrapper[3958]: E0319 11:51:45.325343 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:45.426524 master-0 kubenswrapper[3958]: E0319 11:51:45.426430 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:45.527766 master-0 kubenswrapper[3958]: E0319 11:51:45.527609 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:45.627921 master-0 kubenswrapper[3958]: E0319 11:51:45.627781 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:45.728687 master-0 kubenswrapper[3958]: E0319 11:51:45.728616 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:45.829541 master-0 kubenswrapper[3958]: E0319 11:51:45.829435 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:45.906419 master-0 kubenswrapper[3958]: I0319 11:51:45.906368 3958 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 19 11:51:45.930528 master-0 kubenswrapper[3958]: E0319 11:51:45.930433 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:46.031631 master-0 kubenswrapper[3958]: E0319 11:51:46.031557 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:46.132702 master-0 kubenswrapper[3958]: E0319 11:51:46.132562 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:46.233181 master-0 kubenswrapper[3958]: E0319 11:51:46.233075 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:46.333900 master-0 kubenswrapper[3958]: E0319 11:51:46.333759 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:46.434926 master-0 kubenswrapper[3958]: E0319 11:51:46.434723 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:46.535456 master-0 kubenswrapper[3958]: E0319 11:51:46.535386 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:46.636401 master-0 kubenswrapper[3958]: E0319 11:51:46.636321 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:46.737285 master-0 kubenswrapper[3958]: E0319 11:51:46.737210 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:46.837671 master-0 kubenswrapper[3958]: E0319 11:51:46.837576 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:46.938717 master-0 kubenswrapper[3958]: E0319 11:51:46.938622 3958 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:47.039971 master-0 kubenswrapper[3958]: E0319 11:51:47.039787 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:47.140580 master-0 kubenswrapper[3958]: E0319 11:51:47.140497 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:47.241592 master-0 kubenswrapper[3958]: E0319 11:51:47.241492 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:47.342549 master-0 kubenswrapper[3958]: E0319 11:51:47.342396 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:47.443575 master-0 kubenswrapper[3958]: E0319 11:51:47.443500 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:47.544297 master-0 kubenswrapper[3958]: E0319 11:51:47.544138 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:47.644877 master-0 kubenswrapper[3958]: E0319 11:51:47.644656 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:47.745551 master-0 kubenswrapper[3958]: E0319 11:51:47.745472 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:47.846319 master-0 kubenswrapper[3958]: E0319 11:51:47.846259 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:47.947100 master-0 kubenswrapper[3958]: E0319 11:51:47.946938 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:48.048094 master-0 kubenswrapper[3958]: E0319 11:51:48.047995 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:48.054498 master-0 kubenswrapper[3958]: I0319 11:51:48.054443 3958 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 19 11:51:48.122234 master-0 kubenswrapper[3958]: I0319 11:51:48.122108 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:51:48.123680 master-0 kubenswrapper[3958]: I0319 11:51:48.123480 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:51:48.123930 master-0 kubenswrapper[3958]: I0319 11:51:48.123728 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:51:48.123930 master-0 kubenswrapper[3958]: I0319 11:51:48.123752 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:51:48.124526 master-0 kubenswrapper[3958]: I0319 11:51:48.124471 3958 scope.go:117] "RemoveContainer" containerID="6b554ade444a2218312faf004411e7ca5ff136f234fd5270edc3b29df56f6e17" Mar 19 11:51:48.124888 master-0 kubenswrapper[3958]: E0319 11:51:48.124770 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed 
container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19" Mar 19 11:51:48.149073 master-0 kubenswrapper[3958]: E0319 11:51:48.148969 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:48.250042 master-0 kubenswrapper[3958]: E0319 11:51:48.249970 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:48.350956 master-0 kubenswrapper[3958]: E0319 11:51:48.350896 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:48.451733 master-0 kubenswrapper[3958]: E0319 11:51:48.451651 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:48.552544 master-0 kubenswrapper[3958]: E0319 11:51:48.552368 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:48.653333 master-0 kubenswrapper[3958]: E0319 11:51:48.653201 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:48.754398 master-0 kubenswrapper[3958]: E0319 11:51:48.754243 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:48.855302 master-0 kubenswrapper[3958]: E0319 11:51:48.855058 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:48.955570 master-0 kubenswrapper[3958]: E0319 11:51:48.955432 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:49.056535 master-0 kubenswrapper[3958]: E0319 11:51:49.056453 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:49.157429 master-0 kubenswrapper[3958]: E0319 11:51:49.157271 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:49.258293 master-0 kubenswrapper[3958]: E0319 11:51:49.258170 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:49.359067 master-0 kubenswrapper[3958]: E0319 11:51:49.358951 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:49.460157 master-0 kubenswrapper[3958]: E0319 11:51:49.459982 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:49.475614 master-0 kubenswrapper[3958]: I0319 11:51:49.475556 3958 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 19 11:51:49.560824 master-0 kubenswrapper[3958]: E0319 11:51:49.560720 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:49.661505 master-0 kubenswrapper[3958]: E0319 11:51:49.661451 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:49.762423 master-0 kubenswrapper[3958]: E0319 11:51:49.762286 3958 kubelet_node_status.go:503] "Error getting the current 
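The "back-off 20s" in the pod_workers record is the kubelet's container restart backoff: the delay roughly doubles after each failed restart up to a cap, so 20 s here suggests the container's second failed restart. A small sketch of that doubling; the 10 s base and 5 m cap match stock kubelet defaults as far as we know, but treat the exact constants as an assumption.

    // backoff_sketch.go - illustrates the doubling restart backoff behind
    // "CrashLoopBackOff: back-off 20s restarting failed container=...".
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	delay := 10 * time.Second   // assumed kubelet base backoff
    	maxDelay := 5 * time.Minute // assumed cap

    	for attempt := 1; attempt <= 7; attempt++ {
    		fmt.Printf("restart attempt %d: back-off %s\n", attempt, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }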
node from lister" err="node \"master-0\" not found" Mar 19 11:51:49.863445 master-0 kubenswrapper[3958]: E0319 11:51:49.863315 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:49.965084 master-0 kubenswrapper[3958]: E0319 11:51:49.964254 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:50.065548 master-0 kubenswrapper[3958]: E0319 11:51:50.065252 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:50.081967 master-0 kubenswrapper[3958]: E0319 11:51:50.081844 3958 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 19 11:51:50.166181 master-0 kubenswrapper[3958]: E0319 11:51:50.166057 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:50.266371 master-0 kubenswrapper[3958]: E0319 11:51:50.266218 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:50.367636 master-0 kubenswrapper[3958]: E0319 11:51:50.367410 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:50.467931 master-0 kubenswrapper[3958]: E0319 11:51:50.467756 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:50.568074 master-0 kubenswrapper[3958]: E0319 11:51:50.567966 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:50.668953 master-0 kubenswrapper[3958]: E0319 11:51:50.668769 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:50.770034 master-0 kubenswrapper[3958]: E0319 11:51:50.769904 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:50.870926 master-0 kubenswrapper[3958]: E0319 11:51:50.870778 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:50.971788 master-0 kubenswrapper[3958]: E0319 11:51:50.971697 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:51.072100 master-0 kubenswrapper[3958]: E0319 11:51:51.071914 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:51.172281 master-0 kubenswrapper[3958]: E0319 11:51:51.172156 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:51.272645 master-0 kubenswrapper[3958]: E0319 11:51:51.272410 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:51.373352 master-0 kubenswrapper[3958]: E0319 11:51:51.373240 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:51.474301 master-0 kubenswrapper[3958]: E0319 11:51:51.474210 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:51.575393 master-0 kubenswrapper[3958]: E0319 11:51:51.575231 3958 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:51.675446 master-0 kubenswrapper[3958]: E0319 11:51:51.675371 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:51.775548 master-0 kubenswrapper[3958]: E0319 11:51:51.775469 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:51.876385 master-0 kubenswrapper[3958]: E0319 11:51:51.876233 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:51.976525 master-0 kubenswrapper[3958]: E0319 11:51:51.976445 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:52.076684 master-0 kubenswrapper[3958]: E0319 11:51:52.076618 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:52.178016 master-0 kubenswrapper[3958]: E0319 11:51:52.177825 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:52.278679 master-0 kubenswrapper[3958]: E0319 11:51:52.278602 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:52.379753 master-0 kubenswrapper[3958]: E0319 11:51:52.379644 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:52.480722 master-0 kubenswrapper[3958]: E0319 11:51:52.480605 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:52.580826 master-0 kubenswrapper[3958]: E0319 11:51:52.580752 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:52.681551 master-0 kubenswrapper[3958]: E0319 11:51:52.681472 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:52.782550 master-0 kubenswrapper[3958]: E0319 11:51:52.782397 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:52.883315 master-0 kubenswrapper[3958]: E0319 11:51:52.883252 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:52.983832 master-0 kubenswrapper[3958]: E0319 11:51:52.983713 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:53.084996 master-0 kubenswrapper[3958]: E0319 11:51:53.084736 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:53.185086 master-0 kubenswrapper[3958]: E0319 11:51:53.184948 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:53.285762 master-0 kubenswrapper[3958]: E0319 11:51:53.285626 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:53.386510 master-0 kubenswrapper[3958]: E0319 11:51:53.386309 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:53.486950 master-0 kubenswrapper[3958]: E0319 11:51:53.486760 3958 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:53.588029 master-0 kubenswrapper[3958]: E0319 11:51:53.587882 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:53.689114 master-0 kubenswrapper[3958]: E0319 11:51:53.688965 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:53.790078 master-0 kubenswrapper[3958]: E0319 11:51:53.789984 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:53.891088 master-0 kubenswrapper[3958]: E0319 11:51:53.890976 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:53.992100 master-0 kubenswrapper[3958]: E0319 11:51:53.991987 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:54.088123 master-0 kubenswrapper[3958]: E0319 11:51:54.087987 3958 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found" Mar 19 11:51:54.103591 master-0 kubenswrapper[3958]: E0319 11:51:54.103534 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:54.204362 master-0 kubenswrapper[3958]: E0319 11:51:54.204165 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:54.305473 master-0 kubenswrapper[3958]: E0319 11:51:54.305295 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:54.406611 master-0 kubenswrapper[3958]: E0319 11:51:54.406487 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:54.507726 master-0 kubenswrapper[3958]: E0319 11:51:54.507649 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:54.608788 master-0 kubenswrapper[3958]: E0319 11:51:54.608658 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:54.709582 master-0 kubenswrapper[3958]: E0319 11:51:54.709499 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:54.810282 master-0 kubenswrapper[3958]: E0319 11:51:54.810198 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:54.911435 master-0 kubenswrapper[3958]: E0319 11:51:54.911239 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:54.938613 master-0 kubenswrapper[3958]: I0319 11:51:54.938552 3958 csr.go:261] certificate signing request csr-t56hr is approved, waiting to be issued Mar 19 11:51:54.948950 master-0 kubenswrapper[3958]: I0319 11:51:54.948847 3958 csr.go:257] certificate signing request csr-t56hr is issued Mar 19 11:51:55.011970 master-0 kubenswrapper[3958]: E0319 11:51:55.011872 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:55.113144 master-0 kubenswrapper[3958]: E0319 11:51:55.113025 3958 kubelet_node_status.go:503] "Error getting the current node from 
lister" err="node \"master-0\" not found" Mar 19 11:51:55.214335 master-0 kubenswrapper[3958]: E0319 11:51:55.214162 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:55.315175 master-0 kubenswrapper[3958]: E0319 11:51:55.315050 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:55.416110 master-0 kubenswrapper[3958]: E0319 11:51:55.415989 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:55.516650 master-0 kubenswrapper[3958]: E0319 11:51:55.516583 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:55.617669 master-0 kubenswrapper[3958]: E0319 11:51:55.617609 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:55.718784 master-0 kubenswrapper[3958]: E0319 11:51:55.718675 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:55.819724 master-0 kubenswrapper[3958]: E0319 11:51:55.819494 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:55.919909 master-0 kubenswrapper[3958]: E0319 11:51:55.919839 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:55.950160 master-0 kubenswrapper[3958]: I0319 11:51:55.950064 3958 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-20 11:43:21 +0000 UTC, rotation deadline is 2026-03-20 08:14:41.593976078 +0000 UTC Mar 19 11:51:55.950160 master-0 kubenswrapper[3958]: I0319 11:51:55.950122 3958 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h22m45.6438591s for next certificate rotation Mar 19 11:51:56.020271 master-0 kubenswrapper[3958]: E0319 11:51:56.020166 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:56.120950 master-0 kubenswrapper[3958]: E0319 11:51:56.120774 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:56.221301 master-0 kubenswrapper[3958]: E0319 11:51:56.221212 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:56.322149 master-0 kubenswrapper[3958]: E0319 11:51:56.322064 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:56.422336 master-0 kubenswrapper[3958]: E0319 11:51:56.422188 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:56.522410 master-0 kubenswrapper[3958]: E0319 11:51:56.522342 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:56.623676 master-0 kubenswrapper[3958]: E0319 11:51:56.623536 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:56.724549 master-0 kubenswrapper[3958]: E0319 11:51:56.724455 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:56.825372 master-0 kubenswrapper[3958]: E0319 
Mar 19 11:51:56.951444 master-0 kubenswrapper[3958]: I0319 11:51:56.951343 3958 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-20 11:43:21 +0000 UTC, rotation deadline is 2026-03-20 09:06:04.114632815 +0000 UTC
Mar 19 11:51:56.951444 master-0 kubenswrapper[3958]: I0319 11:51:56.951413 3958 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 21h14m7.163227334s for next certificate rotation
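The two certificate_manager pairs show the rotation deadline being recomputed: client-go's certificate manager picks the deadline at a randomized fraction of the certificate's lifetime (roughly the 70-90% window in upstream client-go, as far as we know; treat the exact range as an assumption), which is why the same expiry yields 08:14 on one pass and 09:06 on the next. A sketch of that computation; the notBefore value below is assumed, since the log only shows the expiration.

    // rotation_jitter.go - sketch of the jittered rotation deadline behind
    // "rotation deadline is ..., Waiting 20h22m... for next certificate rotation".
    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
    	lifetime := notAfter.Sub(notBefore)
    	// Pick a point uniformly in [70%, 90%) of the lifetime (assumed window).
    	jittered := time.Duration(float64(lifetime) * (0.7 + 0.2*rand.Float64()))
    	return notBefore.Add(jittered)
    }

    func main() {
    	// Assumption: a 24h serving cert issued at 11:43:21; only the
    	// expiration appears in the log.
    	notBefore := time.Date(2026, time.March, 19, 11, 43, 21, 0, time.UTC)
    	notAfter := time.Date(2026, time.March, 20, 11, 43, 21, 0, time.UTC)

    	deadline := rotationDeadline(notBefore, notAfter)
    	fmt.Printf("rotation deadline %s, waiting %s\n", deadline, time.Until(deadline))
    }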
\"master-0\" not found" Mar 19 11:51:58.539841 master-0 kubenswrapper[3958]: E0319 11:51:58.539653 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:58.640145 master-0 kubenswrapper[3958]: E0319 11:51:58.640028 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:58.741249 master-0 kubenswrapper[3958]: E0319 11:51:58.741132 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:58.842394 master-0 kubenswrapper[3958]: E0319 11:51:58.842241 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:58.943086 master-0 kubenswrapper[3958]: E0319 11:51:58.942939 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:59.043853 master-0 kubenswrapper[3958]: E0319 11:51:59.043706 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:59.144038 master-0 kubenswrapper[3958]: E0319 11:51:59.143885 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:59.244748 master-0 kubenswrapper[3958]: E0319 11:51:59.244694 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:59.345851 master-0 kubenswrapper[3958]: E0319 11:51:59.345718 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:59.446946 master-0 kubenswrapper[3958]: E0319 11:51:59.446707 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:59.547020 master-0 kubenswrapper[3958]: E0319 11:51:59.546933 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:59.648055 master-0 kubenswrapper[3958]: E0319 11:51:59.647986 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:59.748998 master-0 kubenswrapper[3958]: E0319 11:51:59.748905 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:59.849449 master-0 kubenswrapper[3958]: E0319 11:51:59.849335 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:51:59.950115 master-0 kubenswrapper[3958]: E0319 11:51:59.950042 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:00.050866 master-0 kubenswrapper[3958]: E0319 11:52:00.050683 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:00.082274 master-0 kubenswrapper[3958]: E0319 11:52:00.082180 3958 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 19 11:52:00.151457 master-0 kubenswrapper[3958]: E0319 11:52:00.151366 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:00.252432 master-0 kubenswrapper[3958]: E0319 11:52:00.252318 3958 kubelet_node_status.go:503] "Error getting the current node 
from lister" err="node \"master-0\" not found" Mar 19 11:52:00.352909 master-0 kubenswrapper[3958]: E0319 11:52:00.352751 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:00.453477 master-0 kubenswrapper[3958]: E0319 11:52:00.453401 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:00.553782 master-0 kubenswrapper[3958]: E0319 11:52:00.553713 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:00.654551 master-0 kubenswrapper[3958]: E0319 11:52:00.654370 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:00.755334 master-0 kubenswrapper[3958]: E0319 11:52:00.755274 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:00.856150 master-0 kubenswrapper[3958]: E0319 11:52:00.856089 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:00.957357 master-0 kubenswrapper[3958]: E0319 11:52:00.957240 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:01.057540 master-0 kubenswrapper[3958]: E0319 11:52:01.057468 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:01.121339 master-0 kubenswrapper[3958]: I0319 11:52:01.121251 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:52:01.122792 master-0 kubenswrapper[3958]: I0319 11:52:01.122736 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:52:01.122792 master-0 kubenswrapper[3958]: I0319 11:52:01.122822 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:52:01.122792 master-0 kubenswrapper[3958]: I0319 11:52:01.122839 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:52:01.123291 master-0 kubenswrapper[3958]: I0319 11:52:01.123258 3958 scope.go:117] "RemoveContainer" containerID="6b554ade444a2218312faf004411e7ca5ff136f234fd5270edc3b29df56f6e17" Mar 19 11:52:01.158504 master-0 kubenswrapper[3958]: E0319 11:52:01.158391 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:01.259444 master-0 kubenswrapper[3958]: E0319 11:52:01.259391 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:01.360251 master-0 kubenswrapper[3958]: E0319 11:52:01.360179 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:01.460571 master-0 kubenswrapper[3958]: E0319 11:52:01.460486 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:01.561710 master-0 kubenswrapper[3958]: E0319 11:52:01.561559 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:01.615742 master-0 kubenswrapper[3958]: I0319 11:52:01.615665 3958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log" Mar 19 11:52:01.616310 master-0 kubenswrapper[3958]: I0319 11:52:01.616256 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"20d447d60e6c323ac2a99fb9005538b9f698220ad800f2a9d7a82ebdd391df17"} Mar 19 11:52:01.616438 master-0 kubenswrapper[3958]: I0319 11:52:01.616412 3958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 11:52:01.617664 master-0 kubenswrapper[3958]: I0319 11:52:01.617632 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 11:52:01.617880 master-0 kubenswrapper[3958]: I0319 11:52:01.617677 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 11:52:01.617880 master-0 kubenswrapper[3958]: I0319 11:52:01.617692 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 11:52:01.662881 master-0 kubenswrapper[3958]: E0319 11:52:01.662793 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:01.763791 master-0 kubenswrapper[3958]: E0319 11:52:01.763675 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:01.864571 master-0 kubenswrapper[3958]: E0319 11:52:01.864425 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:01.965327 master-0 kubenswrapper[3958]: E0319 11:52:01.965236 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:02.066498 master-0 kubenswrapper[3958]: E0319 11:52:02.066420 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:02.167542 master-0 kubenswrapper[3958]: E0319 11:52:02.167398 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:02.268222 master-0 kubenswrapper[3958]: E0319 11:52:02.268141 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:02.369416 master-0 kubenswrapper[3958]: E0319 11:52:02.369322 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:02.470238 master-0 kubenswrapper[3958]: E0319 11:52:02.470164 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:02.571298 master-0 kubenswrapper[3958]: E0319 11:52:02.571205 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:02.671757 master-0 kubenswrapper[3958]: E0319 11:52:02.671678 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:02.772668 master-0 kubenswrapper[3958]: E0319 11:52:02.772504 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:02.873496 master-0 kubenswrapper[3958]: E0319 
Mar 19 11:52:04.165740 master-0 kubenswrapper[3958]: E0319 11:52:04.165593 3958 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found"
Mar 19 11:52:10.082448 master-0 kubenswrapper[3958]: E0319 11:52:10.082354 3958 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
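Note the cadence of the eviction manager record: 11:51:50, 11:52:00, 11:52:10, i.e. a fixed ~10 s housekeeping tick that keeps failing while the Node is unknown. A sketch of such a fixed-interval poll using apimachinery's wait helper; the 10 s interval is inferred from the timestamps, not read from kubelet configuration.

    // eviction_poll.go - fixed-interval polling like the ~10s cadence of
    // "Eviction manager: failed to get summary stats" above.
    package main

    import (
    	"errors"
    	"fmt"
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    )

    func summaryStats() error {
    	// Stand-in for the stats provider call that needs the Node object.
    	return errors.New(`failed to get node info: node "master-0" not found`)
    }

    func main() {
    	stop := make(chan struct{})
    	go func() {
    		time.Sleep(35 * time.Second) // let a few ticks fire, then stop
    		close(stop)
    	}()

    	// wait.Until runs the function immediately and then every period
    	// until the stop channel closes.
    	wait.Until(func() {
    		if err := summaryStats(); err != nil {
    			fmt.Println("Eviction manager: failed to get summary stats:", err)
    		}
    	}, 10*time.Second, stop)
    }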
Mar 19 11:52:14.396939 master-0 kubenswrapper[3958]: E0319 11:52:14.396842 3958 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found"
\"master-0\" not found" Mar 19 11:52:15.968469 master-0 kubenswrapper[3958]: E0319 11:52:15.968357 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:16.069589 master-0 kubenswrapper[3958]: E0319 11:52:16.069392 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:16.170325 master-0 kubenswrapper[3958]: E0319 11:52:16.170189 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:16.271231 master-0 kubenswrapper[3958]: E0319 11:52:16.271100 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:16.372440 master-0 kubenswrapper[3958]: E0319 11:52:16.372227 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:16.472998 master-0 kubenswrapper[3958]: E0319 11:52:16.472884 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:16.573924 master-0 kubenswrapper[3958]: E0319 11:52:16.573851 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:16.674971 master-0 kubenswrapper[3958]: E0319 11:52:16.674830 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:16.775143 master-0 kubenswrapper[3958]: E0319 11:52:16.775050 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:16.876160 master-0 kubenswrapper[3958]: E0319 11:52:16.876058 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:16.976417 master-0 kubenswrapper[3958]: E0319 11:52:16.976337 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:17.076595 master-0 kubenswrapper[3958]: E0319 11:52:17.076483 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:17.176775 master-0 kubenswrapper[3958]: E0319 11:52:17.176674 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:17.277730 master-0 kubenswrapper[3958]: E0319 11:52:17.277563 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:17.378600 master-0 kubenswrapper[3958]: E0319 11:52:17.378486 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:17.478822 master-0 kubenswrapper[3958]: E0319 11:52:17.478713 3958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:52:17.536866 master-0 kubenswrapper[3958]: I0319 11:52:17.536614 3958 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 19 11:52:17.997477 master-0 kubenswrapper[3958]: I0319 11:52:17.997412 3958 apiserver.go:52] "Watching apiserver" Mar 19 11:52:18.003103 master-0 kubenswrapper[3958]: I0319 11:52:18.003046 3958 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 19 11:52:18.003416 master-0 kubenswrapper[3958]: I0319 11:52:18.003301 
Mar 19 11:52:18.003416 master-0 kubenswrapper[3958]: I0319 11:52:18.003301 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/assisted-installer-controller-b6qm2","openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v","openshift-network-operator/network-operator-7bd846bfc4-nb8bk"]
Mar 19 11:52:18.004981 master-0 kubenswrapper[3958]: I0319 11:52:18.003991 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-b6qm2"
Mar 19 11:52:18.004981 master-0 kubenswrapper[3958]: I0319 11:52:18.004140 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bd846bfc4-nb8bk"
Mar 19 11:52:18.004981 master-0 kubenswrapper[3958]: I0319 11:52:18.004218 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v"
Mar 19 11:52:18.007588 master-0 kubenswrapper[3958]: I0319 11:52:18.007507 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config"
Mar 19 11:52:18.008379 master-0 kubenswrapper[3958]: I0319 11:52:18.008341 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt"
Mar 19 11:52:18.008504 master-0 kubenswrapper[3958]: I0319 11:52:18.008452 3958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 19 11:52:18.008867 master-0 kubenswrapper[3958]: I0319 11:52:18.008830 3958 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret"
Mar 19 11:52:18.008970 master-0 kubenswrapper[3958]: I0319 11:52:18.008903 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt"
Mar 19 11:52:18.009204 master-0 kubenswrapper[3958]: I0319 11:52:18.009168 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 19 11:52:18.009412 master-0 kubenswrapper[3958]: I0319 11:52:18.009371 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 19 11:52:18.009508 master-0 kubenswrapper[3958]: I0319 11:52:18.009492 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 19 11:52:18.010037 master-0 kubenswrapper[3958]: I0319 11:52:18.009997 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 19 11:52:18.010771 master-0 kubenswrapper[3958]: I0319 11:52:18.010634 3958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 19 11:52:18.075299 master-0 kubenswrapper[3958]: I0319 11:52:18.075232 3958 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 19 11:52:18.163223 master-0 kubenswrapper[3958]: I0319 11:52:18.163108 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/a9819a56-abb1-485c-b424-5c62e30d5afc-sno-bootstrap-files\") pod \"assisted-installer-controller-b6qm2\" (UID: \"a9819a56-abb1-485c-b424-5c62e30d5afc\") " pod="assisted-installer/assisted-installer-controller-b6qm2"
Mar 19 11:52:18.163223 master-0 kubenswrapper[3958]: I0319 11:52:18.163186 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/85912908-c447-4868-871b-82c5eadbfdbe-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v"
Mar 19 11:52:18.163223 master-0 kubenswrapper[3958]: I0319 11:52:18.163228 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85912908-c447-4868-871b-82c5eadbfdbe-kube-api-access\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v"
Mar 19 11:52:18.163672 master-0 kubenswrapper[3958]: I0319 11:52:18.163262 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/284768b8-9d70-4cf7-bace-8adc6b587186-metrics-tls\") pod \"network-operator-7bd846bfc4-nb8bk\" (UID: \"284768b8-9d70-4cf7-bace-8adc6b587186\") " pod="openshift-network-operator/network-operator-7bd846bfc4-nb8bk"
Mar 19 11:52:18.163672 master-0 kubenswrapper[3958]: I0319 11:52:18.163304 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85912908-c447-4868-871b-82c5eadbfdbe-service-ca\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v"
Mar 19 11:52:18.163672 master-0 kubenswrapper[3958]: I0319 11:52:18.163336 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v"
Mar 19 11:52:18.163672 master-0 kubenswrapper[3958]: I0319 11:52:18.163370 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/a9819a56-abb1-485c-b424-5c62e30d5afc-host-resolv-conf\") pod \"assisted-installer-controller-b6qm2\" (UID: \"a9819a56-abb1-485c-b424-5c62e30d5afc\") " pod="assisted-installer/assisted-installer-controller-b6qm2"
Mar 19 11:52:18.163672 master-0 kubenswrapper[3958]: I0319 11:52:18.163406 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/85912908-c447-4868-871b-82c5eadbfdbe-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v"
Mar 19 11:52:18.163672 master-0 kubenswrapper[3958]: I0319 11:52:18.163443 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/a9819a56-abb1-485c-b424-5c62e30d5afc-host-ca-bundle\") pod \"assisted-installer-controller-b6qm2\" (UID: \"a9819a56-abb1-485c-b424-5c62e30d5afc\") " pod="assisted-installer/assisted-installer-controller-b6qm2"
Mar 19 11:52:18.163672 master-0 kubenswrapper[3958]: I0319 11:52:18.163474 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/284768b8-9d70-4cf7-bace-8adc6b587186-host-etc-kube\") pod \"network-operator-7bd846bfc4-nb8bk\" (UID: \"284768b8-9d70-4cf7-bace-8adc6b587186\") " pod="openshift-network-operator/network-operator-7bd846bfc4-nb8bk"
Mar 19 11:52:18.163672 master-0 kubenswrapper[3958]: I0319 11:52:18.163508 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/a9819a56-abb1-485c-b424-5c62e30d5afc-host-var-run-resolv-conf\") pod \"assisted-installer-controller-b6qm2\" (UID: \"a9819a56-abb1-485c-b424-5c62e30d5afc\") " pod="assisted-installer/assisted-installer-controller-b6qm2"
Mar 19 11:52:18.163672 master-0 kubenswrapper[3958]: I0319 11:52:18.163545 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg4vv\" (UniqueName: \"kubernetes.io/projected/a9819a56-abb1-485c-b424-5c62e30d5afc-kube-api-access-zg4vv\") pod \"assisted-installer-controller-b6qm2\" (UID: \"a9819a56-abb1-485c-b424-5c62e30d5afc\") " pod="assisted-installer/assisted-installer-controller-b6qm2"
Mar 19 11:52:18.164271 master-0 kubenswrapper[3958]: I0319 11:52:18.163666 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p6vn\" (UniqueName: \"kubernetes.io/projected/284768b8-9d70-4cf7-bace-8adc6b587186-kube-api-access-8p6vn\") pod \"network-operator-7bd846bfc4-nb8bk\" (UID: \"284768b8-9d70-4cf7-bace-8adc6b587186\") " pod="openshift-network-operator/network-operator-7bd846bfc4-nb8bk"
Mar 19 11:52:18.264506 master-0 kubenswrapper[3958]: I0319 11:52:18.264258 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v"
Mar 19 11:52:18.264506 master-0 kubenswrapper[3958]: I0319 11:52:18.264344 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/a9819a56-abb1-485c-b424-5c62e30d5afc-host-resolv-conf\") pod \"assisted-installer-controller-b6qm2\" (UID: \"a9819a56-abb1-485c-b424-5c62e30d5afc\") " pod="assisted-installer/assisted-installer-controller-b6qm2"
Mar 19 11:52:18.264506 master-0 kubenswrapper[3958]: I0319 11:52:18.264385 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/85912908-c447-4868-871b-82c5eadbfdbe-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v"
Mar 19 11:52:18.264506 master-0 kubenswrapper[3958]: I0319 11:52:18.264488 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/85912908-c447-4868-871b-82c5eadbfdbe-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v"
Mar 19 11:52:18.265192 master-0 kubenswrapper[3958]: I0319 11:52:18.264545 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/a9819a56-abb1-485c-b424-5c62e30d5afc-host-resolv-conf\") pod \"assisted-installer-controller-b6qm2\" (UID: \"a9819a56-abb1-485c-b424-5c62e30d5afc\") " pod="assisted-installer/assisted-installer-controller-b6qm2"
Mar 19 11:52:18.265192 master-0 kubenswrapper[3958]: E0319 11:52:18.264651 3958 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 19 11:52:18.265192 master-0 kubenswrapper[3958]: I0319 11:52:18.264716 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/a9819a56-abb1-485c-b424-5c62e30d5afc-host-ca-bundle\") pod \"assisted-installer-controller-b6qm2\" (UID: \"a9819a56-abb1-485c-b424-5c62e30d5afc\") " pod="assisted-installer/assisted-installer-controller-b6qm2"
Mar 19 11:52:18.265192 master-0 kubenswrapper[3958]: E0319 11:52:18.264780 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert podName:85912908-c447-4868-871b-82c5eadbfdbe nodeName:}" failed. No retries permitted until 2026-03-19 11:52:18.764718546 +0000 UTC m=+69.438439768 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert") pod "cluster-version-operator-56d8475767-gjj5v" (UID: "85912908-c447-4868-871b-82c5eadbfdbe") : secret "cluster-version-operator-serving-cert" not found
Mar 19 11:52:18.265192 master-0 kubenswrapper[3958]: I0319 11:52:18.264855 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/284768b8-9d70-4cf7-bace-8adc6b587186-host-etc-kube\") pod \"network-operator-7bd846bfc4-nb8bk\" (UID: \"284768b8-9d70-4cf7-bace-8adc6b587186\") " pod="openshift-network-operator/network-operator-7bd846bfc4-nb8bk"
Mar 19 11:52:18.265192 master-0 kubenswrapper[3958]: I0319 11:52:18.264905 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/a9819a56-abb1-485c-b424-5c62e30d5afc-host-ca-bundle\") pod \"assisted-installer-controller-b6qm2\" (UID: \"a9819a56-abb1-485c-b424-5c62e30d5afc\") " pod="assisted-installer/assisted-installer-controller-b6qm2"
Mar 19 11:52:18.265192 master-0 kubenswrapper[3958]: I0319 11:52:18.264911 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/a9819a56-abb1-485c-b424-5c62e30d5afc-host-var-run-resolv-conf\") pod \"assisted-installer-controller-b6qm2\" (UID: \"a9819a56-abb1-485c-b424-5c62e30d5afc\") " pod="assisted-installer/assisted-installer-controller-b6qm2"
Mar 19 11:52:18.265192 master-0 kubenswrapper[3958]: I0319 11:52:18.264961 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/a9819a56-abb1-485c-b424-5c62e30d5afc-host-var-run-resolv-conf\") pod \"assisted-installer-controller-b6qm2\" (UID: \"a9819a56-abb1-485c-b424-5c62e30d5afc\") " pod="assisted-installer/assisted-installer-controller-b6qm2"
Mar 19 11:52:18.265192 master-0 kubenswrapper[3958]: I0319 11:52:18.264987 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg4vv\" (UniqueName: \"kubernetes.io/projected/a9819a56-abb1-485c-b424-5c62e30d5afc-kube-api-access-zg4vv\") pod \"assisted-installer-controller-b6qm2\" (UID: \"a9819a56-abb1-485c-b424-5c62e30d5afc\") " pod="assisted-installer/assisted-installer-controller-b6qm2"
Mar 19 11:52:18.265192 master-0 kubenswrapper[3958]: I0319 11:52:18.265043 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p6vn\" (UniqueName: \"kubernetes.io/projected/284768b8-9d70-4cf7-bace-8adc6b587186-kube-api-access-8p6vn\") pod \"network-operator-7bd846bfc4-nb8bk\" (UID: \"284768b8-9d70-4cf7-bace-8adc6b587186\") " pod="openshift-network-operator/network-operator-7bd846bfc4-nb8bk"
Mar 19 11:52:18.265192 master-0 kubenswrapper[3958]: I0319 11:52:18.265098 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/a9819a56-abb1-485c-b424-5c62e30d5afc-sno-bootstrap-files\") pod \"assisted-installer-controller-b6qm2\" (UID: \"a9819a56-abb1-485c-b424-5c62e30d5afc\") " pod="assisted-installer/assisted-installer-controller-b6qm2"
Mar 19 11:52:18.266666 master-0 kubenswrapper[3958]: I0319 11:52:18.265649 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/284768b8-9d70-4cf7-bace-8adc6b587186-host-etc-kube\") pod \"network-operator-7bd846bfc4-nb8bk\" (UID: \"284768b8-9d70-4cf7-bace-8adc6b587186\") " pod="openshift-network-operator/network-operator-7bd846bfc4-nb8bk"
Mar 19 11:52:18.266666 master-0 kubenswrapper[3958]: I0319 11:52:18.265659 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/a9819a56-abb1-485c-b424-5c62e30d5afc-sno-bootstrap-files\") pod \"assisted-installer-controller-b6qm2\" (UID: \"a9819a56-abb1-485c-b424-5c62e30d5afc\") " pod="assisted-installer/assisted-installer-controller-b6qm2"
Mar 19 11:52:18.266666 master-0 kubenswrapper[3958]: I0319 11:52:18.265713 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/85912908-c447-4868-871b-82c5eadbfdbe-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v"
Mar 19 11:52:18.266666 master-0 kubenswrapper[3958]: I0319 11:52:18.265774 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85912908-c447-4868-871b-82c5eadbfdbe-kube-api-access\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v"
Mar 19 11:52:18.266666 master-0 kubenswrapper[3958]: I0319 11:52:18.265786 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/85912908-c447-4868-871b-82c5eadbfdbe-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v"
Mar 19 11:52:18.266666 master-0 kubenswrapper[3958]: I0319 11:52:18.265842 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/284768b8-9d70-4cf7-bace-8adc6b587186-metrics-tls\") pod \"network-operator-7bd846bfc4-nb8bk\" (UID: \"284768b8-9d70-4cf7-bace-8adc6b587186\") " pod="openshift-network-operator/network-operator-7bd846bfc4-nb8bk"
Mar 19 11:52:18.266666 master-0 kubenswrapper[3958]: I0319 11:52:18.266042 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85912908-c447-4868-871b-82c5eadbfdbe-service-ca\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v"
Mar 19 11:52:18.267125 master-0 kubenswrapper[3958]: I0319 11:52:18.266789 3958 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Mar 19 11:52:18.268540 master-0 kubenswrapper[3958]: I0319 11:52:18.268443 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85912908-c447-4868-871b-82c5eadbfdbe-service-ca\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v"
Mar 19 11:52:18.275817 master-0 kubenswrapper[3958]: I0319 11:52:18.275700 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/284768b8-9d70-4cf7-bace-8adc6b587186-metrics-tls\") pod \"network-operator-7bd846bfc4-nb8bk\" (UID: \"284768b8-9d70-4cf7-bace-8adc6b587186\") " pod="openshift-network-operator/network-operator-7bd846bfc4-nb8bk"
Mar 19 11:52:18.287950 master-0 kubenswrapper[3958]: I0319 11:52:18.287859 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85912908-c447-4868-871b-82c5eadbfdbe-kube-api-access\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v"
Mar 19 11:52:18.295978 master-0 kubenswrapper[3958]: I0319 11:52:18.295890 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg4vv\" (UniqueName: \"kubernetes.io/projected/a9819a56-abb1-485c-b424-5c62e30d5afc-kube-api-access-zg4vv\") pod \"assisted-installer-controller-b6qm2\" (UID: \"a9819a56-abb1-485c-b424-5c62e30d5afc\") " pod="assisted-installer/assisted-installer-controller-b6qm2"
Mar 19 11:52:18.296522 master-0 kubenswrapper[3958]: I0319 11:52:18.296470 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p6vn\" (UniqueName: \"kubernetes.io/projected/284768b8-9d70-4cf7-bace-8adc6b587186-kube-api-access-8p6vn\") pod \"network-operator-7bd846bfc4-nb8bk\" (UID: \"284768b8-9d70-4cf7-bace-8adc6b587186\") " pod="openshift-network-operator/network-operator-7bd846bfc4-nb8bk"
Mar 19 11:52:18.342857 master-0 kubenswrapper[3958]: I0319 11:52:18.342748 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-b6qm2"
Mar 19 11:52:18.358590 master-0 kubenswrapper[3958]: I0319 11:52:18.358458 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bd846bfc4-nb8bk"
Mar 19 11:52:18.362331 master-0 kubenswrapper[3958]: W0319 11:52:18.362262 3958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9819a56_abb1_485c_b424_5c62e30d5afc.slice/crio-9d2d73d5870e62554bb684d309080c493974123e3d07fe8faf016c90bfd3fdd4 WatchSource:0}: Error finding container 9d2d73d5870e62554bb684d309080c493974123e3d07fe8faf016c90bfd3fdd4: Status 404 returned error can't find the container with id 9d2d73d5870e62554bb684d309080c493974123e3d07fe8faf016c90bfd3fdd4
Mar 19 11:52:18.377028 master-0 kubenswrapper[3958]: W0319 11:52:18.376984 3958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod284768b8_9d70_4cf7_bace_8adc6b587186.slice/crio-de69f46fe324ea455cfa701ce77df2ff17c8f9f38f189dbc84ded004836d5af0 WatchSource:0}: Error finding container de69f46fe324ea455cfa701ce77df2ff17c8f9f38f189dbc84ded004836d5af0: Status 404 returned error can't find the container with id de69f46fe324ea455cfa701ce77df2ff17c8f9f38f189dbc84ded004836d5af0
Mar 19 11:52:18.657870 master-0 kubenswrapper[3958]: I0319 11:52:18.657597 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-nb8bk" event={"ID":"284768b8-9d70-4cf7-bace-8adc6b587186","Type":"ContainerStarted","Data":"de69f46fe324ea455cfa701ce77df2ff17c8f9f38f189dbc84ded004836d5af0"}
Mar 19 11:52:18.659258 master-0 kubenswrapper[3958]: I0319 11:52:18.658780 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-b6qm2" event={"ID":"a9819a56-abb1-485c-b424-5c62e30d5afc","Type":"ContainerStarted","Data":"9d2d73d5870e62554bb684d309080c493974123e3d07fe8faf016c90bfd3fdd4"}
Mar 19 11:52:18.771056 master-0 kubenswrapper[3958]: I0319 11:52:18.770949 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v"
Mar 19 11:52:18.771328 master-0 kubenswrapper[3958]: E0319 11:52:18.771116 3958 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 19 11:52:18.771328 master-0 kubenswrapper[3958]: E0319 11:52:18.771214 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert podName:85912908-c447-4868-871b-82c5eadbfdbe nodeName:}" failed. No retries permitted until 2026-03-19 11:52:19.771191303 +0000 UTC m=+70.444912475 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert") pod "cluster-version-operator-56d8475767-gjj5v" (UID: "85912908-c447-4868-871b-82c5eadbfdbe") : secret "cluster-version-operator-serving-cert" not found
Mar 19 11:52:19.778650 master-0 kubenswrapper[3958]: I0319 11:52:19.778577 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v"
Mar 19 11:52:19.779202 master-0 kubenswrapper[3958]: E0319 11:52:19.778714 3958 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 19 11:52:19.779202 master-0 kubenswrapper[3958]: E0319 11:52:19.778771 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert podName:85912908-c447-4868-871b-82c5eadbfdbe nodeName:}" failed. No retries permitted until 2026-03-19 11:52:21.778756532 +0000 UTC m=+72.452477714 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert") pod "cluster-version-operator-56d8475767-gjj5v" (UID: "85912908-c447-4868-871b-82c5eadbfdbe") : secret "cluster-version-operator-serving-cert" not found
Mar 19 11:52:21.794384 master-0 kubenswrapper[3958]: I0319 11:52:21.794333 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v"
Mar 19 11:52:21.794894 master-0 kubenswrapper[3958]: E0319 11:52:21.794495 3958 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 19 11:52:21.794894 master-0 kubenswrapper[3958]: E0319 11:52:21.794549 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert podName:85912908-c447-4868-871b-82c5eadbfdbe nodeName:}" failed. No retries permitted until 2026-03-19 11:52:25.794529258 +0000 UTC m=+76.468250440 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert") pod "cluster-version-operator-56d8475767-gjj5v" (UID: "85912908-c447-4868-871b-82c5eadbfdbe") : secret "cluster-version-operator-serving-cert" not found
Mar 19 11:52:23.671069 master-0 kubenswrapper[3958]: I0319 11:52:23.670980 3958 generic.go:334] "Generic (PLEG): container finished" podID="a9819a56-abb1-485c-b424-5c62e30d5afc" containerID="61889dd9a935bc86ee38882d43925886388331ab38ba3004e85cc49cd1f39072" exitCode=0
Mar 19 11:52:23.672232 master-0 kubenswrapper[3958]: I0319 11:52:23.671071 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-b6qm2" event={"ID":"a9819a56-abb1-485c-b424-5c62e30d5afc","Type":"ContainerDied","Data":"61889dd9a935bc86ee38882d43925886388331ab38ba3004e85cc49cd1f39072"}
Mar 19 11:52:24.734020 master-0 kubenswrapper[3958]: I0319 11:52:24.733954 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-nb8bk" event={"ID":"284768b8-9d70-4cf7-bace-8adc6b587186","Type":"ContainerStarted","Data":"4a5b36532ee146a92740f77707f5b0a6a8c33bb89c0054e1d9177bfea2033a2d"}
Mar 19 11:52:24.760162 master-0 kubenswrapper[3958]: I0319 11:52:24.760114 3958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-b6qm2"
Mar 19 11:52:24.775992 master-0 kubenswrapper[3958]: I0319 11:52:24.775919 3958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/network-operator-7bd846bfc4-nb8bk" podStartSLOduration=33.658207588 podStartE2EDuration="38.775899735s" podCreationTimestamp="2026-03-19 11:51:46 +0000 UTC" firstStartedPulling="2026-03-19 11:52:18.378834667 +0000 UTC m=+69.052555849" lastFinishedPulling="2026-03-19 11:52:23.496526824 +0000 UTC m=+74.170247996" observedRunningTime="2026-03-19 11:52:24.750541999 +0000 UTC m=+75.424263191" watchObservedRunningTime="2026-03-19 11:52:24.775899735 +0000 UTC m=+75.449620927"
Mar 19 11:52:24.930897 master-0 kubenswrapper[3958]: I0319 11:52:24.930775 3958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/a9819a56-abb1-485c-b424-5c62e30d5afc-sno-bootstrap-files\") pod \"a9819a56-abb1-485c-b424-5c62e30d5afc\" (UID: \"a9819a56-abb1-485c-b424-5c62e30d5afc\") "
Mar 19 11:52:24.930897 master-0 kubenswrapper[3958]: I0319 11:52:24.930887 3958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/a9819a56-abb1-485c-b424-5c62e30d5afc-host-var-run-resolv-conf\") pod \"a9819a56-abb1-485c-b424-5c62e30d5afc\" (UID: \"a9819a56-abb1-485c-b424-5c62e30d5afc\") "
Mar 19 11:52:24.931192 master-0 kubenswrapper[3958]: I0319 11:52:24.930923 3958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/a9819a56-abb1-485c-b424-5c62e30d5afc-host-resolv-conf\") pod \"a9819a56-abb1-485c-b424-5c62e30d5afc\" (UID: \"a9819a56-abb1-485c-b424-5c62e30d5afc\") "
Mar 19 11:52:24.931192 master-0 kubenswrapper[3958]: I0319 11:52:24.930954 3958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/a9819a56-abb1-485c-b424-5c62e30d5afc-host-ca-bundle\") pod \"a9819a56-abb1-485c-b424-5c62e30d5afc\" (UID: \"a9819a56-abb1-485c-b424-5c62e30d5afc\") "
Mar 19 11:52:24.931192 master-0 kubenswrapper[3958]: I0319 11:52:24.930929 3958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9819a56-abb1-485c-b424-5c62e30d5afc-sno-bootstrap-files" (OuterVolumeSpecName: "sno-bootstrap-files") pod "a9819a56-abb1-485c-b424-5c62e30d5afc" (UID: "a9819a56-abb1-485c-b424-5c62e30d5afc"). InnerVolumeSpecName "sno-bootstrap-files". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 11:52:24.931192 master-0 kubenswrapper[3958]: I0319 11:52:24.930977 3958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9819a56-abb1-485c-b424-5c62e30d5afc-host-var-run-resolv-conf" (OuterVolumeSpecName: "host-var-run-resolv-conf") pod "a9819a56-abb1-485c-b424-5c62e30d5afc" (UID: "a9819a56-abb1-485c-b424-5c62e30d5afc"). InnerVolumeSpecName "host-var-run-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 11:52:24.931192 master-0 kubenswrapper[3958]: I0319 11:52:24.930999 3958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg4vv\" (UniqueName: \"kubernetes.io/projected/a9819a56-abb1-485c-b424-5c62e30d5afc-kube-api-access-zg4vv\") pod \"a9819a56-abb1-485c-b424-5c62e30d5afc\" (UID: \"a9819a56-abb1-485c-b424-5c62e30d5afc\") "
Mar 19 11:52:24.931192 master-0 kubenswrapper[3958]: I0319 11:52:24.931023 3958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9819a56-abb1-485c-b424-5c62e30d5afc-host-ca-bundle" (OuterVolumeSpecName: "host-ca-bundle") pod "a9819a56-abb1-485c-b424-5c62e30d5afc" (UID: "a9819a56-abb1-485c-b424-5c62e30d5afc"). InnerVolumeSpecName "host-ca-bundle". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 11:52:24.931192 master-0 kubenswrapper[3958]: I0319 11:52:24.931044 3958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9819a56-abb1-485c-b424-5c62e30d5afc-host-resolv-conf" (OuterVolumeSpecName: "host-resolv-conf") pod "a9819a56-abb1-485c-b424-5c62e30d5afc" (UID: "a9819a56-abb1-485c-b424-5c62e30d5afc"). InnerVolumeSpecName "host-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue ""
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:52:24.931192 master-0 kubenswrapper[3958]: I0319 11:52:24.931174 3958 reconciler_common.go:293] "Volume detached for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/a9819a56-abb1-485c-b424-5c62e30d5afc-sno-bootstrap-files\") on node \"master-0\" DevicePath \"\"" Mar 19 11:52:24.931192 master-0 kubenswrapper[3958]: I0319 11:52:24.931197 3958 reconciler_common.go:293] "Volume detached for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/a9819a56-abb1-485c-b424-5c62e30d5afc-host-var-run-resolv-conf\") on node \"master-0\" DevicePath \"\"" Mar 19 11:52:24.931534 master-0 kubenswrapper[3958]: I0319 11:52:24.931216 3958 reconciler_common.go:293] "Volume detached for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/a9819a56-abb1-485c-b424-5c62e30d5afc-host-resolv-conf\") on node \"master-0\" DevicePath \"\"" Mar 19 11:52:24.931534 master-0 kubenswrapper[3958]: I0319 11:52:24.931232 3958 reconciler_common.go:293] "Volume detached for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/a9819a56-abb1-485c-b424-5c62e30d5afc-host-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 11:52:24.935954 master-0 kubenswrapper[3958]: I0319 11:52:24.935906 3958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9819a56-abb1-485c-b424-5c62e30d5afc-kube-api-access-zg4vv" (OuterVolumeSpecName: "kube-api-access-zg4vv") pod "a9819a56-abb1-485c-b424-5c62e30d5afc" (UID: "a9819a56-abb1-485c-b424-5c62e30d5afc"). InnerVolumeSpecName "kube-api-access-zg4vv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 11:52:25.032006 master-0 kubenswrapper[3958]: I0319 11:52:25.031859 3958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg4vv\" (UniqueName: \"kubernetes.io/projected/a9819a56-abb1-485c-b424-5c62e30d5afc-kube-api-access-zg4vv\") on node \"master-0\" DevicePath \"\"" Mar 19 11:52:25.738572 master-0 kubenswrapper[3958]: I0319 11:52:25.738507 3958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-b6qm2" Mar 19 11:52:25.738572 master-0 kubenswrapper[3958]: I0319 11:52:25.738499 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-b6qm2" event={"ID":"a9819a56-abb1-485c-b424-5c62e30d5afc","Type":"ContainerDied","Data":"9d2d73d5870e62554bb684d309080c493974123e3d07fe8faf016c90bfd3fdd4"} Mar 19 11:52:25.738572 master-0 kubenswrapper[3958]: I0319 11:52:25.738561 3958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d2d73d5870e62554bb684d309080c493974123e3d07fe8faf016c90bfd3fdd4" Mar 19 11:52:25.836715 master-0 kubenswrapper[3958]: I0319 11:52:25.836641 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v" Mar 19 11:52:25.836953 master-0 kubenswrapper[3958]: E0319 11:52:25.836824 3958 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 19 11:52:25.836953 master-0 kubenswrapper[3958]: E0319 11:52:25.836891 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert podName:85912908-c447-4868-871b-82c5eadbfdbe nodeName:}" failed. No retries permitted until 2026-03-19 11:52:33.8368702 +0000 UTC m=+84.510591382 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert") pod "cluster-version-operator-56d8475767-gjj5v" (UID: "85912908-c447-4868-871b-82c5eadbfdbe") : secret "cluster-version-operator-serving-cert" not found Mar 19 11:52:26.417384 master-0 kubenswrapper[3958]: I0319 11:52:26.417276 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/mtu-prober-8ndtd"] Mar 19 11:52:26.417671 master-0 kubenswrapper[3958]: E0319 11:52:26.417422 3958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9819a56-abb1-485c-b424-5c62e30d5afc" containerName="assisted-installer-controller" Mar 19 11:52:26.417671 master-0 kubenswrapper[3958]: I0319 11:52:26.417453 3958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9819a56-abb1-485c-b424-5c62e30d5afc" containerName="assisted-installer-controller" Mar 19 11:52:26.417671 master-0 kubenswrapper[3958]: I0319 11:52:26.417510 3958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9819a56-abb1-485c-b424-5c62e30d5afc" containerName="assisted-installer-controller" Mar 19 11:52:26.418012 master-0 kubenswrapper[3958]: I0319 11:52:26.417907 3958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-8ndtd" Mar 19 11:52:26.541546 master-0 kubenswrapper[3958]: I0319 11:52:26.541467 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdpvv\" (UniqueName: \"kubernetes.io/projected/118dd8fa-f11f-4dda-96d7-f207e175b4da-kube-api-access-sdpvv\") pod \"mtu-prober-8ndtd\" (UID: \"118dd8fa-f11f-4dda-96d7-f207e175b4da\") " pod="openshift-network-operator/mtu-prober-8ndtd" Mar 19 11:52:26.642328 master-0 kubenswrapper[3958]: I0319 11:52:26.642250 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdpvv\" (UniqueName: \"kubernetes.io/projected/118dd8fa-f11f-4dda-96d7-f207e175b4da-kube-api-access-sdpvv\") pod \"mtu-prober-8ndtd\" (UID: \"118dd8fa-f11f-4dda-96d7-f207e175b4da\") " pod="openshift-network-operator/mtu-prober-8ndtd" Mar 19 11:52:26.659216 master-0 kubenswrapper[3958]: I0319 11:52:26.659141 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdpvv\" (UniqueName: \"kubernetes.io/projected/118dd8fa-f11f-4dda-96d7-f207e175b4da-kube-api-access-sdpvv\") pod \"mtu-prober-8ndtd\" (UID: \"118dd8fa-f11f-4dda-96d7-f207e175b4da\") " pod="openshift-network-operator/mtu-prober-8ndtd" Mar 19 11:52:26.738242 master-0 kubenswrapper[3958]: I0319 11:52:26.738151 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-8ndtd" Mar 19 11:52:26.750722 master-0 kubenswrapper[3958]: W0319 11:52:26.750139 3958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod118dd8fa_f11f_4dda_96d7_f207e175b4da.slice/crio-abb59c84a4c72145d2743db8f3e69c4a48795ef4c7b107cbbfb92f3b5047887c WatchSource:0}: Error finding container abb59c84a4c72145d2743db8f3e69c4a48795ef4c7b107cbbfb92f3b5047887c: Status 404 returned error can't find the container with id abb59c84a4c72145d2743db8f3e69c4a48795ef4c7b107cbbfb92f3b5047887c Mar 19 11:52:27.744390 master-0 kubenswrapper[3958]: I0319 11:52:27.744353 3958 generic.go:334] "Generic (PLEG): container finished" podID="118dd8fa-f11f-4dda-96d7-f207e175b4da" containerID="5130296ba65834ed8eebf5136547f5b58340e0b2714dd3dba811f10381f648f5" exitCode=0 Mar 19 11:52:27.744390 master-0 kubenswrapper[3958]: I0319 11:52:27.744392 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-8ndtd" event={"ID":"118dd8fa-f11f-4dda-96d7-f207e175b4da","Type":"ContainerDied","Data":"5130296ba65834ed8eebf5136547f5b58340e0b2714dd3dba811f10381f648f5"} Mar 19 11:52:27.744667 master-0 kubenswrapper[3958]: I0319 11:52:27.744417 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-8ndtd" event={"ID":"118dd8fa-f11f-4dda-96d7-f207e175b4da","Type":"ContainerStarted","Data":"abb59c84a4c72145d2743db8f3e69c4a48795ef4c7b107cbbfb92f3b5047887c"} Mar 19 11:52:28.139613 master-0 kubenswrapper[3958]: I0319 11:52:28.139437 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 19 11:52:28.762149 master-0 kubenswrapper[3958]: I0319 11:52:28.762118 3958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-8ndtd" Mar 19 11:52:28.772093 master-0 kubenswrapper[3958]: I0319 11:52:28.772026 3958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-scheduler-master-0" podStartSLOduration=0.772002616 podStartE2EDuration="772.002616ms" podCreationTimestamp="2026-03-19 11:52:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 11:52:28.771595893 +0000 UTC m=+79.445317115" watchObservedRunningTime="2026-03-19 11:52:28.772002616 +0000 UTC m=+79.445723828" Mar 19 11:52:28.958731 master-0 kubenswrapper[3958]: I0319 11:52:28.958670 3958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdpvv\" (UniqueName: \"kubernetes.io/projected/118dd8fa-f11f-4dda-96d7-f207e175b4da-kube-api-access-sdpvv\") pod \"118dd8fa-f11f-4dda-96d7-f207e175b4da\" (UID: \"118dd8fa-f11f-4dda-96d7-f207e175b4da\") " Mar 19 11:52:28.962761 master-0 kubenswrapper[3958]: I0319 11:52:28.962702 3958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/118dd8fa-f11f-4dda-96d7-f207e175b4da-kube-api-access-sdpvv" (OuterVolumeSpecName: "kube-api-access-sdpvv") pod "118dd8fa-f11f-4dda-96d7-f207e175b4da" (UID: "118dd8fa-f11f-4dda-96d7-f207e175b4da"). InnerVolumeSpecName "kube-api-access-sdpvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 11:52:29.059898 master-0 kubenswrapper[3958]: I0319 11:52:29.059744 3958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdpvv\" (UniqueName: \"kubernetes.io/projected/118dd8fa-f11f-4dda-96d7-f207e175b4da-kube-api-access-sdpvv\") on node \"master-0\" DevicePath \"\"" Mar 19 11:52:29.749176 master-0 kubenswrapper[3958]: I0319 11:52:29.749130 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-8ndtd" event={"ID":"118dd8fa-f11f-4dda-96d7-f207e175b4da","Type":"ContainerDied","Data":"abb59c84a4c72145d2743db8f3e69c4a48795ef4c7b107cbbfb92f3b5047887c"} Mar 19 11:52:29.749176 master-0 kubenswrapper[3958]: I0319 11:52:29.749168 3958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-8ndtd" Mar 19 11:52:29.749665 master-0 kubenswrapper[3958]: I0319 11:52:29.749174 3958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abb59c84a4c72145d2743db8f3e69c4a48795ef4c7b107cbbfb92f3b5047887c" Mar 19 11:52:31.138271 master-0 kubenswrapper[3958]: I0319 11:52:31.138178 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 19 11:52:31.139422 master-0 kubenswrapper[3958]: W0319 11:52:31.138539 3958 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Mar 19 11:52:31.449289 master-0 kubenswrapper[3958]: I0319 11:52:31.449149 3958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-network-operator/mtu-prober-8ndtd"] Mar 19 11:52:31.451946 master-0 kubenswrapper[3958]: I0319 11:52:31.451894 3958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-network-operator/mtu-prober-8ndtd"] Mar 19 11:52:32.125542 master-0 kubenswrapper[3958]: I0319 11:52:32.125497 3958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="118dd8fa-f11f-4dda-96d7-f207e175b4da" path="/var/lib/kubelet/pods/118dd8fa-f11f-4dda-96d7-f207e175b4da/volumes" Mar 19 11:52:33.892832 master-0 kubenswrapper[3958]: I0319 11:52:33.892723 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v" Mar 19 11:52:33.893608 master-0 kubenswrapper[3958]: E0319 11:52:33.893019 3958 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 19 11:52:33.893608 master-0 kubenswrapper[3958]: E0319 11:52:33.893168 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert podName:85912908-c447-4868-871b-82c5eadbfdbe nodeName:}" failed. No retries permitted until 2026-03-19 11:52:49.893139922 +0000 UTC m=+100.566861104 (durationBeforeRetry 16s). 
Mar 19 11:52:36.315485 master-0 kubenswrapper[3958]: I0319 11:52:36.315323 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-w82cg"]
Mar 19 11:52:36.315485 master-0 kubenswrapper[3958]: E0319 11:52:36.315415 3958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="118dd8fa-f11f-4dda-96d7-f207e175b4da" containerName="prober"
Mar 19 11:52:36.315485 master-0 kubenswrapper[3958]: I0319 11:52:36.315431 3958 state_mem.go:107] "Deleted CPUSet assignment" podUID="118dd8fa-f11f-4dda-96d7-f207e175b4da" containerName="prober"
Mar 19 11:52:36.315485 master-0 kubenswrapper[3958]: I0319 11:52:36.315458 3958 memory_manager.go:354] "RemoveStaleState removing state" podUID="118dd8fa-f11f-4dda-96d7-f207e175b4da" containerName="prober"
Mar 19 11:52:36.316450 master-0 kubenswrapper[3958]: I0319 11:52:36.315658 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-w82cg"
Mar 19 11:52:36.317346 master-0 kubenswrapper[3958]: I0319 11:52:36.317279 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 19 11:52:36.318333 master-0 kubenswrapper[3958]: I0319 11:52:36.318301 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 19 11:52:36.318333 master-0 kubenswrapper[3958]: I0319 11:52:36.318310 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 19 11:52:36.318443 master-0 kubenswrapper[3958]: I0319 11:52:36.318397 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 19 11:52:36.347092 master-0 kubenswrapper[3958]: I0319 11:52:36.347024 3958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0-master-0" podStartSLOduration=5.347007511 podStartE2EDuration="5.347007511s" podCreationTimestamp="2026-03-19 11:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 11:52:36.346836656 +0000 UTC m=+87.020557828" watchObservedRunningTime="2026-03-19 11:52:36.347007511 +0000 UTC m=+87.020728683"
Mar 19 11:52:36.412169 master-0 kubenswrapper[3958]: I0319 11:52:36.412085 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-system-cni-dir\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 11:52:36.412169 master-0 kubenswrapper[3958]: I0319 11:52:36.412139 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-multus-cni-dir\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 11:52:36.513449 master-0 kubenswrapper[3958]: I0319 11:52:36.513364 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-system-cni-dir\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 11:52:36.513449 master-0 kubenswrapper[3958]: I0319 11:52:36.513438 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-os-release\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 11:52:36.514068 master-0 kubenswrapper[3958]: I0319 11:52:36.513481 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-run-k8s-cni-cncf-io\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 11:52:36.514068 master-0 kubenswrapper[3958]: I0319 11:52:36.513531 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4hsp\" (UniqueName: \"kubernetes.io/projected/fe245927-c937-4ec7-ab83-4900bade72cf-kube-api-access-s4hsp\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 11:52:36.514068 master-0 kubenswrapper[3958]: I0319 11:52:36.513586 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-var-lib-kubelet\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 11:52:36.514068 master-0 kubenswrapper[3958]: I0319 11:52:36.513596 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-system-cni-dir\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 11:52:36.514068 master-0 kubenswrapper[3958]: I0319 11:52:36.513617 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-run-multus-certs\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 11:52:36.514068 master-0 kubenswrapper[3958]: I0319 11:52:36.513685 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-multus-cni-dir\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 11:52:36.514068 master-0 kubenswrapper[3958]: I0319 11:52:36.513791 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-cnibin\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 11:52:36.514068 master-0 kubenswrapper[3958]: I0319 11:52:36.513894 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-multus-cni-dir\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 11:52:36.514068 master-0 kubenswrapper[3958]: I0319 11:52:36.513897 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-multus-socket-dir-parent\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 11:52:36.514068 master-0 kubenswrapper[3958]: I0319 11:52:36.513961 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-hostroot\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 11:52:36.514068 master-0 kubenswrapper[3958]: I0319 11:52:36.514051 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-var-lib-cni-bin\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 11:52:36.515026 master-0 kubenswrapper[3958]: I0319 11:52:36.514158 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fe245927-c937-4ec7-ab83-4900bade72cf-cni-binary-copy\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 11:52:36.515026 master-0 kubenswrapper[3958]: I0319 11:52:36.514212 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fe245927-c937-4ec7-ab83-4900bade72cf-multus-daemon-config\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 11:52:36.515026 master-0 kubenswrapper[3958]: I0319 11:52:36.514258 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-etc-kubernetes\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 11:52:36.515026 master-0 kubenswrapper[3958]: I0319 11:52:36.514498 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-var-lib-cni-multus\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 11:52:36.515026 master-0 kubenswrapper[3958]: I0319 11:52:36.514588 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-multus-conf-dir\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 11:52:36.515026 master-0 kubenswrapper[3958]: I0319 11:52:36.515023 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-run-netns\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 11:52:36.515534 master-0 kubenswrapper[3958]: I0319 11:52:36.515236 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-2z4h8"]
Mar 19 11:52:36.516250 master-0 kubenswrapper[3958]: I0319 11:52:36.516110 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-2z4h8"
Mar 19 11:52:36.518536 master-0 kubenswrapper[3958]: I0319 11:52:36.518465 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config"
Mar 19 11:52:36.520055 master-0 kubenswrapper[3958]: I0319 11:52:36.520010 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 19 11:52:36.615891 master-0 kubenswrapper[3958]: I0319 11:52:36.615703 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7044a7b3-4fac-40af-a31c-054a1a1db26b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8"
Mar 19 11:52:36.615891 master-0 kubenswrapper[3958]: I0319 11:52:36.615827 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-var-lib-cni-bin\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 11:52:36.616150 master-0 kubenswrapper[3958]: I0319 11:52:36.615902 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-cnibin\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8"
Mar 19 11:52:36.616150 master-0 kubenswrapper[3958]: I0319 11:52:36.615951 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fe245927-c937-4ec7-ab83-4900bade72cf-cni-binary-copy\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 11:52:36.616150 master-0 kubenswrapper[3958]: I0319 11:52:36.616028 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-var-lib-cni-bin\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 11:52:36.616150 master-0 kubenswrapper[3958]: I0319 11:52:36.616102 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fe245927-c937-4ec7-ab83-4900bade72cf-multus-daemon-config\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 11:52:36.616495 master-0 kubenswrapper[3958]: I0319
11:52:36.616157 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-etc-kubernetes\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:52:36.616495 master-0 kubenswrapper[3958]: I0319 11:52:36.616244 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-etc-kubernetes\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:52:36.616495 master-0 kubenswrapper[3958]: I0319 11:52:36.616306 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-os-release\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:52:36.616495 master-0 kubenswrapper[3958]: I0319 11:52:36.616361 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-multus-conf-dir\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:52:36.616495 master-0 kubenswrapper[3958]: I0319 11:52:36.616411 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shfs6\" (UniqueName: \"kubernetes.io/projected/7044a7b3-4fac-40af-a31c-054a1a1db26b-kube-api-access-shfs6\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:52:36.616495 master-0 kubenswrapper[3958]: I0319 11:52:36.616469 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-run-netns\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:52:36.617117 master-0 kubenswrapper[3958]: I0319 11:52:36.616515 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-var-lib-cni-multus\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:52:36.617117 master-0 kubenswrapper[3958]: I0319 11:52:36.616527 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-multus-conf-dir\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:52:36.617117 master-0 kubenswrapper[3958]: I0319 11:52:36.616594 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-run-netns\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:52:36.617117 master-0 kubenswrapper[3958]: I0319 11:52:36.616605 3958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-var-lib-cni-multus\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:52:36.617117 master-0 kubenswrapper[3958]: I0319 11:52:36.616644 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-os-release\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:52:36.617117 master-0 kubenswrapper[3958]: I0319 11:52:36.616699 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7044a7b3-4fac-40af-a31c-054a1a1db26b-cni-binary-copy\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:52:36.617117 master-0 kubenswrapper[3958]: I0319 11:52:36.616849 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/7044a7b3-4fac-40af-a31c-054a1a1db26b-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:52:36.617117 master-0 kubenswrapper[3958]: I0319 11:52:36.616893 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-run-k8s-cni-cncf-io\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:52:36.617117 master-0 kubenswrapper[3958]: I0319 11:52:36.616915 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-run-k8s-cni-cncf-io\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:52:36.617117 master-0 kubenswrapper[3958]: I0319 11:52:36.616952 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4hsp\" (UniqueName: \"kubernetes.io/projected/fe245927-c937-4ec7-ab83-4900bade72cf-kube-api-access-s4hsp\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:52:36.617117 master-0 kubenswrapper[3958]: I0319 11:52:36.616945 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-os-release\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:52:36.617117 master-0 kubenswrapper[3958]: I0319 11:52:36.617056 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-system-cni-dir\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " 
pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:52:36.618013 master-0 kubenswrapper[3958]: I0319 11:52:36.617125 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:52:36.618013 master-0 kubenswrapper[3958]: I0319 11:52:36.617262 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-run-multus-certs\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:52:36.618013 master-0 kubenswrapper[3958]: I0319 11:52:36.617346 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-run-multus-certs\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:52:36.618013 master-0 kubenswrapper[3958]: I0319 11:52:36.617433 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-var-lib-kubelet\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:52:36.618013 master-0 kubenswrapper[3958]: I0319 11:52:36.617466 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-cnibin\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:52:36.618013 master-0 kubenswrapper[3958]: I0319 11:52:36.617489 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-multus-socket-dir-parent\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:52:36.618013 master-0 kubenswrapper[3958]: I0319 11:52:36.617514 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-hostroot\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:52:36.618013 master-0 kubenswrapper[3958]: I0319 11:52:36.617523 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fe245927-c937-4ec7-ab83-4900bade72cf-cni-binary-copy\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:52:36.618013 master-0 kubenswrapper[3958]: I0319 11:52:36.617612 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fe245927-c937-4ec7-ab83-4900bade72cf-multus-daemon-config\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:52:36.618013 
master-0 kubenswrapper[3958]: I0319 11:52:36.617638 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-multus-socket-dir-parent\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:52:36.618013 master-0 kubenswrapper[3958]: I0319 11:52:36.617697 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-cnibin\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:52:36.618013 master-0 kubenswrapper[3958]: I0319 11:52:36.617729 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-hostroot\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:52:36.618013 master-0 kubenswrapper[3958]: I0319 11:52:36.617705 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-var-lib-kubelet\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:52:36.640012 master-0 kubenswrapper[3958]: I0319 11:52:36.639919 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4hsp\" (UniqueName: \"kubernetes.io/projected/fe245927-c937-4ec7-ab83-4900bade72cf-kube-api-access-s4hsp\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:52:36.718348 master-0 kubenswrapper[3958]: I0319 11:52:36.718242 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-system-cni-dir\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:52:36.718348 master-0 kubenswrapper[3958]: I0319 11:52:36.718308 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:52:36.718790 master-0 kubenswrapper[3958]: I0319 11:52:36.718369 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-cnibin\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:52:36.718790 master-0 kubenswrapper[3958]: I0319 11:52:36.718403 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7044a7b3-4fac-40af-a31c-054a1a1db26b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:52:36.718790 master-0 
kubenswrapper[3958]: I0319 11:52:36.718440 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-os-release\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:52:36.718790 master-0 kubenswrapper[3958]: I0319 11:52:36.718488 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-system-cni-dir\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:52:36.718790 master-0 kubenswrapper[3958]: I0319 11:52:36.718531 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-cnibin\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:52:36.718790 master-0 kubenswrapper[3958]: I0319 11:52:36.718687 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shfs6\" (UniqueName: \"kubernetes.io/projected/7044a7b3-4fac-40af-a31c-054a1a1db26b-kube-api-access-shfs6\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:52:36.718790 master-0 kubenswrapper[3958]: I0319 11:52:36.718762 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7044a7b3-4fac-40af-a31c-054a1a1db26b-cni-binary-copy\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:52:36.719448 master-0 kubenswrapper[3958]: I0319 11:52:36.718858 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/7044a7b3-4fac-40af-a31c-054a1a1db26b-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:52:36.719448 master-0 kubenswrapper[3958]: I0319 11:52:36.718870 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:52:36.719448 master-0 kubenswrapper[3958]: I0319 11:52:36.719400 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-os-release\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:52:36.719867 master-0 kubenswrapper[3958]: I0319 11:52:36.719774 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/7044a7b3-4fac-40af-a31c-054a1a1db26b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:52:36.720498 master-0 kubenswrapper[3958]: I0319 11:52:36.720433 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7044a7b3-4fac-40af-a31c-054a1a1db26b-cni-binary-copy\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:52:36.720627 master-0 kubenswrapper[3958]: I0319 11:52:36.720579 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/7044a7b3-4fac-40af-a31c-054a1a1db26b-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:52:36.747699 master-0 kubenswrapper[3958]: I0319 11:52:36.747561 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shfs6\" (UniqueName: \"kubernetes.io/projected/7044a7b3-4fac-40af-a31c-054a1a1db26b-kube-api-access-shfs6\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:52:36.831083 master-0 kubenswrapper[3958]: I0319 11:52:36.831018 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:52:36.932063 master-0 kubenswrapper[3958]: I0319 11:52:36.931923 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-w82cg" Mar 19 11:52:36.945286 master-0 kubenswrapper[3958]: W0319 11:52:36.945246 3958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe245927_c937_4ec7_ab83_4900bade72cf.slice/crio-b4bf04cf4a874cdad02ad51f153ce323d3cc5a93749aa40aeabd5ac11d70f65e WatchSource:0}: Error finding container b4bf04cf4a874cdad02ad51f153ce323d3cc5a93749aa40aeabd5ac11d70f65e: Status 404 returned error can't find the container with id b4bf04cf4a874cdad02ad51f153ce323d3cc5a93749aa40aeabd5ac11d70f65e Mar 19 11:52:37.296478 master-0 kubenswrapper[3958]: I0319 11:52:37.296432 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-6t6sn"] Mar 19 11:52:37.296932 master-0 kubenswrapper[3958]: I0319 11:52:37.296906 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:52:37.297036 master-0 kubenswrapper[3958]: E0319 11:52:37.297003 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6t6sn" podUID="398bcaca-1bea-4633-a78f-717e3d015ddd" Mar 19 11:52:37.323366 master-0 kubenswrapper[3958]: I0319 11:52:37.323230 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs\") pod \"network-metrics-daemon-6t6sn\" (UID: \"398bcaca-1bea-4633-a78f-717e3d015ddd\") " pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:52:37.323845 master-0 kubenswrapper[3958]: I0319 11:52:37.323385 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhqhb\" (UniqueName: \"kubernetes.io/projected/398bcaca-1bea-4633-a78f-717e3d015ddd-kube-api-access-fhqhb\") pod \"network-metrics-daemon-6t6sn\" (UID: \"398bcaca-1bea-4633-a78f-717e3d015ddd\") " pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:52:37.424846 master-0 kubenswrapper[3958]: I0319 11:52:37.424718 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhqhb\" (UniqueName: \"kubernetes.io/projected/398bcaca-1bea-4633-a78f-717e3d015ddd-kube-api-access-fhqhb\") pod \"network-metrics-daemon-6t6sn\" (UID: \"398bcaca-1bea-4633-a78f-717e3d015ddd\") " pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:52:37.425234 master-0 kubenswrapper[3958]: I0319 11:52:37.424975 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs\") pod \"network-metrics-daemon-6t6sn\" (UID: \"398bcaca-1bea-4633-a78f-717e3d015ddd\") " pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:52:37.425310 master-0 kubenswrapper[3958]: E0319 11:52:37.425263 3958 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 11:52:37.425445 master-0 kubenswrapper[3958]: E0319 11:52:37.425412 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs podName:398bcaca-1bea-4633-a78f-717e3d015ddd nodeName:}" failed. No retries permitted until 2026-03-19 11:52:37.925376511 +0000 UTC m=+88.599097723 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs") pod "network-metrics-daemon-6t6sn" (UID: "398bcaca-1bea-4633-a78f-717e3d015ddd") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 11:52:37.454547 master-0 kubenswrapper[3958]: I0319 11:52:37.454433 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhqhb\" (UniqueName: \"kubernetes.io/projected/398bcaca-1bea-4633-a78f-717e3d015ddd-kube-api-access-fhqhb\") pod \"network-metrics-daemon-6t6sn\" (UID: \"398bcaca-1bea-4633-a78f-717e3d015ddd\") " pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:52:37.772015 master-0 kubenswrapper[3958]: I0319 11:52:37.771942 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-w82cg" event={"ID":"fe245927-c937-4ec7-ab83-4900bade72cf","Type":"ContainerStarted","Data":"b4bf04cf4a874cdad02ad51f153ce323d3cc5a93749aa40aeabd5ac11d70f65e"} Mar 19 11:52:37.772770 master-0 kubenswrapper[3958]: I0319 11:52:37.772735 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2z4h8" event={"ID":"7044a7b3-4fac-40af-a31c-054a1a1db26b","Type":"ContainerStarted","Data":"b9477b33d342b45771f3690cbbe221e1438e0d225ffd950edeb419c6de979401"} Mar 19 11:52:37.929056 master-0 kubenswrapper[3958]: I0319 11:52:37.928966 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs\") pod \"network-metrics-daemon-6t6sn\" (UID: \"398bcaca-1bea-4633-a78f-717e3d015ddd\") " pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:52:37.929273 master-0 kubenswrapper[3958]: E0319 11:52:37.929168 3958 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 11:52:37.929273 master-0 kubenswrapper[3958]: E0319 11:52:37.929255 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs podName:398bcaca-1bea-4633-a78f-717e3d015ddd nodeName:}" failed. No retries permitted until 2026-03-19 11:52:38.929232988 +0000 UTC m=+89.602954260 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs") pod "network-metrics-daemon-6t6sn" (UID: "398bcaca-1bea-4633-a78f-717e3d015ddd") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 11:52:38.134134 master-0 kubenswrapper[3958]: I0319 11:52:38.133941 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 19 11:52:38.938419 master-0 kubenswrapper[3958]: I0319 11:52:38.938335 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs\") pod \"network-metrics-daemon-6t6sn\" (UID: \"398bcaca-1bea-4633-a78f-717e3d015ddd\") " pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:52:38.939320 master-0 kubenswrapper[3958]: E0319 11:52:38.938525 3958 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 11:52:38.939320 master-0 kubenswrapper[3958]: E0319 11:52:38.938615 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs podName:398bcaca-1bea-4633-a78f-717e3d015ddd nodeName:}" failed. No retries permitted until 2026-03-19 11:52:40.938584092 +0000 UTC m=+91.612305274 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs") pod "network-metrics-daemon-6t6sn" (UID: "398bcaca-1bea-4633-a78f-717e3d015ddd") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 11:52:39.122489 master-0 kubenswrapper[3958]: I0319 11:52:39.121410 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:52:39.122489 master-0 kubenswrapper[3958]: E0319 11:52:39.121659 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6t6sn" podUID="398bcaca-1bea-4633-a78f-717e3d015ddd" Mar 19 11:52:39.134096 master-0 kubenswrapper[3958]: I0319 11:52:39.134007 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 19 11:52:40.140263 master-0 kubenswrapper[3958]: I0319 11:52:40.136558 3958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-controller-manager-master-0" podStartSLOduration=1.136538896 podStartE2EDuration="1.136538896s" podCreationTimestamp="2026-03-19 11:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 11:52:40.136107642 +0000 UTC m=+90.809828824" watchObservedRunningTime="2026-03-19 11:52:40.136538896 +0000 UTC m=+90.810260078" Mar 19 11:52:40.152354 master-0 kubenswrapper[3958]: I0319 11:52:40.152270 3958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podStartSLOduration=2.152239859 podStartE2EDuration="2.152239859s" podCreationTimestamp="2026-03-19 11:52:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 11:52:40.152198337 +0000 UTC m=+90.825919519" watchObservedRunningTime="2026-03-19 11:52:40.152239859 +0000 UTC m=+90.825961041" Mar 19 11:52:40.953609 master-0 kubenswrapper[3958]: I0319 11:52:40.953546 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs\") pod \"network-metrics-daemon-6t6sn\" (UID: \"398bcaca-1bea-4633-a78f-717e3d015ddd\") " pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:52:40.954055 master-0 kubenswrapper[3958]: E0319 11:52:40.953819 3958 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 11:52:40.954055 master-0 kubenswrapper[3958]: E0319 11:52:40.953980 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs podName:398bcaca-1bea-4633-a78f-717e3d015ddd nodeName:}" failed. No retries permitted until 2026-03-19 11:52:44.953948685 +0000 UTC m=+95.627669867 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs") pod "network-metrics-daemon-6t6sn" (UID: "398bcaca-1bea-4633-a78f-717e3d015ddd") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 11:52:41.122482 master-0 kubenswrapper[3958]: I0319 11:52:41.121707 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:52:41.122482 master-0 kubenswrapper[3958]: E0319 11:52:41.121966 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6t6sn" podUID="398bcaca-1bea-4633-a78f-717e3d015ddd" Mar 19 11:52:41.783506 master-0 kubenswrapper[3958]: I0319 11:52:41.783448 3958 generic.go:334] "Generic (PLEG): container finished" podID="7044a7b3-4fac-40af-a31c-054a1a1db26b" containerID="2993484a619b94d2ea27105e0262a5ba0f7bb5c64e52ff512e989510a1380a8f" exitCode=0 Mar 19 11:52:41.783506 master-0 kubenswrapper[3958]: I0319 11:52:41.783510 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2z4h8" event={"ID":"7044a7b3-4fac-40af-a31c-054a1a1db26b","Type":"ContainerDied","Data":"2993484a619b94d2ea27105e0262a5ba0f7bb5c64e52ff512e989510a1380a8f"} Mar 19 11:52:43.121674 master-0 kubenswrapper[3958]: I0319 11:52:43.121092 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:52:43.121674 master-0 kubenswrapper[3958]: E0319 11:52:43.121253 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6t6sn" podUID="398bcaca-1bea-4633-a78f-717e3d015ddd" Mar 19 11:52:44.987128 master-0 kubenswrapper[3958]: I0319 11:52:44.987042 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs\") pod \"network-metrics-daemon-6t6sn\" (UID: \"398bcaca-1bea-4633-a78f-717e3d015ddd\") " pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:52:44.987814 master-0 kubenswrapper[3958]: E0319 11:52:44.987192 3958 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 11:52:44.987814 master-0 kubenswrapper[3958]: E0319 11:52:44.987251 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs podName:398bcaca-1bea-4633-a78f-717e3d015ddd nodeName:}" failed. No retries permitted until 2026-03-19 11:52:52.987235881 +0000 UTC m=+103.660957063 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs") pod "network-metrics-daemon-6t6sn" (UID: "398bcaca-1bea-4633-a78f-717e3d015ddd") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 11:52:45.121243 master-0 kubenswrapper[3958]: I0319 11:52:45.121192 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:52:45.121461 master-0 kubenswrapper[3958]: E0319 11:52:45.121301 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6t6sn" podUID="398bcaca-1bea-4633-a78f-717e3d015ddd" Mar 19 11:52:47.121504 master-0 kubenswrapper[3958]: I0319 11:52:47.121420 3958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:52:47.122294 master-0 kubenswrapper[3958]: E0319 11:52:47.121625 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6t6sn" podUID="398bcaca-1bea-4633-a78f-717e3d015ddd" Mar 19 11:52:48.714027 master-0 kubenswrapper[3958]: I0319 11:52:48.712791 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t"] Mar 19 11:52:48.714027 master-0 kubenswrapper[3958]: I0319 11:52:48.713298 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t" Mar 19 11:52:48.721036 master-0 kubenswrapper[3958]: I0319 11:52:48.720291 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 19 11:52:48.721036 master-0 kubenswrapper[3958]: I0319 11:52:48.720563 3958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 19 11:52:48.721036 master-0 kubenswrapper[3958]: I0319 11:52:48.720694 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 19 11:52:48.721036 master-0 kubenswrapper[3958]: I0319 11:52:48.720847 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 19 11:52:48.721036 master-0 kubenswrapper[3958]: I0319 11:52:48.720977 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 19 11:52:48.822422 master-0 kubenswrapper[3958]: I0319 11:52:48.822377 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bf226d89-450d-4876-a113-345632b94ee9-env-overrides\") pod \"ovnkube-control-plane-57f769d897-f6m2t\" (UID: \"bf226d89-450d-4876-a113-345632b94ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t" Mar 19 11:52:48.822616 master-0 kubenswrapper[3958]: I0319 11:52:48.822438 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bf226d89-450d-4876-a113-345632b94ee9-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-f6m2t\" (UID: \"bf226d89-450d-4876-a113-345632b94ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t" Mar 19 11:52:48.822616 master-0 kubenswrapper[3958]: I0319 11:52:48.822475 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcxqj\" (UniqueName: \"kubernetes.io/projected/bf226d89-450d-4876-a113-345632b94ee9-kube-api-access-wcxqj\") pod \"ovnkube-control-plane-57f769d897-f6m2t\" (UID: \"bf226d89-450d-4876-a113-345632b94ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t" Mar 19 11:52:48.822616 master-0 kubenswrapper[3958]: I0319 11:52:48.822524 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/bf226d89-450d-4876-a113-345632b94ee9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-f6m2t\" (UID: \"bf226d89-450d-4876-a113-345632b94ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t" Mar 19 11:52:48.924125 master-0 kubenswrapper[3958]: I0319 11:52:48.923850 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bf226d89-450d-4876-a113-345632b94ee9-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-f6m2t\" (UID: \"bf226d89-450d-4876-a113-345632b94ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t" Mar 19 11:52:48.924125 master-0 kubenswrapper[3958]: I0319 11:52:48.924128 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcxqj\" (UniqueName: \"kubernetes.io/projected/bf226d89-450d-4876-a113-345632b94ee9-kube-api-access-wcxqj\") pod \"ovnkube-control-plane-57f769d897-f6m2t\" (UID: \"bf226d89-450d-4876-a113-345632b94ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t" Mar 19 11:52:48.924367 master-0 kubenswrapper[3958]: I0319 11:52:48.924176 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bf226d89-450d-4876-a113-345632b94ee9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-f6m2t\" (UID: \"bf226d89-450d-4876-a113-345632b94ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t" Mar 19 11:52:48.924367 master-0 kubenswrapper[3958]: I0319 11:52:48.924217 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bf226d89-450d-4876-a113-345632b94ee9-env-overrides\") pod \"ovnkube-control-plane-57f769d897-f6m2t\" (UID: \"bf226d89-450d-4876-a113-345632b94ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t" Mar 19 11:52:48.925478 master-0 kubenswrapper[3958]: I0319 11:52:48.925431 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bf226d89-450d-4876-a113-345632b94ee9-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-f6m2t\" (UID: \"bf226d89-450d-4876-a113-345632b94ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t" Mar 19 11:52:48.925702 master-0 kubenswrapper[3958]: I0319 11:52:48.925663 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bf226d89-450d-4876-a113-345632b94ee9-env-overrides\") pod \"ovnkube-control-plane-57f769d897-f6m2t\" (UID: \"bf226d89-450d-4876-a113-345632b94ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t" Mar 19 11:52:48.930406 master-0 kubenswrapper[3958]: I0319 11:52:48.930356 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bf226d89-450d-4876-a113-345632b94ee9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-f6m2t\" (UID: \"bf226d89-450d-4876-a113-345632b94ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t" Mar 19 11:52:48.955293 master-0 kubenswrapper[3958]: I0319 11:52:48.953883 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcxqj\" (UniqueName: 
\"kubernetes.io/projected/bf226d89-450d-4876-a113-345632b94ee9-kube-api-access-wcxqj\") pod \"ovnkube-control-plane-57f769d897-f6m2t\" (UID: \"bf226d89-450d-4876-a113-345632b94ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t" Mar 19 11:52:48.960039 master-0 kubenswrapper[3958]: I0319 11:52:48.959976 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dfq9s"] Mar 19 11:52:48.960885 master-0 kubenswrapper[3958]: I0319 11:52:48.960833 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:48.963047 master-0 kubenswrapper[3958]: I0319 11:52:48.962998 3958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 19 11:52:48.965584 master-0 kubenswrapper[3958]: I0319 11:52:48.965508 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 19 11:52:49.024755 master-0 kubenswrapper[3958]: I0319 11:52:49.024666 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e0fd5e09-140d-49a5-b542-d2584fdffb43-ovnkube-config\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.024755 master-0 kubenswrapper[3958]: I0319 11:52:49.024767 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-run-netns\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.025003 master-0 kubenswrapper[3958]: I0319 11:52:49.024786 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-systemd-units\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.025003 master-0 kubenswrapper[3958]: I0319 11:52:49.024822 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.025003 master-0 kubenswrapper[3958]: I0319 11:52:49.024847 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhzlh\" (UniqueName: \"kubernetes.io/projected/e0fd5e09-140d-49a5-b542-d2584fdffb43-kube-api-access-qhzlh\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.025003 master-0 kubenswrapper[3958]: I0319 11:52:49.024872 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-kubelet\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.025003 master-0 kubenswrapper[3958]: I0319 11:52:49.024888 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-run-openvswitch\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.025003 master-0 kubenswrapper[3958]: I0319 11:52:49.024971 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-run-ovn-kubernetes\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.025180 master-0 kubenswrapper[3958]: I0319 11:52:49.025015 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e0fd5e09-140d-49a5-b542-d2584fdffb43-env-overrides\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.025180 master-0 kubenswrapper[3958]: I0319 11:52:49.025050 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-run-systemd\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.025180 master-0 kubenswrapper[3958]: I0319 11:52:49.025074 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-cni-netd\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.025180 master-0 kubenswrapper[3958]: I0319 11:52:49.025093 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e0fd5e09-140d-49a5-b542-d2584fdffb43-ovn-node-metrics-cert\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.025180 master-0 kubenswrapper[3958]: I0319 11:52:49.025119 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-etc-openvswitch\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.025180 master-0 kubenswrapper[3958]: I0319 11:52:49.025136 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-cni-bin\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.025180 master-0 kubenswrapper[3958]: I0319 11:52:49.025155 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-log-socket\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.025180 master-0 kubenswrapper[3958]: I0319 11:52:49.025174 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-var-lib-openvswitch\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.025380 master-0 kubenswrapper[3958]: I0319 11:52:49.025198 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e0fd5e09-140d-49a5-b542-d2584fdffb43-ovnkube-script-lib\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.025380 master-0 kubenswrapper[3958]: I0319 11:52:49.025311 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-slash\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.025380 master-0 kubenswrapper[3958]: I0319 11:52:49.025347 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-run-ovn\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.025380 master-0 kubenswrapper[3958]: I0319 11:52:49.025365 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-node-log\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.044729 master-0 kubenswrapper[3958]: I0319 11:52:49.043962 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t" Mar 19 11:52:49.121741 master-0 kubenswrapper[3958]: I0319 11:52:49.121689 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:52:49.121933 master-0 kubenswrapper[3958]: E0319 11:52:49.121860 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6t6sn" podUID="398bcaca-1bea-4633-a78f-717e3d015ddd" Mar 19 11:52:49.126211 master-0 kubenswrapper[3958]: I0319 11:52:49.125920 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-slash\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.126211 master-0 kubenswrapper[3958]: I0319 11:52:49.125950 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-run-ovn\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.126211 master-0 kubenswrapper[3958]: I0319 11:52:49.125965 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-node-log\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.126211 master-0 kubenswrapper[3958]: I0319 11:52:49.126178 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-slash\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.126907 master-0 kubenswrapper[3958]: I0319 11:52:49.126880 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-run-netns\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.126961 master-0 kubenswrapper[3958]: I0319 11:52:49.126918 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-run-ovn\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.126961 master-0 kubenswrapper[3958]: I0319 11:52:49.126929 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e0fd5e09-140d-49a5-b542-d2584fdffb43-ovnkube-config\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.126961 master-0 kubenswrapper[3958]: I0319 11:52:49.126887 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-node-log\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.127042 master-0 kubenswrapper[3958]: I0319 11:52:49.126962 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-kubelet\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.127042 master-0 kubenswrapper[3958]: I0319 11:52:49.126990 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-systemd-units\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.127042 master-0 kubenswrapper[3958]: I0319 11:52:49.127015 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-kubelet\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.127042 master-0 kubenswrapper[3958]: I0319 11:52:49.127020 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.127156 master-0 kubenswrapper[3958]: I0319 11:52:49.127057 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.127156 master-0 kubenswrapper[3958]: I0319 11:52:49.127061 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhzlh\" (UniqueName: \"kubernetes.io/projected/e0fd5e09-140d-49a5-b542-d2584fdffb43-kube-api-access-qhzlh\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.127156 master-0 kubenswrapper[3958]: I0319 11:52:49.127109 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-run-openvswitch\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.127291 master-0 kubenswrapper[3958]: I0319 11:52:49.127148 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-run-ovn-kubernetes\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.127291 master-0 kubenswrapper[3958]: I0319 11:52:49.127192 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e0fd5e09-140d-49a5-b542-d2584fdffb43-env-overrides\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.127491 master-0 kubenswrapper[3958]: I0319 11:52:49.127435 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-systemd-units\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.127491 master-0 kubenswrapper[3958]: I0319 11:52:49.126986 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-run-netns\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.127491 master-0 kubenswrapper[3958]: I0319 11:52:49.127475 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-run-openvswitch\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.127638 master-0 kubenswrapper[3958]: I0319 11:52:49.127503 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-run-ovn-kubernetes\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.127638 master-0 kubenswrapper[3958]: I0319 11:52:49.127632 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e0fd5e09-140d-49a5-b542-d2584fdffb43-ovnkube-config\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.128155 master-0 kubenswrapper[3958]: I0319 11:52:49.128112 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-run-systemd\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.128225 master-0 kubenswrapper[3958]: I0319 11:52:49.128153 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e0fd5e09-140d-49a5-b542-d2584fdffb43-env-overrides\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.128225 master-0 kubenswrapper[3958]: I0319 11:52:49.128171 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-cni-netd\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.128225 master-0 kubenswrapper[3958]: I0319 11:52:49.128180 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-run-systemd\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.128225 master-0 kubenswrapper[3958]: I0319 11:52:49.128198 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/e0fd5e09-140d-49a5-b542-d2584fdffb43-ovn-node-metrics-cert\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.128728 master-0 kubenswrapper[3958]: I0319 11:52:49.128229 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-etc-openvswitch\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.128728 master-0 kubenswrapper[3958]: I0319 11:52:49.128247 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-cni-netd\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.128728 master-0 kubenswrapper[3958]: I0319 11:52:49.128261 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-cni-bin\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.128728 master-0 kubenswrapper[3958]: I0319 11:52:49.128399 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-etc-openvswitch\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.128728 master-0 kubenswrapper[3958]: I0319 11:52:49.128515 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-cni-bin\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.128728 master-0 kubenswrapper[3958]: I0319 11:52:49.128566 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-log-socket\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.131762 master-0 kubenswrapper[3958]: I0319 11:52:49.130292 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-log-socket\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.131762 master-0 kubenswrapper[3958]: I0319 11:52:49.130360 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-var-lib-openvswitch\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.131762 master-0 kubenswrapper[3958]: I0319 11:52:49.130404 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/e0fd5e09-140d-49a5-b542-d2584fdffb43-ovnkube-script-lib\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.131762 master-0 kubenswrapper[3958]: I0319 11:52:49.130454 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-var-lib-openvswitch\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.131762 master-0 kubenswrapper[3958]: I0319 11:52:49.131072 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e0fd5e09-140d-49a5-b542-d2584fdffb43-ovn-node-metrics-cert\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.131762 master-0 kubenswrapper[3958]: I0319 11:52:49.131413 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e0fd5e09-140d-49a5-b542-d2584fdffb43-ovnkube-script-lib\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.142928 master-0 kubenswrapper[3958]: I0319 11:52:49.142356 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhzlh\" (UniqueName: \"kubernetes.io/projected/e0fd5e09-140d-49a5-b542-d2584fdffb43-kube-api-access-qhzlh\") pod \"ovnkube-node-dfq9s\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.287252 master-0 kubenswrapper[3958]: I0319 11:52:49.287140 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:52:49.935976 master-0 kubenswrapper[3958]: I0319 11:52:49.935907 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v" Mar 19 11:52:49.936683 master-0 kubenswrapper[3958]: E0319 11:52:49.936111 3958 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 19 11:52:49.936683 master-0 kubenswrapper[3958]: E0319 11:52:49.936216 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert podName:85912908-c447-4868-871b-82c5eadbfdbe nodeName:}" failed. No retries permitted until 2026-03-19 11:53:21.936192431 +0000 UTC m=+132.609913613 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert") pod "cluster-version-operator-56d8475767-gjj5v" (UID: "85912908-c447-4868-871b-82c5eadbfdbe") : secret "cluster-version-operator-serving-cert" not found Mar 19 11:52:50.417305 master-0 kubenswrapper[3958]: W0319 11:52:50.417251 3958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0fd5e09_140d_49a5_b542_d2584fdffb43.slice/crio-c27e98a561ffe786fc1b95b71c3a149aa1f22e3037947fc028437c10cba9712b WatchSource:0}: Error finding container c27e98a561ffe786fc1b95b71c3a149aa1f22e3037947fc028437c10cba9712b: Status 404 returned error can't find the container with id c27e98a561ffe786fc1b95b71c3a149aa1f22e3037947fc028437c10cba9712b Mar 19 11:52:50.807464 master-0 kubenswrapper[3958]: I0319 11:52:50.806944 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-w82cg" event={"ID":"fe245927-c937-4ec7-ab83-4900bade72cf","Type":"ContainerStarted","Data":"53d1201850cc444d73a30dee9994be21bf710572e152a7e5256101d9de1be916"} Mar 19 11:52:50.827675 master-0 kubenswrapper[3958]: I0319 11:52:50.827553 3958 generic.go:334] "Generic (PLEG): container finished" podID="7044a7b3-4fac-40af-a31c-054a1a1db26b" containerID="a7b363361678d9e81d9d8ef32a8db06e2b9f3625d0d6871f670414917c137669" exitCode=0 Mar 19 11:52:50.828043 master-0 kubenswrapper[3958]: I0319 11:52:50.827705 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2z4h8" event={"ID":"7044a7b3-4fac-40af-a31c-054a1a1db26b","Type":"ContainerDied","Data":"a7b363361678d9e81d9d8ef32a8db06e2b9f3625d0d6871f670414917c137669"} Mar 19 11:52:50.830440 master-0 kubenswrapper[3958]: I0319 11:52:50.830367 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t" event={"ID":"bf226d89-450d-4876-a113-345632b94ee9","Type":"ContainerStarted","Data":"46c2c45eb28c61fc2f5982bf62624e1448bd12d9e3129b73c70d66cabc189434"} Mar 19 11:52:50.830541 master-0 kubenswrapper[3958]: I0319 11:52:50.830463 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t" event={"ID":"bf226d89-450d-4876-a113-345632b94ee9","Type":"ContainerStarted","Data":"e6ef8104a726a85f4fa80186a64ea3c00a2cbb1be2c668fb9e94709c10d980c0"} Mar 19 11:52:50.833426 master-0 kubenswrapper[3958]: I0319 11:52:50.833348 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" event={"ID":"e0fd5e09-140d-49a5-b542-d2584fdffb43","Type":"ContainerStarted","Data":"c27e98a561ffe786fc1b95b71c3a149aa1f22e3037947fc028437c10cba9712b"} Mar 19 11:52:50.859079 master-0 kubenswrapper[3958]: I0319 11:52:50.858978 3958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-w82cg" podStartSLOduration=1.317458554 podStartE2EDuration="14.858937556s" podCreationTimestamp="2026-03-19 11:52:36 +0000 UTC" firstStartedPulling="2026-03-19 11:52:36.946897522 +0000 UTC m=+87.620618704" lastFinishedPulling="2026-03-19 11:52:50.488376514 +0000 UTC m=+101.162097706" observedRunningTime="2026-03-19 11:52:50.832377112 +0000 UTC m=+101.506098314" watchObservedRunningTime="2026-03-19 11:52:50.858937556 +0000 UTC m=+101.532658778" Mar 19 11:52:51.121655 master-0 kubenswrapper[3958]: I0319 11:52:51.121469 3958 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:52:51.122640 master-0 kubenswrapper[3958]: E0319 11:52:51.121677 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6t6sn" podUID="398bcaca-1bea-4633-a78f-717e3d015ddd" Mar 19 11:52:51.893597 master-0 kubenswrapper[3958]: I0319 11:52:51.893383 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-v66z4"] Mar 19 11:52:51.893872 master-0 kubenswrapper[3958]: I0319 11:52:51.893738 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 11:52:51.893872 master-0 kubenswrapper[3958]: E0319 11:52:51.893825 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v66z4" podUID="616dbb32-6b65-4e44-a217-6b1be2844cc9" Mar 19 11:52:51.955412 master-0 kubenswrapper[3958]: I0319 11:52:51.955378 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g6zz\" (UniqueName: \"kubernetes.io/projected/616dbb32-6b65-4e44-a217-6b1be2844cc9-kube-api-access-7g6zz\") pod \"network-check-target-v66z4\" (UID: \"616dbb32-6b65-4e44-a217-6b1be2844cc9\") " pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 11:52:52.056452 master-0 kubenswrapper[3958]: I0319 11:52:52.056345 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g6zz\" (UniqueName: \"kubernetes.io/projected/616dbb32-6b65-4e44-a217-6b1be2844cc9-kube-api-access-7g6zz\") pod \"network-check-target-v66z4\" (UID: \"616dbb32-6b65-4e44-a217-6b1be2844cc9\") " pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 11:52:52.068748 master-0 kubenswrapper[3958]: E0319 11:52:52.068662 3958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 19 11:52:52.068748 master-0 kubenswrapper[3958]: E0319 11:52:52.068747 3958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 19 11:52:52.068968 master-0 kubenswrapper[3958]: E0319 11:52:52.068765 3958 projected.go:194] Error preparing data for projected volume kube-api-access-7g6zz for pod openshift-network-diagnostics/network-check-target-v66z4: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 11:52:52.068968 master-0 kubenswrapper[3958]: E0319 11:52:52.068856 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/616dbb32-6b65-4e44-a217-6b1be2844cc9-kube-api-access-7g6zz podName:616dbb32-6b65-4e44-a217-6b1be2844cc9 nodeName:}" failed. 
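The kube-api-access-7g6zz volume failing here is the API server's generated projected volume: a bound service-account token combined with the kube-root-ca.crt and openshift-service-ca.crt configmaps, which is why both objects must be registered in the kubelet's watch cache before SetUp can succeed (the reflector "Caches populated" lines further down show the same objects arriving for the openshift-network-node-identity namespace). Roughly the shape of such a volume, expressed with k8s.io/api/core/v1 types; the field values are illustrative, since the real spec is injected server-side:

    // Approximate shape of a generated kube-api-access-* projected volume.
    package example

    import corev1 "k8s.io/api/core/v1"

    func kubeAPIAccessVolume(name string) corev1.Volume {
        expiry := int64(3607)  // illustrative token lifetime
        mode := int32(0o644)   // illustrative default file mode
        return corev1.Volume{
            Name: name, // e.g. "kube-api-access-7g6zz"
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    DefaultMode: &mode,
                    Sources: []corev1.VolumeProjection{
                        {ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
                            Path:              "token",
                            ExpirationSeconds: &expiry,
                        }},
                        // These two configmaps are exactly the objects the errors
                        // above report as "not registered".
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
                            Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
                        }},
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "openshift-service-ca.crt"},
                            Items:                []corev1.KeyToPath{{Key: "service-ca.crt", Path: "service-ca.crt"}},
                        }},
                    },
                },
            },
        }
    }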
No retries permitted until 2026-03-19 11:52:52.568833935 +0000 UTC m=+103.242555137 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7g6zz" (UniqueName: "kubernetes.io/projected/616dbb32-6b65-4e44-a217-6b1be2844cc9-kube-api-access-7g6zz") pod "network-check-target-v66z4" (UID: "616dbb32-6b65-4e44-a217-6b1be2844cc9") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 11:52:52.662788 master-0 kubenswrapper[3958]: I0319 11:52:52.662728 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g6zz\" (UniqueName: \"kubernetes.io/projected/616dbb32-6b65-4e44-a217-6b1be2844cc9-kube-api-access-7g6zz\") pod \"network-check-target-v66z4\" (UID: \"616dbb32-6b65-4e44-a217-6b1be2844cc9\") " pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 11:52:52.663634 master-0 kubenswrapper[3958]: E0319 11:52:52.662889 3958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 19 11:52:52.663634 master-0 kubenswrapper[3958]: E0319 11:52:52.662906 3958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 19 11:52:52.663634 master-0 kubenswrapper[3958]: E0319 11:52:52.662917 3958 projected.go:194] Error preparing data for projected volume kube-api-access-7g6zz for pod openshift-network-diagnostics/network-check-target-v66z4: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 11:52:52.663634 master-0 kubenswrapper[3958]: E0319 11:52:52.662965 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/616dbb32-6b65-4e44-a217-6b1be2844cc9-kube-api-access-7g6zz podName:616dbb32-6b65-4e44-a217-6b1be2844cc9 nodeName:}" failed. No retries permitted until 2026-03-19 11:52:53.662951725 +0000 UTC m=+104.336672907 (durationBeforeRetry 1s). 
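Across these MountVolume retries the durationBeforeRetry values double per operation: 500ms here, then 1s, 2s, 4s, 8s, 16s for kube-api-access-7g6zz, and 32s for the secret-backed volumes elsewhere in this log. A minimal sketch of that doubling-with-cap policy follows; it illustrates the visible progression, not the kubelet's actual nestedpendingoperations code, and the 32s cap is an assumption for the demo:

    // Exponential backoff matching the durationBeforeRetry progression above.
    package main

    import (
        "fmt"
        "time"
    )

    type backoff struct {
        initial, max, next time.Duration
    }

    func (b *backoff) durationBeforeRetry() time.Duration {
        if b.next == 0 {
            b.next = b.initial // first failure: start at the base delay
        } else {
            b.next *= 2 // each later failure: double, up to the cap
            if b.next > b.max {
                b.next = b.max
            }
        }
        return b.next
    }

    func main() {
        b := &backoff{initial: 500 * time.Millisecond, max: 32 * time.Second}
        for i := 0; i < 7; i++ {
            fmt.Println(b.durationBeforeRetry()) // 500ms 1s 2s 4s 8s 16s 32s
        }
    }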
Error: MountVolume.SetUp failed for volume "kube-api-access-7g6zz" (UniqueName: "kubernetes.io/projected/616dbb32-6b65-4e44-a217-6b1be2844cc9-kube-api-access-7g6zz") pod "network-check-target-v66z4" (UID: "616dbb32-6b65-4e44-a217-6b1be2844cc9") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 11:52:52.840456 master-0 kubenswrapper[3958]: I0319 11:52:52.840404 3958 generic.go:334] "Generic (PLEG): container finished" podID="7044a7b3-4fac-40af-a31c-054a1a1db26b" containerID="056242a76e14af2b45592d6a5dba2e28b2cd2e138b0b1a0f773a8e9eef170947" exitCode=0 Mar 19 11:52:52.840456 master-0 kubenswrapper[3958]: I0319 11:52:52.840449 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2z4h8" event={"ID":"7044a7b3-4fac-40af-a31c-054a1a1db26b","Type":"ContainerDied","Data":"056242a76e14af2b45592d6a5dba2e28b2cd2e138b0b1a0f773a8e9eef170947"} Mar 19 11:52:53.066228 master-0 kubenswrapper[3958]: I0319 11:52:53.066154 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs\") pod \"network-metrics-daemon-6t6sn\" (UID: \"398bcaca-1bea-4633-a78f-717e3d015ddd\") " pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:52:53.066448 master-0 kubenswrapper[3958]: E0319 11:52:53.066341 3958 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 11:52:53.066448 master-0 kubenswrapper[3958]: E0319 11:52:53.066446 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs podName:398bcaca-1bea-4633-a78f-717e3d015ddd nodeName:}" failed. No retries permitted until 2026-03-19 11:53:09.066423719 +0000 UTC m=+119.740144901 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs") pod "network-metrics-daemon-6t6sn" (UID: "398bcaca-1bea-4633-a78f-717e3d015ddd") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 11:52:53.267195 master-0 kubenswrapper[3958]: I0319 11:52:53.121991 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:52:53.267195 master-0 kubenswrapper[3958]: E0319 11:52:53.122169 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6t6sn" podUID="398bcaca-1bea-4633-a78f-717e3d015ddd" Mar 19 11:52:53.670442 master-0 kubenswrapper[3958]: I0319 11:52:53.670397 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g6zz\" (UniqueName: \"kubernetes.io/projected/616dbb32-6b65-4e44-a217-6b1be2844cc9-kube-api-access-7g6zz\") pod \"network-check-target-v66z4\" (UID: \"616dbb32-6b65-4e44-a217-6b1be2844cc9\") " pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 11:52:53.670923 master-0 kubenswrapper[3958]: E0319 11:52:53.670688 3958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 19 11:52:53.670923 master-0 kubenswrapper[3958]: E0319 11:52:53.670735 3958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 19 11:52:53.670923 master-0 kubenswrapper[3958]: E0319 11:52:53.670754 3958 projected.go:194] Error preparing data for projected volume kube-api-access-7g6zz for pod openshift-network-diagnostics/network-check-target-v66z4: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 11:52:53.670923 master-0 kubenswrapper[3958]: E0319 11:52:53.670851 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/616dbb32-6b65-4e44-a217-6b1be2844cc9-kube-api-access-7g6zz podName:616dbb32-6b65-4e44-a217-6b1be2844cc9 nodeName:}" failed. No retries permitted until 2026-03-19 11:52:55.670825412 +0000 UTC m=+106.344546594 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-7g6zz" (UniqueName: "kubernetes.io/projected/616dbb32-6b65-4e44-a217-6b1be2844cc9-kube-api-access-7g6zz") pod "network-check-target-v66z4" (UID: "616dbb32-6b65-4e44-a217-6b1be2844cc9") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 11:52:54.123630 master-0 kubenswrapper[3958]: I0319 11:52:54.123578 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 11:52:54.123844 master-0 kubenswrapper[3958]: E0319 11:52:54.123695 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v66z4" podUID="616dbb32-6b65-4e44-a217-6b1be2844cc9" Mar 19 11:52:55.121454 master-0 kubenswrapper[3958]: I0319 11:52:55.121400 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:52:55.122142 master-0 kubenswrapper[3958]: E0319 11:52:55.121528 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6t6sn" podUID="398bcaca-1bea-4633-a78f-717e3d015ddd" Mar 19 11:52:55.688323 master-0 kubenswrapper[3958]: I0319 11:52:55.688248 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g6zz\" (UniqueName: \"kubernetes.io/projected/616dbb32-6b65-4e44-a217-6b1be2844cc9-kube-api-access-7g6zz\") pod \"network-check-target-v66z4\" (UID: \"616dbb32-6b65-4e44-a217-6b1be2844cc9\") " pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 11:52:55.688559 master-0 kubenswrapper[3958]: E0319 11:52:55.688413 3958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 19 11:52:55.688559 master-0 kubenswrapper[3958]: E0319 11:52:55.688429 3958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 19 11:52:55.688559 master-0 kubenswrapper[3958]: E0319 11:52:55.688442 3958 projected.go:194] Error preparing data for projected volume kube-api-access-7g6zz for pod openshift-network-diagnostics/network-check-target-v66z4: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 11:52:55.688559 master-0 kubenswrapper[3958]: E0319 11:52:55.688495 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/616dbb32-6b65-4e44-a217-6b1be2844cc9-kube-api-access-7g6zz podName:616dbb32-6b65-4e44-a217-6b1be2844cc9 nodeName:}" failed. No retries permitted until 2026-03-19 11:52:59.688480418 +0000 UTC m=+110.362201600 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-7g6zz" (UniqueName: "kubernetes.io/projected/616dbb32-6b65-4e44-a217-6b1be2844cc9-kube-api-access-7g6zz") pod "network-check-target-v66z4" (UID: "616dbb32-6b65-4e44-a217-6b1be2844cc9") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 11:52:56.111945 master-0 kubenswrapper[3958]: I0319 11:52:56.111872 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-wd4nx"] Mar 19 11:52:56.112888 master-0 kubenswrapper[3958]: I0319 11:52:56.112847 3958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-wd4nx" Mar 19 11:52:56.115285 master-0 kubenswrapper[3958]: I0319 11:52:56.115249 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 19 11:52:56.115370 master-0 kubenswrapper[3958]: I0319 11:52:56.115331 3958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 19 11:52:56.115411 master-0 kubenswrapper[3958]: I0319 11:52:56.115360 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 19 11:52:56.115578 master-0 kubenswrapper[3958]: I0319 11:52:56.115539 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 19 11:52:56.115644 master-0 kubenswrapper[3958]: I0319 11:52:56.115546 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 19 11:52:56.121218 master-0 kubenswrapper[3958]: I0319 11:52:56.121167 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 11:52:56.121387 master-0 kubenswrapper[3958]: E0319 11:52:56.121335 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v66z4" podUID="616dbb32-6b65-4e44-a217-6b1be2844cc9" Mar 19 11:52:56.193202 master-0 kubenswrapper[3958]: I0319 11:52:56.193134 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-864rg\" (UniqueName: \"kubernetes.io/projected/8414b6b0-ee16-47a5-982b-ee58b136cfcf-kube-api-access-864rg\") pod \"network-node-identity-wd4nx\" (UID: \"8414b6b0-ee16-47a5-982b-ee58b136cfcf\") " pod="openshift-network-node-identity/network-node-identity-wd4nx" Mar 19 11:52:56.193614 master-0 kubenswrapper[3958]: I0319 11:52:56.193220 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8414b6b0-ee16-47a5-982b-ee58b136cfcf-env-overrides\") pod \"network-node-identity-wd4nx\" (UID: \"8414b6b0-ee16-47a5-982b-ee58b136cfcf\") " pod="openshift-network-node-identity/network-node-identity-wd4nx" Mar 19 11:52:56.193614 master-0 kubenswrapper[3958]: I0319 11:52:56.193363 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/8414b6b0-ee16-47a5-982b-ee58b136cfcf-ovnkube-identity-cm\") pod \"network-node-identity-wd4nx\" (UID: \"8414b6b0-ee16-47a5-982b-ee58b136cfcf\") " pod="openshift-network-node-identity/network-node-identity-wd4nx" Mar 19 11:52:56.193614 master-0 kubenswrapper[3958]: I0319 11:52:56.193419 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8414b6b0-ee16-47a5-982b-ee58b136cfcf-webhook-cert\") pod \"network-node-identity-wd4nx\" (UID: \"8414b6b0-ee16-47a5-982b-ee58b136cfcf\") " 
pod="openshift-network-node-identity/network-node-identity-wd4nx" Mar 19 11:52:56.496313 master-0 kubenswrapper[3958]: I0319 11:52:56.294761 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-864rg\" (UniqueName: \"kubernetes.io/projected/8414b6b0-ee16-47a5-982b-ee58b136cfcf-kube-api-access-864rg\") pod \"network-node-identity-wd4nx\" (UID: \"8414b6b0-ee16-47a5-982b-ee58b136cfcf\") " pod="openshift-network-node-identity/network-node-identity-wd4nx" Mar 19 11:52:56.496313 master-0 kubenswrapper[3958]: I0319 11:52:56.294866 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8414b6b0-ee16-47a5-982b-ee58b136cfcf-env-overrides\") pod \"network-node-identity-wd4nx\" (UID: \"8414b6b0-ee16-47a5-982b-ee58b136cfcf\") " pod="openshift-network-node-identity/network-node-identity-wd4nx" Mar 19 11:52:56.496313 master-0 kubenswrapper[3958]: I0319 11:52:56.295151 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/8414b6b0-ee16-47a5-982b-ee58b136cfcf-ovnkube-identity-cm\") pod \"network-node-identity-wd4nx\" (UID: \"8414b6b0-ee16-47a5-982b-ee58b136cfcf\") " pod="openshift-network-node-identity/network-node-identity-wd4nx" Mar 19 11:52:56.496313 master-0 kubenswrapper[3958]: I0319 11:52:56.295332 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8414b6b0-ee16-47a5-982b-ee58b136cfcf-webhook-cert\") pod \"network-node-identity-wd4nx\" (UID: \"8414b6b0-ee16-47a5-982b-ee58b136cfcf\") " pod="openshift-network-node-identity/network-node-identity-wd4nx" Mar 19 11:52:56.496313 master-0 kubenswrapper[3958]: I0319 11:52:56.296012 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8414b6b0-ee16-47a5-982b-ee58b136cfcf-env-overrides\") pod \"network-node-identity-wd4nx\" (UID: \"8414b6b0-ee16-47a5-982b-ee58b136cfcf\") " pod="openshift-network-node-identity/network-node-identity-wd4nx" Mar 19 11:52:56.496313 master-0 kubenswrapper[3958]: I0319 11:52:56.296173 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/8414b6b0-ee16-47a5-982b-ee58b136cfcf-ovnkube-identity-cm\") pod \"network-node-identity-wd4nx\" (UID: \"8414b6b0-ee16-47a5-982b-ee58b136cfcf\") " pod="openshift-network-node-identity/network-node-identity-wd4nx" Mar 19 11:52:56.496313 master-0 kubenswrapper[3958]: I0319 11:52:56.300036 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8414b6b0-ee16-47a5-982b-ee58b136cfcf-webhook-cert\") pod \"network-node-identity-wd4nx\" (UID: \"8414b6b0-ee16-47a5-982b-ee58b136cfcf\") " pod="openshift-network-node-identity/network-node-identity-wd4nx" Mar 19 11:52:56.795510 master-0 kubenswrapper[3958]: I0319 11:52:56.795470 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-864rg\" (UniqueName: \"kubernetes.io/projected/8414b6b0-ee16-47a5-982b-ee58b136cfcf-kube-api-access-864rg\") pod \"network-node-identity-wd4nx\" (UID: \"8414b6b0-ee16-47a5-982b-ee58b136cfcf\") " pod="openshift-network-node-identity/network-node-identity-wd4nx" Mar 19 11:52:57.027429 master-0 kubenswrapper[3958]: I0319 11:52:57.027355 3958 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-wd4nx" Mar 19 11:52:57.121162 master-0 kubenswrapper[3958]: I0319 11:52:57.121019 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:52:57.121162 master-0 kubenswrapper[3958]: E0319 11:52:57.121151 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6t6sn" podUID="398bcaca-1bea-4633-a78f-717e3d015ddd" Mar 19 11:52:57.857450 master-0 kubenswrapper[3958]: I0319 11:52:57.857399 3958 generic.go:334] "Generic (PLEG): container finished" podID="7044a7b3-4fac-40af-a31c-054a1a1db26b" containerID="d621a54b4c12065eb160ef19e85adc68090a98c2fb8fea5b5228543edbaf07e1" exitCode=0 Mar 19 11:52:57.858041 master-0 kubenswrapper[3958]: I0319 11:52:57.857715 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2z4h8" event={"ID":"7044a7b3-4fac-40af-a31c-054a1a1db26b","Type":"ContainerDied","Data":"d621a54b4c12065eb160ef19e85adc68090a98c2fb8fea5b5228543edbaf07e1"} Mar 19 11:52:57.859038 master-0 kubenswrapper[3958]: I0319 11:52:57.859007 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-wd4nx" event={"ID":"8414b6b0-ee16-47a5-982b-ee58b136cfcf","Type":"ContainerStarted","Data":"5396ef64e03af5cd8fbb98838e00f4f08020d9b7b41c5ccef26950f1e41fec60"} Mar 19 11:52:58.121707 master-0 kubenswrapper[3958]: I0319 11:52:58.121166 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 11:52:58.121707 master-0 kubenswrapper[3958]: E0319 11:52:58.121288 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v66z4" podUID="616dbb32-6b65-4e44-a217-6b1be2844cc9" Mar 19 11:52:59.121268 master-0 kubenswrapper[3958]: I0319 11:52:59.121218 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:52:59.121765 master-0 kubenswrapper[3958]: E0319 11:52:59.121357 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6t6sn" podUID="398bcaca-1bea-4633-a78f-717e3d015ddd" Mar 19 11:52:59.736611 master-0 kubenswrapper[3958]: I0319 11:52:59.736530 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g6zz\" (UniqueName: \"kubernetes.io/projected/616dbb32-6b65-4e44-a217-6b1be2844cc9-kube-api-access-7g6zz\") pod \"network-check-target-v66z4\" (UID: \"616dbb32-6b65-4e44-a217-6b1be2844cc9\") " pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 11:52:59.737061 master-0 kubenswrapper[3958]: E0319 11:52:59.736787 3958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 19 11:52:59.737061 master-0 kubenswrapper[3958]: E0319 11:52:59.736848 3958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 19 11:52:59.737061 master-0 kubenswrapper[3958]: E0319 11:52:59.736866 3958 projected.go:194] Error preparing data for projected volume kube-api-access-7g6zz for pod openshift-network-diagnostics/network-check-target-v66z4: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 11:52:59.737061 master-0 kubenswrapper[3958]: E0319 11:52:59.736938 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/616dbb32-6b65-4e44-a217-6b1be2844cc9-kube-api-access-7g6zz podName:616dbb32-6b65-4e44-a217-6b1be2844cc9 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:07.73691634 +0000 UTC m=+118.410637532 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-7g6zz" (UniqueName: "kubernetes.io/projected/616dbb32-6b65-4e44-a217-6b1be2844cc9-kube-api-access-7g6zz") pod "network-check-target-v66z4" (UID: "616dbb32-6b65-4e44-a217-6b1be2844cc9") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 11:53:00.236448 master-0 kubenswrapper[3958]: I0319 11:53:00.236373 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 11:53:00.237922 master-0 kubenswrapper[3958]: E0319 11:53:00.237882 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v66z4" podUID="616dbb32-6b65-4e44-a217-6b1be2844cc9" Mar 19 11:53:00.237922 master-0 kubenswrapper[3958]: I0319 11:53:00.237903 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:53:00.238034 master-0 kubenswrapper[3958]: E0319 11:53:00.237987 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6t6sn" podUID="398bcaca-1bea-4633-a78f-717e3d015ddd" Mar 19 11:53:02.121633 master-0 kubenswrapper[3958]: I0319 11:53:02.121529 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 11:53:02.121633 master-0 kubenswrapper[3958]: I0319 11:53:02.121600 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:53:02.122610 master-0 kubenswrapper[3958]: E0319 11:53:02.121702 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v66z4" podUID="616dbb32-6b65-4e44-a217-6b1be2844cc9" Mar 19 11:53:02.122610 master-0 kubenswrapper[3958]: E0319 11:53:02.121767 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6t6sn" podUID="398bcaca-1bea-4633-a78f-717e3d015ddd" Mar 19 11:53:04.121269 master-0 kubenswrapper[3958]: I0319 11:53:04.121182 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 11:53:04.121269 master-0 kubenswrapper[3958]: I0319 11:53:04.121266 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:53:04.122031 master-0 kubenswrapper[3958]: E0319 11:53:04.121403 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v66z4" podUID="616dbb32-6b65-4e44-a217-6b1be2844cc9" Mar 19 11:53:04.122031 master-0 kubenswrapper[3958]: E0319 11:53:04.121577 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6t6sn" podUID="398bcaca-1bea-4633-a78f-717e3d015ddd" Mar 19 11:53:06.124581 master-0 kubenswrapper[3958]: I0319 11:53:06.124522 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 11:53:06.125216 master-0 kubenswrapper[3958]: E0319 11:53:06.124669 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v66z4" podUID="616dbb32-6b65-4e44-a217-6b1be2844cc9" Mar 19 11:53:06.125274 master-0 kubenswrapper[3958]: I0319 11:53:06.125246 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:53:06.125360 master-0 kubenswrapper[3958]: E0319 11:53:06.125325 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6t6sn" podUID="398bcaca-1bea-4633-a78f-717e3d015ddd" Mar 19 11:53:07.808840 master-0 kubenswrapper[3958]: I0319 11:53:07.808767 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g6zz\" (UniqueName: \"kubernetes.io/projected/616dbb32-6b65-4e44-a217-6b1be2844cc9-kube-api-access-7g6zz\") pod \"network-check-target-v66z4\" (UID: \"616dbb32-6b65-4e44-a217-6b1be2844cc9\") " pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 11:53:07.809347 master-0 kubenswrapper[3958]: E0319 11:53:07.808928 3958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 19 11:53:07.809347 master-0 kubenswrapper[3958]: E0319 11:53:07.808942 3958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 19 11:53:07.809347 master-0 kubenswrapper[3958]: E0319 11:53:07.808953 3958 projected.go:194] Error preparing data for projected volume kube-api-access-7g6zz for pod openshift-network-diagnostics/network-check-target-v66z4: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 11:53:07.809347 master-0 kubenswrapper[3958]: E0319 11:53:07.808993 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/616dbb32-6b65-4e44-a217-6b1be2844cc9-kube-api-access-7g6zz podName:616dbb32-6b65-4e44-a217-6b1be2844cc9 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:23.808980755 +0000 UTC m=+134.482701937 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-7g6zz" (UniqueName: "kubernetes.io/projected/616dbb32-6b65-4e44-a217-6b1be2844cc9-kube-api-access-7g6zz") pod "network-check-target-v66z4" (UID: "616dbb32-6b65-4e44-a217-6b1be2844cc9") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 11:53:08.122860 master-0 kubenswrapper[3958]: I0319 11:53:08.122688 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 11:53:08.122860 master-0 kubenswrapper[3958]: E0319 11:53:08.122849 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v66z4" podUID="616dbb32-6b65-4e44-a217-6b1be2844cc9" Mar 19 11:53:08.123098 master-0 kubenswrapper[3958]: I0319 11:53:08.122924 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:53:08.123240 master-0 kubenswrapper[3958]: E0319 11:53:08.123184 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6t6sn" podUID="398bcaca-1bea-4633-a78f-717e3d015ddd" Mar 19 11:53:09.120902 master-0 kubenswrapper[3958]: I0319 11:53:09.120279 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs\") pod \"network-metrics-daemon-6t6sn\" (UID: \"398bcaca-1bea-4633-a78f-717e3d015ddd\") " pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:53:09.120902 master-0 kubenswrapper[3958]: E0319 11:53:09.120562 3958 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 11:53:09.120902 master-0 kubenswrapper[3958]: E0319 11:53:09.120642 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs podName:398bcaca-1bea-4633-a78f-717e3d015ddd nodeName:}" failed. No retries permitted until 2026-03-19 11:53:41.120620039 +0000 UTC m=+151.794341221 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs") pod "network-metrics-daemon-6t6sn" (UID: "398bcaca-1bea-4633-a78f-717e3d015ddd") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 11:53:10.025359 master-0 kubenswrapper[3958]: E0319 11:53:10.025303 3958 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Mar 19 11:53:10.122164 master-0 kubenswrapper[3958]: I0319 11:53:10.122086 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 11:53:10.122945 master-0 kubenswrapper[3958]: E0319 11:53:10.122735 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v66z4" podUID="616dbb32-6b65-4e44-a217-6b1be2844cc9" Mar 19 11:53:10.122945 master-0 kubenswrapper[3958]: I0319 11:53:10.122909 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:53:10.123485 master-0 kubenswrapper[3958]: E0319 11:53:10.123101 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6t6sn" podUID="398bcaca-1bea-4633-a78f-717e3d015ddd" Mar 19 11:53:10.240450 master-0 kubenswrapper[3958]: E0319 11:53:10.240087 3958 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 19 11:53:12.121828 master-0 kubenswrapper[3958]: I0319 11:53:12.121757 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 11:53:12.121828 master-0 kubenswrapper[3958]: I0319 11:53:12.121791 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:53:12.122488 master-0 kubenswrapper[3958]: E0319 11:53:12.121930 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v66z4" podUID="616dbb32-6b65-4e44-a217-6b1be2844cc9" Mar 19 11:53:12.122488 master-0 kubenswrapper[3958]: E0319 11:53:12.122185 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6t6sn" podUID="398bcaca-1bea-4633-a78f-717e3d015ddd" Mar 19 11:53:13.879770 master-0 kubenswrapper[3958]: I0319 11:53:13.879530 3958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dfq9s"] Mar 19 11:53:14.121929 master-0 kubenswrapper[3958]: I0319 11:53:14.121875 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 11:53:14.122139 master-0 kubenswrapper[3958]: E0319 11:53:14.121991 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v66z4" podUID="616dbb32-6b65-4e44-a217-6b1be2844cc9" Mar 19 11:53:14.122139 master-0 kubenswrapper[3958]: I0319 11:53:14.122133 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:53:14.122338 master-0 kubenswrapper[3958]: E0319 11:53:14.122205 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6t6sn" podUID="398bcaca-1bea-4633-a78f-717e3d015ddd" Mar 19 11:53:14.905391 master-0 kubenswrapper[3958]: I0319 11:53:14.905267 3958 generic.go:334] "Generic (PLEG): container finished" podID="e0fd5e09-140d-49a5-b542-d2584fdffb43" containerID="716d25cb6659bbdfcd4212839d9b9a94b0e9cfbbe4aa442ba93908a0f053f9ef" exitCode=0 Mar 19 11:53:14.906551 master-0 kubenswrapper[3958]: I0319 11:53:14.905434 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" event={"ID":"e0fd5e09-140d-49a5-b542-d2584fdffb43","Type":"ContainerDied","Data":"716d25cb6659bbdfcd4212839d9b9a94b0e9cfbbe4aa442ba93908a0f053f9ef"} Mar 19 11:53:14.913251 master-0 kubenswrapper[3958]: I0319 11:53:14.913210 3958 generic.go:334] "Generic (PLEG): container finished" podID="7044a7b3-4fac-40af-a31c-054a1a1db26b" containerID="09e947b1211885dac847d7f6f4b5d685a97ae8ac56061459ae15b5ca2dde25cb" exitCode=0 Mar 19 11:53:14.916443 master-0 kubenswrapper[3958]: I0319 11:53:14.914295 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2z4h8" event={"ID":"7044a7b3-4fac-40af-a31c-054a1a1db26b","Type":"ContainerDied","Data":"09e947b1211885dac847d7f6f4b5d685a97ae8ac56061459ae15b5ca2dde25cb"} Mar 19 11:53:14.920524 master-0 kubenswrapper[3958]: I0319 11:53:14.920447 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t" event={"ID":"bf226d89-450d-4876-a113-345632b94ee9","Type":"ContainerStarted","Data":"e708db8e66828556f8b708025575f23f8aa12842fc7126337dc3672b562dc4b1"} Mar 19 11:53:14.923440 master-0 kubenswrapper[3958]: I0319 11:53:14.923365 3958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:53:14.927590 master-0 kubenswrapper[3958]: I0319 11:53:14.927539 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-wd4nx" event={"ID":"8414b6b0-ee16-47a5-982b-ee58b136cfcf","Type":"ContainerStarted","Data":"acd01abcc3b9701b51c684ecc460502246e3fa79a2f3e8b56cc2aec4e47bef9f"} Mar 19 11:53:14.927590 master-0 kubenswrapper[3958]: I0319 11:53:14.927573 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-wd4nx" event={"ID":"8414b6b0-ee16-47a5-982b-ee58b136cfcf","Type":"ContainerStarted","Data":"9056c1fc7fd95fa7aafeb785b453a91ad0a0bc459dc640aabc135173a8c4a812"} Mar 19 11:53:14.970390 master-0 kubenswrapper[3958]: I0319 11:53:14.970269 3958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-kubelet\") pod \"e0fd5e09-140d-49a5-b542-d2584fdffb43\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " Mar 19 11:53:14.970390 master-0 kubenswrapper[3958]: I0319 11:53:14.970346 3958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e0fd5e09-140d-49a5-b542-d2584fdffb43-ovnkube-script-lib\") pod \"e0fd5e09-140d-49a5-b542-d2584fdffb43\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " Mar 19 11:53:14.970390 master-0 kubenswrapper[3958]: I0319 11:53:14.970386 3958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-run-ovn-kubernetes\") pod \"e0fd5e09-140d-49a5-b542-d2584fdffb43\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " Mar 19 11:53:14.970730 master-0 kubenswrapper[3958]: I0319 11:53:14.970422 3958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-run-openvswitch\") pod \"e0fd5e09-140d-49a5-b542-d2584fdffb43\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " Mar 19 11:53:14.970730 master-0 kubenswrapper[3958]: I0319 11:53:14.970456 3958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-etc-openvswitch\") pod \"e0fd5e09-140d-49a5-b542-d2584fdffb43\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " Mar 19 11:53:14.970730 master-0 kubenswrapper[3958]: I0319 11:53:14.970495 3958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e0fd5e09-140d-49a5-b542-d2584fdffb43-ovn-node-metrics-cert\") pod \"e0fd5e09-140d-49a5-b542-d2584fdffb43\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " Mar 19 11:53:14.970730 master-0 kubenswrapper[3958]: I0319 11:53:14.970535 3958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e0fd5e09-140d-49a5-b542-d2584fdffb43-ovnkube-config\") pod \"e0fd5e09-140d-49a5-b542-d2584fdffb43\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " Mar 19 11:53:14.970730 master-0 kubenswrapper[3958]: I0319 11:53:14.970570 3958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/e0fd5e09-140d-49a5-b542-d2584fdffb43-env-overrides\") pod \"e0fd5e09-140d-49a5-b542-d2584fdffb43\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " Mar 19 11:53:14.970730 master-0 kubenswrapper[3958]: I0319 11:53:14.970599 3958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-cni-netd\") pod \"e0fd5e09-140d-49a5-b542-d2584fdffb43\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " Mar 19 11:53:14.970730 master-0 kubenswrapper[3958]: I0319 11:53:14.970640 3958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-var-lib-openvswitch\") pod \"e0fd5e09-140d-49a5-b542-d2584fdffb43\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " Mar 19 11:53:14.970730 master-0 kubenswrapper[3958]: I0319 11:53:14.970674 3958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-run-systemd\") pod \"e0fd5e09-140d-49a5-b542-d2584fdffb43\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " Mar 19 11:53:14.970730 master-0 kubenswrapper[3958]: I0319 11:53:14.970708 3958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-slash\") pod \"e0fd5e09-140d-49a5-b542-d2584fdffb43\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " Mar 19 11:53:14.971004 master-0 kubenswrapper[3958]: I0319 11:53:14.970744 3958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhzlh\" (UniqueName: \"kubernetes.io/projected/e0fd5e09-140d-49a5-b542-d2584fdffb43-kube-api-access-qhzlh\") pod \"e0fd5e09-140d-49a5-b542-d2584fdffb43\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " Mar 19 11:53:14.971004 master-0 kubenswrapper[3958]: I0319 11:53:14.970786 3958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-run-ovn\") pod \"e0fd5e09-140d-49a5-b542-d2584fdffb43\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " Mar 19 11:53:14.971004 master-0 kubenswrapper[3958]: I0319 11:53:14.970855 3958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-var-lib-cni-networks-ovn-kubernetes\") pod \"e0fd5e09-140d-49a5-b542-d2584fdffb43\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " Mar 19 11:53:14.971004 master-0 kubenswrapper[3958]: I0319 11:53:14.970888 3958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-node-log\") pod \"e0fd5e09-140d-49a5-b542-d2584fdffb43\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " Mar 19 11:53:14.971004 master-0 kubenswrapper[3958]: I0319 11:53:14.970926 3958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-cni-bin\") pod \"e0fd5e09-140d-49a5-b542-d2584fdffb43\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " Mar 19 11:53:14.971004 master-0 
kubenswrapper[3958]: I0319 11:53:14.970963 3958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-systemd-units\") pod \"e0fd5e09-140d-49a5-b542-d2584fdffb43\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " Mar 19 11:53:14.971004 master-0 kubenswrapper[3958]: I0319 11:53:14.970988 3958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-run-netns\") pod \"e0fd5e09-140d-49a5-b542-d2584fdffb43\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " Mar 19 11:53:14.971004 master-0 kubenswrapper[3958]: I0319 11:53:14.971008 3958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-log-socket\") pod \"e0fd5e09-140d-49a5-b542-d2584fdffb43\" (UID: \"e0fd5e09-140d-49a5-b542-d2584fdffb43\") " Mar 19 11:53:14.971810 master-0 kubenswrapper[3958]: I0319 11:53:14.971378 3958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-node-log" (OuterVolumeSpecName: "node-log") pod "e0fd5e09-140d-49a5-b542-d2584fdffb43" (UID: "e0fd5e09-140d-49a5-b542-d2584fdffb43"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:53:14.971810 master-0 kubenswrapper[3958]: I0319 11:53:14.971473 3958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "e0fd5e09-140d-49a5-b542-d2584fdffb43" (UID: "e0fd5e09-140d-49a5-b542-d2584fdffb43"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:53:14.973228 master-0 kubenswrapper[3958]: I0319 11:53:14.971947 3958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "e0fd5e09-140d-49a5-b542-d2584fdffb43" (UID: "e0fd5e09-140d-49a5-b542-d2584fdffb43"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:53:14.973228 master-0 kubenswrapper[3958]: I0319 11:53:14.971970 3958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "e0fd5e09-140d-49a5-b542-d2584fdffb43" (UID: "e0fd5e09-140d-49a5-b542-d2584fdffb43"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:53:14.973228 master-0 kubenswrapper[3958]: I0319 11:53:14.972005 3958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "e0fd5e09-140d-49a5-b542-d2584fdffb43" (UID: "e0fd5e09-140d-49a5-b542-d2584fdffb43"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:53:14.973228 master-0 kubenswrapper[3958]: I0319 11:53:14.972022 3958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "e0fd5e09-140d-49a5-b542-d2584fdffb43" (UID: "e0fd5e09-140d-49a5-b542-d2584fdffb43"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:53:14.973228 master-0 kubenswrapper[3958]: I0319 11:53:14.972054 3958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "e0fd5e09-140d-49a5-b542-d2584fdffb43" (UID: "e0fd5e09-140d-49a5-b542-d2584fdffb43"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:53:14.973228 master-0 kubenswrapper[3958]: I0319 11:53:14.972054 3958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "e0fd5e09-140d-49a5-b542-d2584fdffb43" (UID: "e0fd5e09-140d-49a5-b542-d2584fdffb43"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:53:14.973228 master-0 kubenswrapper[3958]: I0319 11:53:14.972087 3958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-log-socket" (OuterVolumeSpecName: "log-socket") pod "e0fd5e09-140d-49a5-b542-d2584fdffb43" (UID: "e0fd5e09-140d-49a5-b542-d2584fdffb43"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:53:14.973228 master-0 kubenswrapper[3958]: I0319 11:53:14.972099 3958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "e0fd5e09-140d-49a5-b542-d2584fdffb43" (UID: "e0fd5e09-140d-49a5-b542-d2584fdffb43"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:53:14.973228 master-0 kubenswrapper[3958]: I0319 11:53:14.972681 3958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0fd5e09-140d-49a5-b542-d2584fdffb43-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "e0fd5e09-140d-49a5-b542-d2584fdffb43" (UID: "e0fd5e09-140d-49a5-b542-d2584fdffb43"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 11:53:14.973658 master-0 kubenswrapper[3958]: I0319 11:53:14.973474 3958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-slash" (OuterVolumeSpecName: "host-slash") pod "e0fd5e09-140d-49a5-b542-d2584fdffb43" (UID: "e0fd5e09-140d-49a5-b542-d2584fdffb43"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:53:14.973658 master-0 kubenswrapper[3958]: I0319 11:53:14.973521 3958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "e0fd5e09-140d-49a5-b542-d2584fdffb43" (UID: "e0fd5e09-140d-49a5-b542-d2584fdffb43"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:53:14.973658 master-0 kubenswrapper[3958]: I0319 11:53:14.973555 3958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "e0fd5e09-140d-49a5-b542-d2584fdffb43" (UID: "e0fd5e09-140d-49a5-b542-d2584fdffb43"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:53:14.973658 master-0 kubenswrapper[3958]: I0319 11:53:14.973561 3958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "e0fd5e09-140d-49a5-b542-d2584fdffb43" (UID: "e0fd5e09-140d-49a5-b542-d2584fdffb43"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:53:14.973658 master-0 kubenswrapper[3958]: I0319 11:53:14.973586 3958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "e0fd5e09-140d-49a5-b542-d2584fdffb43" (UID: "e0fd5e09-140d-49a5-b542-d2584fdffb43"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:53:14.979293 master-0 kubenswrapper[3958]: I0319 11:53:14.979231 3958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0fd5e09-140d-49a5-b542-d2584fdffb43-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "e0fd5e09-140d-49a5-b542-d2584fdffb43" (UID: "e0fd5e09-140d-49a5-b542-d2584fdffb43"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 11:53:14.979479 master-0 kubenswrapper[3958]: I0319 11:53:14.979443 3958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0fd5e09-140d-49a5-b542-d2584fdffb43-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "e0fd5e09-140d-49a5-b542-d2584fdffb43" (UID: "e0fd5e09-140d-49a5-b542-d2584fdffb43"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 11:53:14.979659 master-0 kubenswrapper[3958]: I0319 11:53:14.979628 3958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0fd5e09-140d-49a5-b542-d2584fdffb43-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "e0fd5e09-140d-49a5-b542-d2584fdffb43" (UID: "e0fd5e09-140d-49a5-b542-d2584fdffb43"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 11:53:14.985706 master-0 kubenswrapper[3958]: I0319 11:53:14.985631 3958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0fd5e09-140d-49a5-b542-d2584fdffb43-kube-api-access-qhzlh" (OuterVolumeSpecName: "kube-api-access-qhzlh") pod "e0fd5e09-140d-49a5-b542-d2584fdffb43" (UID: "e0fd5e09-140d-49a5-b542-d2584fdffb43"). InnerVolumeSpecName "kube-api-access-qhzlh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 11:53:15.014095 master-0 kubenswrapper[3958]: I0319 11:53:15.014008 3958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-wd4nx" podStartSLOduration=3.205698684 podStartE2EDuration="20.013983905s" podCreationTimestamp="2026-03-19 11:52:55 +0000 UTC" firstStartedPulling="2026-03-19 11:52:57.170028354 +0000 UTC m=+107.843749536" lastFinishedPulling="2026-03-19 11:53:13.978313555 +0000 UTC m=+124.652034757" observedRunningTime="2026-03-19 11:53:15.013272522 +0000 UTC m=+125.686993704" watchObservedRunningTime="2026-03-19 11:53:15.013983905 +0000 UTC m=+125.687705097" Mar 19 11:53:15.046260 master-0 kubenswrapper[3958]: I0319 11:53:15.046134 3958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t" podStartSLOduration=3.6795990769999998 podStartE2EDuration="27.046099863s" podCreationTimestamp="2026-03-19 11:52:48 +0000 UTC" firstStartedPulling="2026-03-19 11:52:50.614743351 +0000 UTC m=+101.288464533" lastFinishedPulling="2026-03-19 11:53:13.981244137 +0000 UTC m=+124.654965319" observedRunningTime="2026-03-19 11:53:15.044746961 +0000 UTC m=+125.718468163" watchObservedRunningTime="2026-03-19 11:53:15.046099863 +0000 UTC m=+125.719821055" Mar 19 11:53:15.073630 master-0 kubenswrapper[3958]: I0319 11:53:15.072508 3958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhzlh\" (UniqueName: \"kubernetes.io/projected/e0fd5e09-140d-49a5-b542-d2584fdffb43-kube-api-access-qhzlh\") on node \"master-0\" DevicePath \"\"" Mar 19 11:53:15.073630 master-0 kubenswrapper[3958]: I0319 11:53:15.072567 3958 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-run-ovn\") on node \"master-0\" DevicePath \"\"" Mar 19 11:53:15.073630 master-0 kubenswrapper[3958]: I0319 11:53:15.072587 3958 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Mar 19 11:53:15.073630 master-0 kubenswrapper[3958]: I0319 11:53:15.072619 3958 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-node-log\") on node \"master-0\" DevicePath \"\"" Mar 19 11:53:15.073630 master-0 kubenswrapper[3958]: I0319 11:53:15.072644 3958 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-cni-bin\") on node \"master-0\" DevicePath \"\"" Mar 19 11:53:15.073630 master-0 kubenswrapper[3958]: I0319 11:53:15.072661 3958 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-run-netns\") on node \"master-0\" DevicePath \"\"" Mar 19 11:53:15.073630 master-0 kubenswrapper[3958]: I0319 11:53:15.072677 3958 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-systemd-units\") on node \"master-0\" DevicePath \"\"" Mar 19 11:53:15.073630 master-0 kubenswrapper[3958]: I0319 11:53:15.072692 3958 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-log-socket\") on node \"master-0\" DevicePath \"\"" Mar 19 11:53:15.073630 master-0 kubenswrapper[3958]: I0319 11:53:15.072707 3958 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-kubelet\") on node \"master-0\" DevicePath \"\"" Mar 19 11:53:15.073630 master-0 kubenswrapper[3958]: I0319 11:53:15.072724 3958 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-run-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Mar 19 11:53:15.073630 master-0 kubenswrapper[3958]: I0319 11:53:15.072742 3958 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e0fd5e09-140d-49a5-b542-d2584fdffb43-ovnkube-script-lib\") on node \"master-0\" DevicePath \"\"" Mar 19 11:53:15.073630 master-0 kubenswrapper[3958]: I0319 11:53:15.072757 3958 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-run-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 19 11:53:15.073630 master-0 kubenswrapper[3958]: I0319 11:53:15.072773 3958 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-etc-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 19 11:53:15.073630 master-0 kubenswrapper[3958]: I0319 11:53:15.072789 3958 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e0fd5e09-140d-49a5-b542-d2584fdffb43-ovnkube-config\") on node \"master-0\" DevicePath \"\"" Mar 19 11:53:15.073630 master-0 kubenswrapper[3958]: I0319 11:53:15.072825 3958 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e0fd5e09-140d-49a5-b542-d2584fdffb43-env-overrides\") on node \"master-0\" DevicePath \"\"" Mar 19 11:53:15.073630 master-0 kubenswrapper[3958]: I0319 11:53:15.072840 3958 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-cni-netd\") on node \"master-0\" DevicePath \"\"" Mar 19 11:53:15.073630 master-0 kubenswrapper[3958]: I0319 11:53:15.072855 3958 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e0fd5e09-140d-49a5-b542-d2584fdffb43-ovn-node-metrics-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 11:53:15.073630 master-0 kubenswrapper[3958]: I0319 11:53:15.072872 3958 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-var-lib-openvswitch\") on node \"master-0\" 
DevicePath \"\"" Mar 19 11:53:15.073630 master-0 kubenswrapper[3958]: I0319 11:53:15.072887 3958 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-run-systemd\") on node \"master-0\" DevicePath \"\"" Mar 19 11:53:15.073630 master-0 kubenswrapper[3958]: I0319 11:53:15.072905 3958 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e0fd5e09-140d-49a5-b542-d2584fdffb43-host-slash\") on node \"master-0\" DevicePath \"\"" Mar 19 11:53:15.242455 master-0 kubenswrapper[3958]: E0319 11:53:15.242380 3958 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 19 11:53:15.934030 master-0 kubenswrapper[3958]: I0319 11:53:15.933961 3958 generic.go:334] "Generic (PLEG): container finished" podID="7044a7b3-4fac-40af-a31c-054a1a1db26b" containerID="6bec5ff668b2f0913a9713d16292d3781feb7dfeeb82d87acec30ea3bfcbeb08" exitCode=0 Mar 19 11:53:15.934917 master-0 kubenswrapper[3958]: I0319 11:53:15.934028 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2z4h8" event={"ID":"7044a7b3-4fac-40af-a31c-054a1a1db26b","Type":"ContainerDied","Data":"6bec5ff668b2f0913a9713d16292d3781feb7dfeeb82d87acec30ea3bfcbeb08"} Mar 19 11:53:15.937685 master-0 kubenswrapper[3958]: I0319 11:53:15.937616 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" event={"ID":"e0fd5e09-140d-49a5-b542-d2584fdffb43","Type":"ContainerDied","Data":"c27e98a561ffe786fc1b95b71c3a149aa1f22e3037947fc028437c10cba9712b"} Mar 19 11:53:15.937771 master-0 kubenswrapper[3958]: I0319 11:53:15.937726 3958 scope.go:117] "RemoveContainer" containerID="716d25cb6659bbdfcd4212839d9b9a94b0e9cfbbe4aa442ba93908a0f053f9ef" Mar 19 11:53:15.938112 master-0 kubenswrapper[3958]: I0319 11:53:15.938050 3958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dfq9s" Mar 19 11:53:16.007416 master-0 kubenswrapper[3958]: I0319 11:53:16.007338 3958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dfq9s"] Mar 19 11:53:16.015634 master-0 kubenswrapper[3958]: I0319 11:53:16.015584 3958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dfq9s"] Mar 19 11:53:16.024278 master-0 kubenswrapper[3958]: I0319 11:53:16.024223 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lk9x9"] Mar 19 11:53:16.024491 master-0 kubenswrapper[3958]: E0319 11:53:16.024387 3958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0fd5e09-140d-49a5-b542-d2584fdffb43" containerName="kubecfg-setup" Mar 19 11:53:16.024491 master-0 kubenswrapper[3958]: I0319 11:53:16.024404 3958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0fd5e09-140d-49a5-b542-d2584fdffb43" containerName="kubecfg-setup" Mar 19 11:53:16.024491 master-0 kubenswrapper[3958]: I0319 11:53:16.024447 3958 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0fd5e09-140d-49a5-b542-d2584fdffb43" containerName="kubecfg-setup" Mar 19 11:53:16.025211 master-0 kubenswrapper[3958]: I0319 11:53:16.025179 3958 util.go:30] "No sandbox for pod can be found. 
Mar 19 11:53:16.027785 master-0 kubenswrapper[3958]: I0319 11:53:16.027754 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 19 11:53:16.028791 master-0 kubenswrapper[3958]: I0319 11:53:16.028756 3958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 19 11:53:16.082965 master-0 kubenswrapper[3958]: I0319 11:53:16.082782 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-kubelet\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.083155 master-0 kubenswrapper[3958]: I0319 11:53:16.083059 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9d2db220-4d5b-4819-a910-b186e1e9fb3e-ovnkube-script-lib\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.083155 master-0 kubenswrapper[3958]: I0319 11:53:16.083143 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-etc-openvswitch\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.083232 master-0 kubenswrapper[3958]: I0319 11:53:16.083186 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-slash\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.083283 master-0 kubenswrapper[3958]: I0319 11:53:16.083257 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-cni-bin\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.083336 master-0 kubenswrapper[3958]: I0319 11:53:16.083300 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9d2db220-4d5b-4819-a910-b186e1e9fb3e-env-overrides\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.083375 master-0 kubenswrapper[3958]: I0319 11:53:16.083332 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-run-netns\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.083423 master-0 kubenswrapper[3958]: I0319 11:53:16.083395 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-run-openvswitch\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.083455 master-0 kubenswrapper[3958]: I0319 11:53:16.083438 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-run-ovn\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.083489 master-0 kubenswrapper[3958]: I0319 11:53:16.083472 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-node-log\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.083519 master-0 kubenswrapper[3958]: I0319 11:53:16.083504 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-run-ovn-kubernetes\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.083565 master-0 kubenswrapper[3958]: I0319 11:53:16.083543 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wshb2\" (UniqueName: \"kubernetes.io/projected/9d2db220-4d5b-4819-a910-b186e1e9fb3e-kube-api-access-wshb2\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.083667 master-0 kubenswrapper[3958]: I0319 11:53:16.083634 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-systemd-units\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.083755 master-0 kubenswrapper[3958]: I0319 11:53:16.083719 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-log-socket\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.083814 master-0 kubenswrapper[3958]: I0319 11:53:16.083761 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-run-systemd\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.083814 master-0 kubenswrapper[3958]: I0319 11:53:16.083778 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-cni-netd\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.083877 master-0 kubenswrapper[3958]: I0319 11:53:16.083813 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.083877 master-0 kubenswrapper[3958]: I0319 11:53:16.083832 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-var-lib-openvswitch\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.083877 master-0 kubenswrapper[3958]: I0319 11:53:16.083848 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9d2db220-4d5b-4819-a910-b186e1e9fb3e-ovnkube-config\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.083963 master-0 kubenswrapper[3958]: I0319 11:53:16.083879 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9d2db220-4d5b-4819-a910-b186e1e9fb3e-ovn-node-metrics-cert\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.121592 master-0 kubenswrapper[3958]: I0319 11:53:16.121535 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6t6sn"
Mar 19 11:53:16.121784 master-0 kubenswrapper[3958]: E0319 11:53:16.121737 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6t6sn" podUID="398bcaca-1bea-4633-a78f-717e3d015ddd"
Mar 19 11:53:16.121929 master-0 kubenswrapper[3958]: I0319 11:53:16.121903 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v66z4"
Mar 19 11:53:16.122026 master-0 kubenswrapper[3958]: E0319 11:53:16.122002 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v66z4" podUID="616dbb32-6b65-4e44-a217-6b1be2844cc9"
Mar 19 11:53:16.126664 master-0 kubenswrapper[3958]: I0319 11:53:16.126628 3958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0fd5e09-140d-49a5-b542-d2584fdffb43" path="/var/lib/kubelet/pods/e0fd5e09-140d-49a5-b542-d2584fdffb43/volumes"
Mar 19 11:53:16.184681 master-0 kubenswrapper[3958]: I0319 11:53:16.184285 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-cni-bin\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.184681 master-0 kubenswrapper[3958]: I0319 11:53:16.184622 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-run-netns\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.184681 master-0 kubenswrapper[3958]: I0319 11:53:16.184641 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-run-openvswitch\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.184681 master-0 kubenswrapper[3958]: I0319 11:53:16.184445 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-cni-bin\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.185060 master-0 kubenswrapper[3958]: I0319 11:53:16.184743 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-run-netns\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.185060 master-0 kubenswrapper[3958]: I0319 11:53:16.184848 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-run-openvswitch\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.185060 master-0 kubenswrapper[3958]: I0319 11:53:16.184842 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9d2db220-4d5b-4819-a910-b186e1e9fb3e-env-overrides\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.185060 master-0 kubenswrapper[3958]: I0319 11:53:16.184883 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-run-ovn\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.185060 master-0 kubenswrapper[3958]: I0319 11:53:16.184907 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-node-log\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.185060 master-0 kubenswrapper[3958]: I0319 11:53:16.184929 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-run-ovn-kubernetes\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.185060 master-0 kubenswrapper[3958]: I0319 11:53:16.184958 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wshb2\" (UniqueName: \"kubernetes.io/projected/9d2db220-4d5b-4819-a910-b186e1e9fb3e-kube-api-access-wshb2\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.185060 master-0 kubenswrapper[3958]: I0319 11:53:16.184976 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-run-ovn\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.185060 master-0 kubenswrapper[3958]: I0319 11:53:16.185011 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-systemd-units\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.185060 master-0 kubenswrapper[3958]: I0319 11:53:16.184989 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-systemd-units\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.185060 master-0 kubenswrapper[3958]: I0319 11:53:16.185069 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-run-ovn-kubernetes\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.185718 master-0 kubenswrapper[3958]: I0319 11:53:16.185080 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-node-log\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.185718 master-0 kubenswrapper[3958]: I0319 11:53:16.185187 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-log-socket\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.185718 master-0 kubenswrapper[3958]: I0319 11:53:16.185246 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.185718 master-0 kubenswrapper[3958]: I0319 11:53:16.185304 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-run-systemd\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.185718 master-0 kubenswrapper[3958]: I0319 11:53:16.185301 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-log-socket\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.185718 master-0 kubenswrapper[3958]: I0319 11:53:16.185354 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-run-systemd\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.185718 master-0 kubenswrapper[3958]: I0319 11:53:16.185361 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.185718 master-0 kubenswrapper[3958]: I0319 11:53:16.185380 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-cni-netd\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.185718 master-0 kubenswrapper[3958]: I0319 11:53:16.185401 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-cni-netd\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.185718 master-0 kubenswrapper[3958]: I0319 11:53:16.185432 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-var-lib-openvswitch\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.185718 master-0 kubenswrapper[3958]: I0319 11:53:16.185466 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9d2db220-4d5b-4819-a910-b186e1e9fb3e-ovnkube-config\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.185718 master-0 kubenswrapper[3958]: I0319 11:53:16.185586 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-var-lib-openvswitch\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.185718 master-0 kubenswrapper[3958]: I0319 11:53:16.185684 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9d2db220-4d5b-4819-a910-b186e1e9fb3e-ovn-node-metrics-cert\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.186529 master-0 kubenswrapper[3958]: I0319 11:53:16.185743 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9d2db220-4d5b-4819-a910-b186e1e9fb3e-ovnkube-script-lib\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.186529 master-0 kubenswrapper[3958]: I0319 11:53:16.185788 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-kubelet\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.186529 master-0 kubenswrapper[3958]: I0319 11:53:16.186007 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-kubelet\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.186529 master-0 kubenswrapper[3958]: I0319 11:53:16.186124 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-etc-openvswitch\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.186529 master-0 kubenswrapper[3958]: I0319 11:53:16.186209 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-etc-openvswitch\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.186529 master-0 kubenswrapper[3958]: I0319 11:53:16.186295 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-slash\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.186529 master-0 kubenswrapper[3958]: I0319 11:53:16.186371 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-slash\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.186529 master-0 kubenswrapper[3958]: I0319 11:53:16.186514 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9d2db220-4d5b-4819-a910-b186e1e9fb3e-ovnkube-config\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.186529 master-0 kubenswrapper[3958]: I0319 11:53:16.186530 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9d2db220-4d5b-4819-a910-b186e1e9fb3e-ovnkube-script-lib\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.187095 master-0 kubenswrapper[3958]: I0319 11:53:16.186739 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9d2db220-4d5b-4819-a910-b186e1e9fb3e-env-overrides\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.189944 master-0 kubenswrapper[3958]: I0319 11:53:16.189896 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9d2db220-4d5b-4819-a910-b186e1e9fb3e-ovn-node-metrics-cert\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.203547 master-0 kubenswrapper[3958]: I0319 11:53:16.203481 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wshb2\" (UniqueName: \"kubernetes.io/projected/9d2db220-4d5b-4819-a910-b186e1e9fb3e-kube-api-access-wshb2\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:16.343294 master-0 kubenswrapper[3958]: I0319 11:53:16.343205 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
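[editor's annotation] For the replacement pod the reconciler runs its usual three-step volume flow, all visible above: VerifyControllerAttachedVolume for each declared volume, then operationExecutor.MountVolume started, then MountVolume.SetUp succeeded. Everything except the projected service-account token completes within a few milliseconds; the token follows at 11:53:16.203. A sketch that pairs the started/succeeded lines to spot a volume that never mounts; the file name and the regex are assumptions fitted to this log's format:

    # Diff "MountVolume started" against "MountVolume.SetUp succeeded".
    import re

    volume_name = re.compile(r'volume \\?"([A-Za-z0-9-]+)\\?"')
    started, succeeded = set(), set()
    with open("kubelet.journal") as log:  # hypothetical capture of this journal
        for line in log:
            m = volume_name.search(line)
            if not m:
                continue
            if "operationExecutor.MountVolume started" in line:
                started.add(m.group(1))
            elif "MountVolume.SetUp succeeded" in line:
                succeeded.add(m.group(1))
    print("never mounted:", started - succeeded or "none")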
Mar 19 11:53:16.356921 master-0 kubenswrapper[3958]: W0319 11:53:16.356850 3958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d2db220_4d5b_4819_a910_b186e1e9fb3e.slice/crio-06387b0da219a3280f534fa3f57451c18534c297d33ad18d4503e53efc4f6f2f WatchSource:0}: Error finding container 06387b0da219a3280f534fa3f57451c18534c297d33ad18d4503e53efc4f6f2f: Status 404 returned error can't find the container with id 06387b0da219a3280f534fa3f57451c18534c297d33ad18d4503e53efc4f6f2f Mar 19 11:53:16.944550 master-0 kubenswrapper[3958]: I0319 11:53:16.944485 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2z4h8" event={"ID":"7044a7b3-4fac-40af-a31c-054a1a1db26b","Type":"ContainerStarted","Data":"35cd08b43eae70d55b64b5230ac5a5a4490935aee7230a99cd9c622c34e6ef5e"} Mar 19 11:53:16.946027 master-0 kubenswrapper[3958]: I0319 11:53:16.945974 3958 generic.go:334] "Generic (PLEG): container finished" podID="9d2db220-4d5b-4819-a910-b186e1e9fb3e" containerID="d91c3177fcc79be021d9124f0b7323db9969b5d246ad69be6568e14b2bb1c146" exitCode=0 Mar 19 11:53:16.946144 master-0 kubenswrapper[3958]: I0319 11:53:16.946032 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" event={"ID":"9d2db220-4d5b-4819-a910-b186e1e9fb3e","Type":"ContainerDied","Data":"d91c3177fcc79be021d9124f0b7323db9969b5d246ad69be6568e14b2bb1c146"} Mar 19 11:53:16.946144 master-0 kubenswrapper[3958]: I0319 11:53:16.946056 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" event={"ID":"9d2db220-4d5b-4819-a910-b186e1e9fb3e","Type":"ContainerStarted","Data":"06387b0da219a3280f534fa3f57451c18534c297d33ad18d4503e53efc4f6f2f"} Mar 19 11:53:17.000438 master-0 kubenswrapper[3958]: I0319 11:53:17.000352 3958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-2z4h8" podStartSLOduration=3.901502089 podStartE2EDuration="41.000324727s" podCreationTimestamp="2026-03-19 11:52:36 +0000 UTC" firstStartedPulling="2026-03-19 11:52:36.844530559 +0000 UTC m=+87.518251771" lastFinishedPulling="2026-03-19 11:53:13.943353227 +0000 UTC m=+124.617074409" observedRunningTime="2026-03-19 11:53:16.966164105 +0000 UTC m=+127.639885367" watchObservedRunningTime="2026-03-19 11:53:17.000324727 +0000 UTC m=+127.674045939" Mar 19 11:53:17.956581 master-0 kubenswrapper[3958]: I0319 11:53:17.955562 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" event={"ID":"9d2db220-4d5b-4819-a910-b186e1e9fb3e","Type":"ContainerStarted","Data":"4cb58204f9a8b327ba3c3f7647fab2543e15f22f2c299ab459992a2eeef2e78a"} Mar 19 11:53:17.956581 master-0 kubenswrapper[3958]: I0319 11:53:17.955914 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" event={"ID":"9d2db220-4d5b-4819-a910-b186e1e9fb3e","Type":"ContainerStarted","Data":"180f41aa7016f79efa7aae87665866adbc779f6b422597a842485f3260a2777a"} Mar 19 11:53:17.956581 master-0 kubenswrapper[3958]: I0319 11:53:17.955932 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" event={"ID":"9d2db220-4d5b-4819-a910-b186e1e9fb3e","Type":"ContainerStarted","Data":"27fafb53ff7be25a3cafa508c91d4b493839ab6826848a76fa2b24ad4ef11c29"}
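[Note: The "Observed pod startup duration" entry above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (41.000324727s), and podStartSLOduration is that figure minus the image-pull window lastFinishedPulling - firstStartedPulling (about 37.098822668s), leaving about 3.9015s, which matches the logged 3.901502089 up to the kubelet reading its clocks at slightly different instants. A sketch that reproduces the arithmetic from the logged timestamps:]

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-03-19 11:52:36 +0000 UTC")
	firstPull := mustParse("2026-03-19 11:52:36.844530559 +0000 UTC")
	lastPull := mustParse("2026-03-19 11:53:13.943353227 +0000 UTC")
	observed := mustParse("2026-03-19 11:53:17.000324727 +0000 UTC")

	e2e := observed.Sub(created)       // podStartE2EDuration: 41.000324727s
	pulling := lastPull.Sub(firstPull) // image pulls: 37.098822668s
	slo := e2e - pulling               // podStartSLOduration: ~3.9015s
	fmt.Println(e2e, pulling, slo)
}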
Mar 19 11:53:17.956581 master-0 kubenswrapper[3958]: I0319 11:53:17.955946 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" event={"ID":"9d2db220-4d5b-4819-a910-b186e1e9fb3e","Type":"ContainerStarted","Data":"c21108c4b841128fae632387e81ec3d553892f4abda1b111fd4e203ae9d9dc4d"} Mar 19 11:53:17.956581 master-0 kubenswrapper[3958]: I0319 11:53:17.955958 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" event={"ID":"9d2db220-4d5b-4819-a910-b186e1e9fb3e","Type":"ContainerStarted","Data":"dc68ba5cca536937e55cc08609b782d10da1bde0ef1b46df93f814e7d7b49d70"} Mar 19 11:53:17.956581 master-0 kubenswrapper[3958]: I0319 11:53:17.955970 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" event={"ID":"9d2db220-4d5b-4819-a910-b186e1e9fb3e","Type":"ContainerStarted","Data":"64e281a8cc9164c46eed509bf4a0cae281d6401696e7ae261874636e7c3217d5"} Mar 19 11:53:18.121257 master-0 kubenswrapper[3958]: I0319 11:53:18.121187 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:53:18.121257 master-0 kubenswrapper[3958]: I0319 11:53:18.121252 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 11:53:18.121532 master-0 kubenswrapper[3958]: E0319 11:53:18.121453 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6t6sn" podUID="398bcaca-1bea-4633-a78f-717e3d015ddd" Mar 19 11:53:18.121651 master-0 kubenswrapper[3958]: E0319 11:53:18.121595 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v66z4" podUID="616dbb32-6b65-4e44-a217-6b1be2844cc9" Mar 19 11:53:19.964991 master-0 kubenswrapper[3958]: I0319 11:53:19.964843 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" event={"ID":"9d2db220-4d5b-4819-a910-b186e1e9fb3e","Type":"ContainerStarted","Data":"7ef19a5197d4e4063df78e507089ceb55cad30dd9292eb9a964fdf47d4d5f7d9"} Mar 19 11:53:20.121457 master-0 kubenswrapper[3958]: I0319 11:53:20.121380 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 11:53:20.121667 master-0 kubenswrapper[3958]: I0319 11:53:20.121491 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:53:20.122663 master-0 kubenswrapper[3958]: E0319 11:53:20.122509 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-v66z4" podUID="616dbb32-6b65-4e44-a217-6b1be2844cc9" Mar 19 11:53:20.123138 master-0 kubenswrapper[3958]: E0319 11:53:20.123050 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6t6sn" podUID="398bcaca-1bea-4633-a78f-717e3d015ddd" Mar 19 11:53:20.243922 master-0 kubenswrapper[3958]: E0319 11:53:20.243843 3958 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 19 11:53:21.946860 master-0 kubenswrapper[3958]: I0319 11:53:21.946312 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v" Mar 19 11:53:21.947751 master-0 kubenswrapper[3958]: E0319 11:53:21.946521 3958 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 19 11:53:21.947751 master-0 kubenswrapper[3958]: E0319 11:53:21.947014 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert podName:85912908-c447-4868-871b-82c5eadbfdbe nodeName:}" failed. No retries permitted until 2026-03-19 11:54:25.946983406 +0000 UTC m=+196.620704588 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert") pod "cluster-version-operator-56d8475767-gjj5v" (UID: "85912908-c447-4868-871b-82c5eadbfdbe") : secret "cluster-version-operator-serving-cert" not found Mar 19 11:53:21.975650 master-0 kubenswrapper[3958]: I0319 11:53:21.975621 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" event={"ID":"9d2db220-4d5b-4819-a910-b186e1e9fb3e","Type":"ContainerStarted","Data":"66d345abbdabe6f320e5c9a1ea02fe49b3b4be1fe3c31b413497559a60b4f4af"} Mar 19 11:53:21.976511 master-0 kubenswrapper[3958]: I0319 11:53:21.976491 3958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:21.976554 master-0 kubenswrapper[3958]: I0319 11:53:21.976518 3958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:21.976596 master-0 kubenswrapper[3958]: I0319 11:53:21.976562 3958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:22.000647 master-0 kubenswrapper[3958]: I0319 11:53:22.000297 3958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:22.002396 master-0 kubenswrapper[3958]: I0319 11:53:22.001725 3958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:22.009840 master-0 kubenswrapper[3958]: I0319 11:53:22.008029 3958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" podStartSLOduration=6.007990002 podStartE2EDuration="6.007990002s" podCreationTimestamp="2026-03-19 11:53:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 11:53:22.000059922 +0000 UTC m=+132.673781114" watchObservedRunningTime="2026-03-19 11:53:22.007990002 +0000 UTC m=+132.681711184" Mar 19 11:53:22.124038 master-0 kubenswrapper[3958]: I0319 11:53:22.123174 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 11:53:22.124038 master-0 kubenswrapper[3958]: I0319 11:53:22.123366 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:53:22.124038 master-0 kubenswrapper[3958]: E0319 11:53:22.123492 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6t6sn" podUID="398bcaca-1bea-4633-a78f-717e3d015ddd" Mar 19 11:53:22.124038 master-0 kubenswrapper[3958]: E0319 11:53:22.123735 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v66z4" podUID="616dbb32-6b65-4e44-a217-6b1be2844cc9" Mar 19 11:53:22.137636 master-0 kubenswrapper[3958]: I0319 11:53:22.137536 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"] Mar 19 11:53:23.865059 master-0 kubenswrapper[3958]: I0319 11:53:23.864994 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g6zz\" (UniqueName: \"kubernetes.io/projected/616dbb32-6b65-4e44-a217-6b1be2844cc9-kube-api-access-7g6zz\") pod \"network-check-target-v66z4\" (UID: \"616dbb32-6b65-4e44-a217-6b1be2844cc9\") " pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 11:53:23.865820 master-0 kubenswrapper[3958]: E0319 11:53:23.865245 3958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 19 11:53:23.865820 master-0 kubenswrapper[3958]: E0319 11:53:23.865297 3958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 19 11:53:23.865820 master-0 kubenswrapper[3958]: E0319 11:53:23.865321 3958 projected.go:194] Error preparing data for projected volume kube-api-access-7g6zz for pod openshift-network-diagnostics/network-check-target-v66z4: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 11:53:23.865820 master-0 kubenswrapper[3958]: E0319 11:53:23.865428 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/616dbb32-6b65-4e44-a217-6b1be2844cc9-kube-api-access-7g6zz podName:616dbb32-6b65-4e44-a217-6b1be2844cc9 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:55.865399346 +0000 UTC m=+166.539120568 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-7g6zz" (UniqueName: "kubernetes.io/projected/616dbb32-6b65-4e44-a217-6b1be2844cc9-kube-api-access-7g6zz") pod "network-check-target-v66z4" (UID: "616dbb32-6b65-4e44-a217-6b1be2844cc9") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 11:53:24.121529 master-0 kubenswrapper[3958]: I0319 11:53:24.121333 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 11:53:24.121529 master-0 kubenswrapper[3958]: I0319 11:53:24.121383 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:53:24.121744 master-0 kubenswrapper[3958]: E0319 11:53:24.121529 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v66z4" podUID="616dbb32-6b65-4e44-a217-6b1be2844cc9"
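[Note: The two nestedpendingoperations.go:348 failures in this stretch back off by 32s (the projected volume above) and 1m4s (the cluster-version-operator serving-cert earlier), i.e. the retry delay doubles with consecutive failures of the same operation. A sketch of that schedule; the 500ms seed and roughly-two-minute cap are assumptions taken from kubelet's exponential-backoff helper, not read from this log:]

package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond      // assumed initial durationBeforeRetry
	maxDelay := 2*time.Minute + 2*time.Second // assumed ceiling
	for failure := 1; failure <= 9; failure++ {
		// failure 7 prints 32s, failure 8 prints 1m4s, matching the log above
		fmt.Printf("failure %d: no retries permitted for %v\n", failure, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}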
Mar 19 11:53:24.122010 master-0 kubenswrapper[3958]: E0319 11:53:24.121951 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6t6sn" podUID="398bcaca-1bea-4633-a78f-717e3d015ddd" Mar 19 11:53:24.313360 master-0 kubenswrapper[3958]: I0319 11:53:24.313287 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-v66z4"] Mar 19 11:53:24.316670 master-0 kubenswrapper[3958]: I0319 11:53:24.316612 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-6t6sn"] Mar 19 11:53:24.985997 master-0 kubenswrapper[3958]: I0319 11:53:24.985950 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:53:24.987057 master-0 kubenswrapper[3958]: I0319 11:53:24.985956 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 11:53:24.987057 master-0 kubenswrapper[3958]: E0319 11:53:24.986102 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6t6sn" podUID="398bcaca-1bea-4633-a78f-717e3d015ddd" Mar 19 11:53:24.987057 master-0 kubenswrapper[3958]: E0319 11:53:24.986173 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v66z4" podUID="616dbb32-6b65-4e44-a217-6b1be2844cc9" Mar 19 11:53:25.245204 master-0 kubenswrapper[3958]: E0319 11:53:25.245052 3958 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 19 11:53:27.445598 master-0 kubenswrapper[3958]: I0319 11:53:27.445254 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 11:53:27.446319 master-0 kubenswrapper[3958]: E0319 11:53:27.446284 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v66z4" podUID="616dbb32-6b65-4e44-a217-6b1be2844cc9" Mar 19 11:53:27.446422 master-0 kubenswrapper[3958]: I0319 11:53:27.445268 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6t6sn"
Mar 19 11:53:27.446610 master-0 kubenswrapper[3958]: E0319 11:53:27.446588 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6t6sn" podUID="398bcaca-1bea-4633-a78f-717e3d015ddd" Mar 19 11:53:29.122168 master-0 kubenswrapper[3958]: I0319 11:53:29.122111 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:53:29.122168 master-0 kubenswrapper[3958]: I0319 11:53:29.122159 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 11:53:29.122831 master-0 kubenswrapper[3958]: E0319 11:53:29.122383 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6t6sn" podUID="398bcaca-1bea-4633-a78f-717e3d015ddd" Mar 19 11:53:29.122831 master-0 kubenswrapper[3958]: E0319 11:53:29.122599 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v66z4" podUID="616dbb32-6b65-4e44-a217-6b1be2844cc9" Mar 19 11:53:31.121396 master-0 kubenswrapper[3958]: I0319 11:53:31.121310 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 11:53:31.122179 master-0 kubenswrapper[3958]: I0319 11:53:31.121316 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6t6sn"
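[Note: Every "network not ready" entry in this stretch has the same root cause: the container runtime reports NetworkReady=false because nothing has yet written a CNI config into /etc/kubernetes/cni/net.d/ (ovnkube-node is still starting; the errors stop recurring once the node goes NodeReady below). A rough sketch of the presence check behind that message, assuming the ocicni-style convention of *.conf/*.conflist/*.json files; the real loader also parses and validates the contents:]

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniConfigPresent reports whether dir holds any candidate CNI config file.
func cniConfigPresent(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := cniConfigPresent("/etc/kubernetes/cni/net.d")
	fmt.Println(ok, err) // false until the network plugin drops its config
}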
Mar 19 11:53:31.124159 master-0 kubenswrapper[3958]: I0319 11:53:31.124108 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 19 11:53:31.124743 master-0 kubenswrapper[3958]: I0319 11:53:31.124680 3958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 19 11:53:31.125176 master-0 kubenswrapper[3958]: I0319 11:53:31.125121 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 19 11:53:36.702698 master-0 kubenswrapper[3958]: I0319 11:53:36.702569 3958 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady" Mar 19 11:53:37.540729 master-0 kubenswrapper[3958]: I0319 11:53:37.529439 3958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podStartSLOduration=15.529414953 podStartE2EDuration="15.529414953s" podCreationTimestamp="2026-03-19 11:53:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 11:53:30.138447741 +0000 UTC m=+140.812168963" watchObservedRunningTime="2026-03-19 11:53:37.529414953 +0000 UTC m=+148.203136145" Mar 19 11:53:37.540729 master-0 kubenswrapper[3958]: I0319 11:53:37.537123 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl"] Mar 19 11:53:37.540729 master-0 kubenswrapper[3958]: I0319 11:53:37.537507 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl" Mar 19 11:53:37.543660 master-0 kubenswrapper[3958]: I0319 11:53:37.543583 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt"] Mar 19 11:53:37.543971 master-0 kubenswrapper[3958]: I0319 11:53:37.543921 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-9c5679d8f-z6kvm"] Mar 19 11:53:37.544291 master-0 kubenswrapper[3958]: I0319 11:53:37.544238 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-9c5679d8f-z6kvm" Mar 19 11:53:37.544702 master-0 kubenswrapper[3958]: I0319 11:53:37.544656 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt" Mar 19 11:53:37.548861 master-0 kubenswrapper[3958]: I0319 11:53:37.546155 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt"] Mar 19 11:53:37.548861 master-0 kubenswrapper[3958]: I0319 11:53:37.546604 3958 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:37.556785 master-0 kubenswrapper[3958]: I0319 11:53:37.555106 3958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 19 11:53:37.556785 master-0 kubenswrapper[3958]: I0319 11:53:37.555571 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-2chdm"] Mar 19 11:53:37.556785 master-0 kubenswrapper[3958]: I0319 11:53:37.555962 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-2chdm" Mar 19 11:53:37.560100 master-0 kubenswrapper[3958]: I0319 11:53:37.558863 3958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 19 11:53:37.560100 master-0 kubenswrapper[3958]: I0319 11:53:37.559256 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6"] Mar 19 11:53:37.560100 master-0 kubenswrapper[3958]: I0319 11:53:37.559335 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 19 11:53:37.560100 master-0 kubenswrapper[3958]: I0319 11:53:37.559651 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 19 11:53:37.560100 master-0 kubenswrapper[3958]: I0319 11:53:37.559788 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 19 11:53:37.560100 master-0 kubenswrapper[3958]: I0319 11:53:37.559854 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 19 11:53:37.560100 master-0 kubenswrapper[3958]: I0319 11:53:37.559940 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 19 11:53:37.560100 master-0 kubenswrapper[3958]: I0319 11:53:37.559951 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 19 11:53:37.560100 master-0 kubenswrapper[3958]: I0319 11:53:37.559989 3958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 19 11:53:37.560100 master-0 kubenswrapper[3958]: I0319 11:53:37.559937 3958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 19 11:53:37.560100 master-0 kubenswrapper[3958]: I0319 11:53:37.560070 3958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:37.560100 master-0 kubenswrapper[3958]: I0319 11:53:37.559955 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 19 11:53:37.560641 master-0 kubenswrapper[3958]: I0319 11:53:37.560209 3958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 19 11:53:37.560641 master-0 kubenswrapper[3958]: I0319 11:53:37.560571 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 19 11:53:37.561596 master-0 kubenswrapper[3958]: I0319 11:53:37.561572 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj"] Mar 19 11:53:37.562026 master-0 kubenswrapper[3958]: I0319 11:53:37.562009 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj" Mar 19 11:53:37.564937 master-0 kubenswrapper[3958]: I0319 11:53:37.562312 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 19 11:53:37.564937 master-0 kubenswrapper[3958]: I0319 11:53:37.563714 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cg9pq"] Mar 19 11:53:37.565856 master-0 kubenswrapper[3958]: I0319 11:53:37.565304 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 19 11:53:37.569299 master-0 kubenswrapper[3958]: I0319 11:53:37.566497 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 19 11:53:37.569299 master-0 kubenswrapper[3958]: I0319 11:53:37.567743 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 19 11:53:37.569299 master-0 kubenswrapper[3958]: I0319 11:53:37.567964 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 19 11:53:37.569299 master-0 kubenswrapper[3958]: I0319 11:53:37.568255 3958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 19 11:53:37.569299 master-0 kubenswrapper[3958]: I0319 11:53:37.568630 3958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 19 11:53:37.569299 master-0 kubenswrapper[3958]: I0319 11:53:37.568774 3958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 19 11:53:37.569299 master-0 kubenswrapper[3958]: I0319 11:53:37.569026 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 19 11:53:37.570630 master-0 kubenswrapper[3958]: I0319 11:53:37.570495 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-6wzws"] Mar 19 11:53:37.571574 master-0 kubenswrapper[3958]: I0319 11:53:37.570774 3958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cg9pq" Mar 19 11:53:37.571574 master-0 kubenswrapper[3958]: I0319 11:53:37.571315 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-6wzws" Mar 19 11:53:37.571574 master-0 kubenswrapper[3958]: I0319 11:53:37.571379 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x"] Mar 19 11:53:37.583755 master-0 kubenswrapper[3958]: I0319 11:53:37.583467 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 19 11:53:37.587082 master-0 kubenswrapper[3958]: I0319 11:53:37.585415 3958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 19 11:53:37.587082 master-0 kubenswrapper[3958]: I0319 11:53:37.585845 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 19 11:53:37.587082 master-0 kubenswrapper[3958]: I0319 11:53:37.585863 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x" Mar 19 11:53:37.587082 master-0 kubenswrapper[3958]: I0319 11:53:37.586103 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq"] Mar 19 11:53:37.587082 master-0 kubenswrapper[3958]: I0319 11:53:37.586560 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" Mar 19 11:53:37.587082 master-0 kubenswrapper[3958]: I0319 11:53:37.586650 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d"] Mar 19 11:53:37.587082 master-0 kubenswrapper[3958]: I0319 11:53:37.587093 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d" Mar 19 11:53:37.589928 master-0 kubenswrapper[3958]: I0319 11:53:37.589194 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 19 11:53:37.589928 master-0 kubenswrapper[3958]: I0319 11:53:37.589463 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-ptqdh"] Mar 19 11:53:37.589928 master-0 kubenswrapper[3958]: I0319 11:53:37.589708 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-ptqdh" Mar 19 11:53:37.591598 master-0 kubenswrapper[3958]: I0319 11:53:37.590644 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk"] Mar 19 11:53:37.591598 master-0 kubenswrapper[3958]: I0319 11:53:37.590698 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 19 11:53:37.591598 master-0 kubenswrapper[3958]: I0319 11:53:37.591104 3958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk" Mar 19 11:53:37.591598 master-0 kubenswrapper[3958]: I0319 11:53:37.591276 3958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 19 11:53:37.591598 master-0 kubenswrapper[3958]: I0319 11:53:37.591286 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 19 11:53:37.591598 master-0 kubenswrapper[3958]: I0319 11:53:37.591420 3958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 19 11:53:37.592025 master-0 kubenswrapper[3958]: I0319 11:53:37.591987 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg"] Mar 19 11:53:37.594849 master-0 kubenswrapper[3958]: I0319 11:53:37.592333 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg" Mar 19 11:53:37.594849 master-0 kubenswrapper[3958]: I0319 11:53:37.592457 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 19 11:53:37.594849 master-0 kubenswrapper[3958]: I0319 11:53:37.592347 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fjn4b"] Mar 19 11:53:37.594849 master-0 kubenswrapper[3958]: I0319 11:53:37.593026 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 19 11:53:37.594849 master-0 kubenswrapper[3958]: I0319 11:53:37.593159 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-89ccd998f-pr7gk"] Mar 19 11:53:37.594849 master-0 kubenswrapper[3958]: I0319 11:53:37.593521 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 19 11:53:37.594849 master-0 kubenswrapper[3958]: I0319 11:53:37.593570 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 11:53:37.594849 master-0 kubenswrapper[3958]: I0319 11:53:37.593612 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw"] Mar 19 11:53:37.594849 master-0 kubenswrapper[3958]: I0319 11:53:37.593664 3958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 19 11:53:37.594849 master-0 kubenswrapper[3958]: I0319 11:53:37.593847 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 19 11:53:37.594849 master-0 kubenswrapper[3958]: I0319 11:53:37.594008 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw" Mar 19 11:53:37.594849 master-0 kubenswrapper[3958]: I0319 11:53:37.594079 3958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fjn4b" Mar 19 11:53:37.601188 master-0 kubenswrapper[3958]: I0319 11:53:37.595062 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4"] Mar 19 11:53:37.601188 master-0 kubenswrapper[3958]: I0319 11:53:37.595345 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" Mar 19 11:53:37.601188 master-0 kubenswrapper[3958]: I0319 11:53:37.595454 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 19 11:53:37.601188 master-0 kubenswrapper[3958]: I0319 11:53:37.595544 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 19 11:53:37.601188 master-0 kubenswrapper[3958]: I0319 11:53:37.595590 3958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 19 11:53:37.601188 master-0 kubenswrapper[3958]: I0319 11:53:37.595710 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 19 11:53:37.601188 master-0 kubenswrapper[3958]: I0319 11:53:37.595936 3958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 19 11:53:37.601188 master-0 kubenswrapper[3958]: I0319 11:53:37.596475 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 19 11:53:37.601188 master-0 kubenswrapper[3958]: I0319 11:53:37.597084 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 19 11:53:37.601188 master-0 kubenswrapper[3958]: I0319 11:53:37.597431 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 19 11:53:37.601188 master-0 kubenswrapper[3958]: I0319 11:53:37.597586 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 19 11:53:37.601188 master-0 kubenswrapper[3958]: I0319 11:53:37.597625 3958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 19 11:53:37.601188 master-0 kubenswrapper[3958]: I0319 11:53:37.597831 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 19 11:53:37.601188 master-0 kubenswrapper[3958]: I0319 11:53:37.597855 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 19 11:53:37.601188 master-0 kubenswrapper[3958]: I0319 11:53:37.597986 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 19 11:53:37.601188 master-0 kubenswrapper[3958]: I0319 11:53:37.598039 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg"] Mar 19 11:53:37.601188 master-0 kubenswrapper[3958]: I0319 11:53:37.598652 3958 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4"] Mar 19 11:53:37.601188 master-0 kubenswrapper[3958]: I0319 11:53:37.598948 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg" Mar 19 11:53:37.601188 master-0 kubenswrapper[3958]: I0319 11:53:37.599048 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4" Mar 19 11:53:37.601188 master-0 kubenswrapper[3958]: I0319 11:53:37.599286 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8"] Mar 19 11:53:37.601188 master-0 kubenswrapper[3958]: I0319 11:53:37.599416 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 19 11:53:37.601188 master-0 kubenswrapper[3958]: I0319 11:53:37.599454 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 19 11:53:37.601188 master-0 kubenswrapper[3958]: I0319 11:53:37.599611 3958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 19 11:53:37.601188 master-0 kubenswrapper[3958]: I0319 11:53:37.599637 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8" Mar 19 11:53:37.601188 master-0 kubenswrapper[3958]: I0319 11:53:37.599671 3958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 19 11:53:37.601188 master-0 kubenswrapper[3958]: I0319 11:53:37.600037 3958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 19 11:53:37.601188 master-0 kubenswrapper[3958]: I0319 11:53:37.600576 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 19 11:53:37.601188 master-0 kubenswrapper[3958]: I0319 11:53:37.600736 3958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 19 11:53:37.601188 master-0 kubenswrapper[3958]: I0319 11:53:37.601145 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 19 11:53:37.601188 master-0 kubenswrapper[3958]: I0319 11:53:37.601158 3958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 19 11:53:37.603004 master-0 kubenswrapper[3958]: I0319 11:53:37.601258 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 19 11:53:37.603004 master-0 kubenswrapper[3958]: I0319 11:53:37.601359 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 19 11:53:37.603004 master-0 kubenswrapper[3958]: I0319 11:53:37.601382 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 19 
11:53:37.603004 master-0 kubenswrapper[3958]: I0319 11:53:37.601460 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 19 11:53:37.603004 master-0 kubenswrapper[3958]: I0319 11:53:37.601580 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 19 11:53:37.603004 master-0 kubenswrapper[3958]: I0319 11:53:37.601602 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 19 11:53:37.603004 master-0 kubenswrapper[3958]: I0319 11:53:37.601891 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 19 11:53:37.603004 master-0 kubenswrapper[3958]: I0319 11:53:37.602011 3958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 19 11:53:37.603004 master-0 kubenswrapper[3958]: I0319 11:53:37.602119 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 19 11:53:37.603004 master-0 kubenswrapper[3958]: I0319 11:53:37.602181 3958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 19 11:53:37.603004 master-0 kubenswrapper[3958]: I0319 11:53:37.602220 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-66b84d69b-btppx"] Mar 19 11:53:37.603004 master-0 kubenswrapper[3958]: I0319 11:53:37.602353 3958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 19 11:53:37.603004 master-0 kubenswrapper[3958]: I0319 11:53:37.602425 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 19 11:53:37.603004 master-0 kubenswrapper[3958]: I0319 11:53:37.602443 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 19 11:53:37.603004 master-0 kubenswrapper[3958]: I0319 11:53:37.602739 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" Mar 19 11:53:37.604150 master-0 kubenswrapper[3958]: I0319 11:53:37.604097 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6"] Mar 19 11:53:37.604723 master-0 kubenswrapper[3958]: I0319 11:53:37.604663 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6"
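[Note: The burst of reflector.go:368 "Caches populated" entries is the client-go reflector pattern: a list+watch fills a local cache, and consumers block on HasSynced before reading, which is why the earlier "object ... not registered" projected-volume failures clear once these caches land. The kubelet scopes each watch to the single secret or configmap a pod references; the sketch below uses a namespace-wide shared informer instead, which is coarser but shows the same sync gate:]

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a reachable kubeconfig at the default location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// List+watch ConfigMaps in one namespace and wait for the cache to fill.
	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 30*time.Second, informers.WithNamespace("openshift-etcd-operator"))
	inf := factory.Core().V1().ConfigMaps().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	if !cache.WaitForCacheSync(stop, inf.HasSynced) {
		panic("cache never synced")
	}
	fmt.Println("Caches populated for *v1.ConfigMap")
}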
Mar 19 11:53:37.605236 master-0 kubenswrapper[3958]: I0319 11:53:37.605188 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 19 11:53:37.606462 master-0 kubenswrapper[3958]: I0319 11:53:37.606393 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz"] Mar 19 11:53:37.607010 master-0 kubenswrapper[3958]: I0319 11:53:37.606949 3958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 19 11:53:37.607209 master-0 kubenswrapper[3958]: I0319 11:53:37.607087 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" Mar 19 11:53:37.607347 master-0 kubenswrapper[3958]: I0319 11:53:37.607300 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 19 11:53:37.607765 master-0 kubenswrapper[3958]: I0319 11:53:37.606951 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 19 11:53:37.607765 master-0 kubenswrapper[3958]: I0319 11:53:37.607494 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 19 11:53:37.608053 master-0 kubenswrapper[3958]: I0319 11:53:37.607095 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 19 11:53:37.608053 master-0 kubenswrapper[3958]: I0319 11:53:37.607717 3958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 19 11:53:37.620446 master-0 kubenswrapper[3958]: I0319 11:53:37.620318 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 19 11:53:37.622073 master-0 kubenswrapper[3958]: I0319 11:53:37.622007 3958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 19 11:53:37.622331 master-0 kubenswrapper[3958]: I0319 11:53:37.622280 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 19 11:53:37.622824 master-0 kubenswrapper[3958]: I0319 11:53:37.622755 3958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 19 11:53:37.624302 master-0 kubenswrapper[3958]: I0319 11:53:37.623925 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 19 11:53:37.624302 master-0 kubenswrapper[3958]: I0319 11:53:37.624043 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 19 11:53:37.624302 master-0 kubenswrapper[3958]: I0319 11:53:37.624107 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 19 11:53:37.625186 master-0 kubenswrapper[3958]: I0319 11:53:37.625145 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 19 11:53:37.625973 master-0 kubenswrapper[3958]: I0319 11:53:37.625945 3958 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3017b5e-178e-49de-89d2-817a18398203-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" Mar 19 11:53:37.626101 master-0 kubenswrapper[3958]: I0319 11:53:37.625987 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk\" (UID: \"d9ab6ec4-eec9-4d27-8b43-2aaf954f098f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk" Mar 19 11:53:37.626101 master-0 kubenswrapper[3958]: I0319 11:53:37.626013 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5n89\" (UniqueName: \"kubernetes.io/projected/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f-kube-api-access-h5n89\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk\" (UID: \"d9ab6ec4-eec9-4d27-8b43-2aaf954f098f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk" Mar 19 11:53:37.626101 master-0 kubenswrapper[3958]: I0319 11:53:37.626038 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/661b8957-a890-4032-9e57-45e2e0b35249-config\") pod \"service-ca-operator-b865698dc-sxsxt\" (UID: \"661b8957-a890-4032-9e57-45e2e0b35249\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt" Mar 19 11:53:37.626101 master-0 kubenswrapper[3958]: I0319 11:53:37.626061 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-images\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw" Mar 19 11:53:37.626101 master-0 kubenswrapper[3958]: I0319 11:53:37.626084 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/c2dbd8b3-0e02-4747-a166-80aa6a94b060-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-cg9pq\" (UID: \"c2dbd8b3-0e02-4747-a166-80aa6a94b060\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cg9pq" Mar 19 11:53:37.626507 master-0 kubenswrapper[3958]: I0319 11:53:37.626108 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/c2dbd8b3-0e02-4747-a166-80aa6a94b060-operand-assets\") pod \"cluster-olm-operator-67dcd4998-cg9pq\" (UID: \"c2dbd8b3-0e02-4747-a166-80aa6a94b060\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cg9pq" Mar 19 11:53:37.626507 master-0 kubenswrapper[3958]: I0319 11:53:37.626181 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xpc2\" (UniqueName: \"kubernetes.io/projected/19de6601-10d4-4112-a21f-0398d2b160d1-kube-api-access-6xpc2\") pod 
\"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:37.626507 master-0 kubenswrapper[3958]: I0319 11:53:37.626210 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:37.626507 master-0 kubenswrapper[3958]: I0319 11:53:37.626314 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tqdb\" (UniqueName: \"kubernetes.io/projected/b0f5939c-48b1-4d6c-9712-9128a78d603b-kube-api-access-6tqdb\") pod \"marketplace-operator-89ccd998f-pr7gk\" (UID: \"b0f5939c-48b1-4d6c-9712-9128a78d603b\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 11:53:37.626507 master-0 kubenswrapper[3958]: I0319 11:53:37.626428 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk\" (UID: \"d9ab6ec4-eec9-4d27-8b43-2aaf954f098f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk" Mar 19 11:53:37.626871 master-0 kubenswrapper[3958]: I0319 11:53:37.626527 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-pr7gk\" (UID: \"b0f5939c-48b1-4d6c-9712-9128a78d603b\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 11:53:37.626871 master-0 kubenswrapper[3958]: I0319 11:53:37.626629 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/19de6601-10d4-4112-a21f-0398d2b160d1-images\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:37.626871 master-0 kubenswrapper[3958]: I0319 11:53:37.626705 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv6bc\" (UniqueName: \"kubernetes.io/projected/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-kube-api-access-pv6bc\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw" Mar 19 11:53:37.626871 master-0 kubenswrapper[3958]: I0319 11:53:37.626776 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-6j2nj\" (UID: \"beb562de-402b-4d9f-b5ed-090b60847a95\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj" Mar 19 11:53:37.629284 master-0 
kubenswrapper[3958]: I0319 11:53:37.626926 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hq8f\" (UniqueName: \"kubernetes.io/projected/661b8957-a890-4032-9e57-45e2e0b35249-kube-api-access-8hq8f\") pod \"service-ca-operator-b865698dc-sxsxt\" (UID: \"661b8957-a890-4032-9e57-45e2e0b35249\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt" Mar 19 11:53:37.629284 master-0 kubenswrapper[3958]: I0319 11:53:37.626995 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwfg5\" (UniqueName: \"kubernetes.io/projected/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-kube-api-access-hwfg5\") pod \"olm-operator-5c9796789-8cldl\" (UID: \"87a3f546-e1c1-42a1-b80e-d45b6d5c0a04\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl" Mar 19 11:53:37.629284 master-0 kubenswrapper[3958]: I0319 11:53:37.627145 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bst2w\" (UniqueName: \"kubernetes.io/projected/63c12a89-1b49-4eba-8f5a-551b10d2246b-kube-api-access-bst2w\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:37.629284 master-0 kubenswrapper[3958]: I0319 11:53:37.627187 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6wm6\" (UniqueName: \"kubernetes.io/projected/d3017b5e-178e-49de-89d2-817a18398203-kube-api-access-b6wm6\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" Mar 19 11:53:37.629284 master-0 kubenswrapper[3958]: I0319 11:53:37.627291 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:37.629284 master-0 kubenswrapper[3958]: I0319 11:53:37.627329 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2151eb84-177e-459c-be71-f48465323ac2-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-fjn4b\" (UID: \"2151eb84-177e-459c-be71-f48465323ac2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fjn4b" Mar 19 11:53:37.629284 master-0 kubenswrapper[3958]: I0319 11:53:37.627371 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khv2z\" (UniqueName: \"kubernetes.io/projected/a7747954-a222-4809-8656-818203b55ee8-kube-api-access-khv2z\") pod \"csi-snapshot-controller-operator-5f5d689c6b-2chdm\" (UID: \"a7747954-a222-4809-8656-818203b55ee8\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-2chdm" Mar 19 11:53:37.629284 master-0 kubenswrapper[3958]: I0319 11:53:37.627404 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/1089ea24-add9-482e-9276-e6ded12052d7-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-qv4cg\" (UID: \"1089ea24-add9-482e-9276-e6ded12052d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg" Mar 19 11:53:37.629284 master-0 kubenswrapper[3958]: I0319 11:53:37.627456 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert\") pod \"olm-operator-5c9796789-8cldl\" (UID: \"87a3f546-e1c1-42a1-b80e-d45b6d5c0a04\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl" Mar 19 11:53:37.629284 master-0 kubenswrapper[3958]: I0319 11:53:37.627506 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-pr7gk\" (UID: \"b0f5939c-48b1-4d6c-9712-9128a78d603b\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 11:53:37.629284 master-0 kubenswrapper[3958]: I0319 11:53:37.627610 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06df1b1b-154e-46f9-aee0-79a137c6c928-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-ptqdh\" (UID: \"06df1b1b-154e-46f9-aee0-79a137c6c928\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-ptqdh" Mar 19 11:53:37.629284 master-0 kubenswrapper[3958]: I0319 11:53:37.627655 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/0f97d998-530c-4d9d-a030-ca1d9d2d4490-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-6wzws\" (UID: \"0f97d998-530c-4d9d-a030-ca1d9d2d4490\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-6wzws" Mar 19 11:53:37.629284 master-0 kubenswrapper[3958]: I0319 11:53:37.627687 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19de6601-10d4-4112-a21f-0398d2b160d1-config\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:37.629284 master-0 kubenswrapper[3958]: I0319 11:53:37.627717 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls\") pod \"dns-operator-9c5679d8f-z6kvm\" (UID: \"ab54833d-e57b-479d-b171-68155f6566f1\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-z6kvm" Mar 19 11:53:37.630517 master-0 kubenswrapper[3958]: I0319 11:53:37.627749 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/7241bf11-192e-47db-9d80-2324938ed34c-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-92c5d\" (UID: \"7241bf11-192e-47db-9d80-2324938ed34c\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d" Mar 19 11:53:37.630517 master-0 kubenswrapper[3958]: 
I0319 11:53:37.627932 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw" Mar 19 11:53:37.630517 master-0 kubenswrapper[3958]: I0319 11:53:37.628064 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npc2t\" (UniqueName: \"kubernetes.io/projected/c2dbd8b3-0e02-4747-a166-80aa6a94b060-kube-api-access-npc2t\") pod \"cluster-olm-operator-67dcd4998-cg9pq\" (UID: \"c2dbd8b3-0e02-4747-a166-80aa6a94b060\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cg9pq" Mar 19 11:53:37.630517 master-0 kubenswrapper[3958]: I0319 11:53:37.628139 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:37.630517 master-0 kubenswrapper[3958]: I0319 11:53:37.628178 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5mkm\" (UniqueName: \"kubernetes.io/projected/7241bf11-192e-47db-9d80-2324938ed34c-kube-api-access-s5mkm\") pod \"cluster-monitoring-operator-58845fbb57-92c5d\" (UID: \"7241bf11-192e-47db-9d80-2324938ed34c\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d" Mar 19 11:53:37.630517 master-0 kubenswrapper[3958]: I0319 11:53:37.628213 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3017b5e-178e-49de-89d2-817a18398203-config\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" Mar 19 11:53:37.630517 master-0 kubenswrapper[3958]: I0319 11:53:37.628286 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:37.630517 master-0 kubenswrapper[3958]: I0319 11:53:37.628354 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f08c5930-44f0-48e4-80dd-2563f2733b2f-config\") pod \"openshift-apiserver-operator-d65958b8-mjs7x\" (UID: \"f08c5930-44f0-48e4-80dd-2563f2733b2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x" Mar 19 11:53:37.630517 master-0 kubenswrapper[3958]: I0319 11:53:37.628424 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls\") pod 
\"cluster-monitoring-operator-58845fbb57-92c5d\" (UID: \"7241bf11-192e-47db-9d80-2324938ed34c\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d" Mar 19 11:53:37.630517 master-0 kubenswrapper[3958]: I0319 11:53:37.628490 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zntzt\" (UniqueName: \"kubernetes.io/projected/0f97d998-530c-4d9d-a030-ca1d9d2d4490-kube-api-access-zntzt\") pod \"cluster-storage-operator-7d87854d6-6wzws\" (UID: \"0f97d998-530c-4d9d-a030-ca1d9d2d4490\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-6wzws" Mar 19 11:53:37.630517 master-0 kubenswrapper[3958]: I0319 11:53:37.628546 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h84l9\" (UniqueName: \"kubernetes.io/projected/f08c5930-44f0-48e4-80dd-2563f2733b2f-kube-api-access-h84l9\") pod \"openshift-apiserver-operator-d65958b8-mjs7x\" (UID: \"f08c5930-44f0-48e4-80dd-2563f2733b2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x" Mar 19 11:53:37.630517 master-0 kubenswrapper[3958]: I0319 11:53:37.628580 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3017b5e-178e-49de-89d2-817a18398203-serving-cert\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" Mar 19 11:53:37.630517 master-0 kubenswrapper[3958]: I0319 11:53:37.628612 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/63c12a89-1b49-4eba-8f5a-551b10d2246b-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:37.630517 master-0 kubenswrapper[3958]: I0319 11:53:37.628627 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 19 11:53:37.630517 master-0 kubenswrapper[3958]: I0319 11:53:37.628637 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 19 11:53:37.630517 master-0 kubenswrapper[3958]: I0319 11:53:37.628643 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mr6d\" (UniqueName: \"kubernetes.io/projected/beb562de-402b-4d9f-b5ed-090b60847a95-kube-api-access-9mr6d\") pod \"package-server-manager-7b95f86987-6j2nj\" (UID: \"beb562de-402b-4d9f-b5ed-090b60847a95\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj" Mar 19 11:53:37.631102 master-0 kubenswrapper[3958]: I0319 11:53:37.628708 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2151eb84-177e-459c-be71-f48465323ac2-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-fjn4b\" (UID: \"2151eb84-177e-459c-be71-f48465323ac2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fjn4b" Mar 19 11:53:37.631102 master-0 kubenswrapper[3958]: I0319 11:53:37.628738 3958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2151eb84-177e-459c-be71-f48465323ac2-config\") pod \"kube-controller-manager-operator-ff989d6cc-fjn4b\" (UID: \"2151eb84-177e-459c-be71-f48465323ac2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fjn4b" Mar 19 11:53:37.631102 master-0 kubenswrapper[3958]: I0319 11:53:37.628827 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f08c5930-44f0-48e4-80dd-2563f2733b2f-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-mjs7x\" (UID: \"f08c5930-44f0-48e4-80dd-2563f2733b2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x" Mar 19 11:53:37.631102 master-0 kubenswrapper[3958]: I0319 11:53:37.628876 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1089ea24-add9-482e-9276-e6ded12052d7-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-qv4cg\" (UID: \"1089ea24-add9-482e-9276-e6ded12052d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg" Mar 19 11:53:37.631102 master-0 kubenswrapper[3958]: I0319 11:53:37.628900 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3017b5e-178e-49de-89d2-817a18398203-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" Mar 19 11:53:37.631102 master-0 kubenswrapper[3958]: I0319 11:53:37.628917 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1089ea24-add9-482e-9276-e6ded12052d7-config\") pod \"kube-apiserver-operator-8b68b9d9b-qv4cg\" (UID: \"1089ea24-add9-482e-9276-e6ded12052d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg" Mar 19 11:53:37.631102 master-0 kubenswrapper[3958]: I0319 11:53:37.628967 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw" Mar 19 11:53:37.631102 master-0 kubenswrapper[3958]: I0319 11:53:37.629001 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/661b8957-a890-4032-9e57-45e2e0b35249-serving-cert\") pod \"service-ca-operator-b865698dc-sxsxt\" (UID: \"661b8957-a890-4032-9e57-45e2e0b35249\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt" Mar 19 11:53:37.631102 master-0 kubenswrapper[3958]: I0319 11:53:37.629047 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06df1b1b-154e-46f9-aee0-79a137c6c928-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-ptqdh\" (UID: \"06df1b1b-154e-46f9-aee0-79a137c6c928\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-ptqdh" Mar 19 11:53:37.631102 master-0 kubenswrapper[3958]: I0319 11:53:37.629070 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06df1b1b-154e-46f9-aee0-79a137c6c928-config\") pod \"openshift-kube-scheduler-operator-dddff6458-ptqdh\" (UID: \"06df1b1b-154e-46f9-aee0-79a137c6c928\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-ptqdh" Mar 19 11:53:37.631102 master-0 kubenswrapper[3958]: I0319 11:53:37.629085 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl6d7\" (UniqueName: \"kubernetes.io/projected/ab54833d-e57b-479d-b171-68155f6566f1-kube-api-access-gl6d7\") pod \"dns-operator-9c5679d8f-z6kvm\" (UID: \"ab54833d-e57b-479d-b171-68155f6566f1\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-z6kvm" Mar 19 11:53:37.631102 master-0 kubenswrapper[3958]: I0319 11:53:37.630030 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 19 11:53:37.729726 master-0 kubenswrapper[3958]: I0319 11:53:37.729691 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f08c5930-44f0-48e4-80dd-2563f2733b2f-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-mjs7x\" (UID: \"f08c5930-44f0-48e4-80dd-2563f2733b2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x" Mar 19 11:53:37.730156 master-0 kubenswrapper[3958]: I0319 11:53:37.729725 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-fz8cg\" (UID: \"806a4c30-7b93-4430-86da-f9e1f4f2d206\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg" Mar 19 11:53:37.730156 master-0 kubenswrapper[3958]: I0319 11:53:37.729748 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6" Mar 19 11:53:37.730156 master-0 kubenswrapper[3958]: I0319 11:53:37.729813 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs4jf\" (UniqueName: \"kubernetes.io/projected/b80027fd-7b39-477a-a337-ff9bb08e7eeb-kube-api-access-hs4jf\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" Mar 19 11:53:37.730156 master-0 kubenswrapper[3958]: I0319 11:53:37.729851 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1089ea24-add9-482e-9276-e6ded12052d7-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-qv4cg\" (UID: \"1089ea24-add9-482e-9276-e6ded12052d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg" Mar 19 11:53:37.730156 master-0 kubenswrapper[3958]: I0319 11:53:37.729883 3958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1089ea24-add9-482e-9276-e6ded12052d7-config\") pod \"kube-apiserver-operator-8b68b9d9b-qv4cg\" (UID: \"1089ea24-add9-482e-9276-e6ded12052d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg" Mar 19 11:53:37.730156 master-0 kubenswrapper[3958]: I0319 11:53:37.730027 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6" Mar 19 11:53:37.730156 master-0 kubenswrapper[3958]: I0319 11:53:37.730062 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpdts\" (UniqueName: \"kubernetes.io/projected/9702fc8c-4fe0-413b-b2d4-db23021d42b8-kube-api-access-tpdts\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" Mar 19 11:53:37.730156 master-0 kubenswrapper[3958]: I0319 11:53:37.730082 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b80027fd-7b39-477a-a337-ff9bb08e7eeb-bound-sa-token\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" Mar 19 11:53:37.730156 master-0 kubenswrapper[3958]: I0319 11:53:37.730116 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3017b5e-178e-49de-89d2-817a18398203-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" Mar 19 11:53:37.730156 master-0 kubenswrapper[3958]: I0319 11:53:37.730135 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw" Mar 19 11:53:37.730156 master-0 kubenswrapper[3958]: I0319 11:53:37.730152 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert\") pod \"catalog-operator-68f85b4d6c-2trz4\" (UID: \"bdcdb23d-ef1f-45e2-b9ac-7abf707637b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4" Mar 19 11:53:37.732050 master-0 kubenswrapper[3958]: I0319 11:53:37.730187 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06df1b1b-154e-46f9-aee0-79a137c6c928-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-ptqdh\" (UID: \"06df1b1b-154e-46f9-aee0-79a137c6c928\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-ptqdh" Mar 19 11:53:37.732050 master-0 kubenswrapper[3958]: I0319 11:53:37.730203 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9702fc8c-4fe0-413b-b2d4-db23021d42b8-serving-cert\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" Mar 19 11:53:37.732050 master-0 kubenswrapper[3958]: I0319 11:53:37.730220 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/661b8957-a890-4032-9e57-45e2e0b35249-serving-cert\") pod \"service-ca-operator-b865698dc-sxsxt\" (UID: \"661b8957-a890-4032-9e57-45e2e0b35249\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt" Mar 19 11:53:37.732050 master-0 kubenswrapper[3958]: I0319 11:53:37.730243 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5bmd\" (UniqueName: \"kubernetes.io/projected/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-kube-api-access-c5bmd\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6" Mar 19 11:53:37.732050 master-0 kubenswrapper[3958]: I0319 11:53:37.730260 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9702fc8c-4fe0-413b-b2d4-db23021d42b8-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" Mar 19 11:53:37.732050 master-0 kubenswrapper[3958]: I0319 11:53:37.730280 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06df1b1b-154e-46f9-aee0-79a137c6c928-config\") pod \"openshift-kube-scheduler-operator-dddff6458-ptqdh\" (UID: \"06df1b1b-154e-46f9-aee0-79a137c6c928\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-ptqdh" Mar 19 11:53:37.732050 master-0 kubenswrapper[3958]: I0319 11:53:37.730299 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl6d7\" (UniqueName: \"kubernetes.io/projected/ab54833d-e57b-479d-b171-68155f6566f1-kube-api-access-gl6d7\") pod \"dns-operator-9c5679d8f-z6kvm\" (UID: \"ab54833d-e57b-479d-b171-68155f6566f1\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-z6kvm" Mar 19 11:53:37.732050 master-0 kubenswrapper[3958]: I0319 11:53:37.730319 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3017b5e-178e-49de-89d2-817a18398203-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" Mar 19 11:53:37.732050 master-0 kubenswrapper[3958]: I0319 11:53:37.730337 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfl29\" (UniqueName: \"kubernetes.io/projected/806a4c30-7b93-4430-86da-f9e1f4f2d206-kube-api-access-dfl29\") pod 
\"multus-admission-controller-5dbbb8b86f-fz8cg\" (UID: \"806a4c30-7b93-4430-86da-f9e1f4f2d206\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg" Mar 19 11:53:37.732050 master-0 kubenswrapper[3958]: I0319 11:53:37.730353 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b80027fd-7b39-477a-a337-ff9bb08e7eeb-trusted-ca\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" Mar 19 11:53:37.732050 master-0 kubenswrapper[3958]: I0319 11:53:37.730372 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/c2dbd8b3-0e02-4747-a166-80aa6a94b060-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-cg9pq\" (UID: \"c2dbd8b3-0e02-4747-a166-80aa6a94b060\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cg9pq" Mar 19 11:53:37.732050 master-0 kubenswrapper[3958]: I0319 11:53:37.730390 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk\" (UID: \"d9ab6ec4-eec9-4d27-8b43-2aaf954f098f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk" Mar 19 11:53:37.732050 master-0 kubenswrapper[3958]: I0319 11:53:37.730407 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5n89\" (UniqueName: \"kubernetes.io/projected/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f-kube-api-access-h5n89\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk\" (UID: \"d9ab6ec4-eec9-4d27-8b43-2aaf954f098f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk" Mar 19 11:53:37.732050 master-0 kubenswrapper[3958]: I0319 11:53:37.730421 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/661b8957-a890-4032-9e57-45e2e0b35249-config\") pod \"service-ca-operator-b865698dc-sxsxt\" (UID: \"661b8957-a890-4032-9e57-45e2e0b35249\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt" Mar 19 11:53:37.732050 master-0 kubenswrapper[3958]: I0319 11:53:37.730435 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-images\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw" Mar 19 11:53:37.733453 master-0 kubenswrapper[3958]: I0319 11:53:37.730456 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:37.733453 master-0 kubenswrapper[3958]: I0319 11:53:37.730474 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/c2dbd8b3-0e02-4747-a166-80aa6a94b060-operand-assets\") pod \"cluster-olm-operator-67dcd4998-cg9pq\" (UID: \"c2dbd8b3-0e02-4747-a166-80aa6a94b060\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cg9pq" Mar 19 11:53:37.733453 master-0 kubenswrapper[3958]: I0319 11:53:37.730489 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xpc2\" (UniqueName: \"kubernetes.io/projected/19de6601-10d4-4112-a21f-0398d2b160d1-kube-api-access-6xpc2\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:37.733453 master-0 kubenswrapper[3958]: I0319 11:53:37.730508 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x252z\" (UniqueName: \"kubernetes.io/projected/aef8e03f-0363-4e13-b7ca-4fa871d77c62-kube-api-access-x252z\") pod \"openshift-config-operator-95bf4f4d-nhvl4\" (UID: \"aef8e03f-0363-4e13-b7ca-4fa871d77c62\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" Mar 19 11:53:37.733453 master-0 kubenswrapper[3958]: I0319 11:53:37.730527 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tqdb\" (UniqueName: \"kubernetes.io/projected/b0f5939c-48b1-4d6c-9712-9128a78d603b-kube-api-access-6tqdb\") pod \"marketplace-operator-89ccd998f-pr7gk\" (UID: \"b0f5939c-48b1-4d6c-9712-9128a78d603b\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 11:53:37.733453 master-0 kubenswrapper[3958]: I0319 11:53:37.730545 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk\" (UID: \"d9ab6ec4-eec9-4d27-8b43-2aaf954f098f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk" Mar 19 11:53:37.733453 master-0 kubenswrapper[3958]: I0319 11:53:37.730560 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-pr7gk\" (UID: \"b0f5939c-48b1-4d6c-9712-9128a78d603b\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 11:53:37.733453 master-0 kubenswrapper[3958]: I0319 11:53:37.730575 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/19de6601-10d4-4112-a21f-0398d2b160d1-images\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:37.733453 master-0 kubenswrapper[3958]: I0319 11:53:37.730592 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv6bc\" (UniqueName: \"kubernetes.io/projected/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-kube-api-access-pv6bc\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw" Mar 19 11:53:37.733453 master-0 kubenswrapper[3958]: I0319 11:53:37.730608 
3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/aef8e03f-0363-4e13-b7ca-4fa871d77c62-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-nhvl4\" (UID: \"aef8e03f-0363-4e13-b7ca-4fa871d77c62\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" Mar 19 11:53:37.733453 master-0 kubenswrapper[3958]: I0319 11:53:37.730626 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwfg5\" (UniqueName: \"kubernetes.io/projected/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-kube-api-access-hwfg5\") pod \"olm-operator-5c9796789-8cldl\" (UID: \"87a3f546-e1c1-42a1-b80e-d45b6d5c0a04\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl" Mar 19 11:53:37.733453 master-0 kubenswrapper[3958]: E0319 11:53:37.730687 3958 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 19 11:53:37.733453 master-0 kubenswrapper[3958]: E0319 11:53:37.730726 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert podName:63c12a89-1b49-4eba-8f5a-551b10d2246b nodeName:}" failed. No retries permitted until 2026-03-19 11:53:38.230712888 +0000 UTC m=+148.904434070 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-zfsqt" (UID: "63c12a89-1b49-4eba-8f5a-551b10d2246b") : secret "performance-addon-operator-webhook-cert" not found Mar 19 11:53:37.733453 master-0 kubenswrapper[3958]: E0319 11:53:37.731140 3958 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 19 11:53:37.733453 master-0 kubenswrapper[3958]: I0319 11:53:37.731157 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3017b5e-178e-49de-89d2-817a18398203-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" Mar 19 11:53:37.734009 master-0 kubenswrapper[3958]: E0319 11:53:37.731256 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics podName:b0f5939c-48b1-4d6c-9712-9128a78d603b nodeName:}" failed. No retries permitted until 2026-03-19 11:53:38.231220374 +0000 UTC m=+148.904941736 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-pr7gk" (UID: "b0f5939c-48b1-4d6c-9712-9128a78d603b") : secret "marketplace-operator-metrics" not found Mar 19 11:53:37.734009 master-0 kubenswrapper[3958]: I0319 11:53:37.731411 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/c2dbd8b3-0e02-4747-a166-80aa6a94b060-operand-assets\") pod \"cluster-olm-operator-67dcd4998-cg9pq\" (UID: \"c2dbd8b3-0e02-4747-a166-80aa6a94b060\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cg9pq" Mar 19 11:53:37.734009 master-0 kubenswrapper[3958]: I0319 11:53:37.731624 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-images\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw" Mar 19 11:53:37.734009 master-0 kubenswrapper[3958]: I0319 11:53:37.731847 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-6j2nj\" (UID: \"beb562de-402b-4d9f-b5ed-090b60847a95\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj" Mar 19 11:53:37.734009 master-0 kubenswrapper[3958]: I0319 11:53:37.731897 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hq8f\" (UniqueName: \"kubernetes.io/projected/661b8957-a890-4032-9e57-45e2e0b35249-kube-api-access-8hq8f\") pod \"service-ca-operator-b865698dc-sxsxt\" (UID: \"661b8957-a890-4032-9e57-45e2e0b35249\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt" Mar 19 11:53:37.734009 master-0 kubenswrapper[3958]: I0319 11:53:37.731918 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bst2w\" (UniqueName: \"kubernetes.io/projected/63c12a89-1b49-4eba-8f5a-551b10d2246b-kube-api-access-bst2w\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:37.734009 master-0 kubenswrapper[3958]: I0319 11:53:37.731935 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6wm6\" (UniqueName: \"kubernetes.io/projected/d3017b5e-178e-49de-89d2-817a18398203-kube-api-access-b6wm6\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" Mar 19 11:53:37.734009 master-0 kubenswrapper[3958]: I0319 11:53:37.731952 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:37.734009 master-0 
kubenswrapper[3958]: I0319 11:53:37.732756 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1089ea24-add9-482e-9276-e6ded12052d7-config\") pod \"kube-apiserver-operator-8b68b9d9b-qv4cg\" (UID: \"1089ea24-add9-482e-9276-e6ded12052d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg" Mar 19 11:53:37.734009 master-0 kubenswrapper[3958]: I0319 11:53:37.733028 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw" Mar 19 11:53:37.734009 master-0 kubenswrapper[3958]: I0319 11:53:37.733322 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk\" (UID: \"d9ab6ec4-eec9-4d27-8b43-2aaf954f098f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk" Mar 19 11:53:37.734009 master-0 kubenswrapper[3958]: I0319 11:53:37.733525 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3017b5e-178e-49de-89d2-817a18398203-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" Mar 19 11:53:37.734009 master-0 kubenswrapper[3958]: I0319 11:53:37.733667 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/19de6601-10d4-4112-a21f-0398d2b160d1-images\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:37.734455 master-0 kubenswrapper[3958]: I0319 11:53:37.734167 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06df1b1b-154e-46f9-aee0-79a137c6c928-config\") pod \"openshift-kube-scheduler-operator-dddff6458-ptqdh\" (UID: \"06df1b1b-154e-46f9-aee0-79a137c6c928\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-ptqdh" Mar 19 11:53:37.734455 master-0 kubenswrapper[3958]: E0319 11:53:37.734286 3958 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 19 11:53:37.734455 master-0 kubenswrapper[3958]: E0319 11:53:37.734334 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls podName:19de6601-10d4-4112-a21f-0398d2b160d1 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:38.234317111 +0000 UTC m=+148.908038313 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-ftml6" (UID: "19de6601-10d4-4112-a21f-0398d2b160d1") : secret "cluster-baremetal-operator-tls" not found Mar 19 11:53:37.734455 master-0 kubenswrapper[3958]: I0319 11:53:37.734329 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/661b8957-a890-4032-9e57-45e2e0b35249-config\") pod \"service-ca-operator-b865698dc-sxsxt\" (UID: \"661b8957-a890-4032-9e57-45e2e0b35249\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt" Mar 19 11:53:37.735698 master-0 kubenswrapper[3958]: I0319 11:53:37.734031 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2151eb84-177e-459c-be71-f48465323ac2-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-fjn4b\" (UID: \"2151eb84-177e-459c-be71-f48465323ac2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fjn4b" Mar 19 11:53:37.735698 master-0 kubenswrapper[3958]: I0319 11:53:37.734707 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9702fc8c-4fe0-413b-b2d4-db23021d42b8-etcd-ca\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" Mar 19 11:53:37.735698 master-0 kubenswrapper[3958]: I0319 11:53:37.734750 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khv2z\" (UniqueName: \"kubernetes.io/projected/a7747954-a222-4809-8656-818203b55ee8-kube-api-access-khv2z\") pod \"csi-snapshot-controller-operator-5f5d689c6b-2chdm\" (UID: \"a7747954-a222-4809-8656-818203b55ee8\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-2chdm" Mar 19 11:53:37.735698 master-0 kubenswrapper[3958]: I0319 11:53:37.734844 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1089ea24-add9-482e-9276-e6ded12052d7-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-qv4cg\" (UID: \"1089ea24-add9-482e-9276-e6ded12052d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg" Mar 19 11:53:37.735698 master-0 kubenswrapper[3958]: I0319 11:53:37.734927 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert\") pod \"olm-operator-5c9796789-8cldl\" (UID: \"87a3f546-e1c1-42a1-b80e-d45b6d5c0a04\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl" Mar 19 11:53:37.735698 master-0 kubenswrapper[3958]: I0319 11:53:37.735092 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ed2dbd1-aec4-4009-917a-933533912ab5-config\") pod \"openshift-controller-manager-operator-8c94f4649-gx4w8\" (UID: \"9ed2dbd1-aec4-4009-917a-933533912ab5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8" Mar 19 11:53:37.736296 master-0 kubenswrapper[3958]: I0319 11:53:37.736258 3958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-pr7gk\" (UID: \"b0f5939c-48b1-4d6c-9712-9128a78d603b\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 11:53:37.736667 master-0 kubenswrapper[3958]: I0319 11:53:37.736378 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06df1b1b-154e-46f9-aee0-79a137c6c928-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-ptqdh\" (UID: \"06df1b1b-154e-46f9-aee0-79a137c6c928\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-ptqdh" Mar 19 11:53:37.736667 master-0 kubenswrapper[3958]: E0319 11:53:37.735120 3958 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 19 11:53:37.736667 master-0 kubenswrapper[3958]: E0319 11:53:37.736444 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert podName:beb562de-402b-4d9f-b5ed-090b60847a95 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:38.236428377 +0000 UTC m=+148.910149569 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-6j2nj" (UID: "beb562de-402b-4d9f-b5ed-090b60847a95") : secret "package-server-manager-serving-cert" not found Mar 19 11:53:37.736667 master-0 kubenswrapper[3958]: E0319 11:53:37.735534 3958 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 19 11:53:37.736667 master-0 kubenswrapper[3958]: E0319 11:53:37.736484 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert podName:87a3f546-e1c1-42a1-b80e-d45b6d5c0a04 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:38.236473868 +0000 UTC m=+148.910195060 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert") pod "olm-operator-5c9796789-8cldl" (UID: "87a3f546-e1c1-42a1-b80e-d45b6d5c0a04") : secret "olm-operator-serving-cert" not found Mar 19 11:53:37.736667 master-0 kubenswrapper[3958]: I0319 11:53:37.736577 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnd9c\" (UniqueName: \"kubernetes.io/projected/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-kube-api-access-jnd9c\") pod \"catalog-operator-68f85b4d6c-2trz4\" (UID: \"bdcdb23d-ef1f-45e2-b9ac-7abf707637b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4" Mar 19 11:53:37.736667 master-0 kubenswrapper[3958]: I0319 11:53:37.736610 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsk9d\" (UniqueName: \"kubernetes.io/projected/9ed2dbd1-aec4-4009-917a-933533912ab5-kube-api-access-gsk9d\") pod \"openshift-controller-manager-operator-8c94f4649-gx4w8\" (UID: \"9ed2dbd1-aec4-4009-917a-933533912ab5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8" Mar 19 11:53:37.736667 master-0 kubenswrapper[3958]: I0319 11:53:37.736642 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/0f97d998-530c-4d9d-a030-ca1d9d2d4490-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-6wzws\" (UID: \"0f97d998-530c-4d9d-a030-ca1d9d2d4490\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-6wzws" Mar 19 11:53:37.736667 master-0 kubenswrapper[3958]: I0319 11:53:37.736666 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19de6601-10d4-4112-a21f-0398d2b160d1-config\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:37.738425 master-0 kubenswrapper[3958]: I0319 11:53:37.736689 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls\") pod \"dns-operator-9c5679d8f-z6kvm\" (UID: \"ab54833d-e57b-479d-b171-68155f6566f1\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-z6kvm" Mar 19 11:53:37.738425 master-0 kubenswrapper[3958]: I0319 11:53:37.736715 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/7241bf11-192e-47db-9d80-2324938ed34c-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-92c5d\" (UID: \"7241bf11-192e-47db-9d80-2324938ed34c\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d" Mar 19 11:53:37.738425 master-0 kubenswrapper[3958]: I0319 11:53:37.736746 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9702fc8c-4fe0-413b-b2d4-db23021d42b8-etcd-client\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" Mar 19 11:53:37.738425 master-0 kubenswrapper[3958]: I0319 11:53:37.736769 3958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw" Mar 19 11:53:37.738425 master-0 kubenswrapper[3958]: E0319 11:53:37.736782 3958 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 19 11:53:37.738425 master-0 kubenswrapper[3958]: I0319 11:53:37.736791 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9702fc8c-4fe0-413b-b2d4-db23021d42b8-config\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" Mar 19 11:53:37.738425 master-0 kubenswrapper[3958]: E0319 11:53:37.736853 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls podName:ab54833d-e57b-479d-b171-68155f6566f1 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:38.236835669 +0000 UTC m=+148.910556861 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls") pod "dns-operator-9c5679d8f-z6kvm" (UID: "ab54833d-e57b-479d-b171-68155f6566f1") : secret "metrics-tls" not found Mar 19 11:53:37.738425 master-0 kubenswrapper[3958]: I0319 11:53:37.736903 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:37.738425 master-0 kubenswrapper[3958]: I0319 11:53:37.736946 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npc2t\" (UniqueName: \"kubernetes.io/projected/c2dbd8b3-0e02-4747-a166-80aa6a94b060-kube-api-access-npc2t\") pod \"cluster-olm-operator-67dcd4998-cg9pq\" (UID: \"c2dbd8b3-0e02-4747-a166-80aa6a94b060\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cg9pq" Mar 19 11:53:37.738425 master-0 kubenswrapper[3958]: I0319 11:53:37.736981 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3017b5e-178e-49de-89d2-817a18398203-config\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" Mar 19 11:53:37.738425 master-0 kubenswrapper[3958]: I0319 11:53:37.737207 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5mkm\" (UniqueName: \"kubernetes.io/projected/7241bf11-192e-47db-9d80-2324938ed34c-kube-api-access-s5mkm\") pod \"cluster-monitoring-operator-58845fbb57-92c5d\" (UID: \"7241bf11-192e-47db-9d80-2324938ed34c\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d" Mar 19 11:53:37.738425 master-0 kubenswrapper[3958]: I0319 11:53:37.737233 3958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aef8e03f-0363-4e13-b7ca-4fa871d77c62-serving-cert\") pod \"openshift-config-operator-95bf4f4d-nhvl4\" (UID: \"aef8e03f-0363-4e13-b7ca-4fa871d77c62\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" Mar 19 11:53:37.738425 master-0 kubenswrapper[3958]: I0319 11:53:37.737496 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19de6601-10d4-4112-a21f-0398d2b160d1-config\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:37.738425 master-0 kubenswrapper[3958]: I0319 11:53:37.737539 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-pr7gk\" (UID: \"b0f5939c-48b1-4d6c-9712-9128a78d603b\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 11:53:37.738425 master-0 kubenswrapper[3958]: I0319 11:53:37.738046 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/661b8957-a890-4032-9e57-45e2e0b35249-serving-cert\") pod \"service-ca-operator-b865698dc-sxsxt\" (UID: \"661b8957-a890-4032-9e57-45e2e0b35249\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt" Mar 19 11:53:37.738425 master-0 kubenswrapper[3958]: I0319 11:53:37.738103 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/7241bf11-192e-47db-9d80-2324938ed34c-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-92c5d\" (UID: \"7241bf11-192e-47db-9d80-2324938ed34c\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d" Mar 19 11:53:37.739035 master-0 kubenswrapper[3958]: I0319 11:53:37.738103 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/c2dbd8b3-0e02-4747-a166-80aa6a94b060-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-cg9pq\" (UID: \"c2dbd8b3-0e02-4747-a166-80aa6a94b060\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cg9pq" Mar 19 11:53:37.739035 master-0 kubenswrapper[3958]: I0319 11:53:37.738152 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06df1b1b-154e-46f9-aee0-79a137c6c928-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-ptqdh\" (UID: \"06df1b1b-154e-46f9-aee0-79a137c6c928\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-ptqdh" Mar 19 11:53:37.739035 master-0 kubenswrapper[3958]: I0319 11:53:37.738148 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk\" (UID: \"d9ab6ec4-eec9-4d27-8b43-2aaf954f098f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk" Mar 19 11:53:37.739035 master-0 kubenswrapper[3958]: I0319 11:53:37.738256 3958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:37.739035 master-0 kubenswrapper[3958]: I0319 11:53:37.738294 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f08c5930-44f0-48e4-80dd-2563f2733b2f-config\") pod \"openshift-apiserver-operator-d65958b8-mjs7x\" (UID: \"f08c5930-44f0-48e4-80dd-2563f2733b2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x" Mar 19 11:53:37.739035 master-0 kubenswrapper[3958]: I0319 11:53:37.738324 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-92c5d\" (UID: \"7241bf11-192e-47db-9d80-2324938ed34c\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d" Mar 19 11:53:37.739035 master-0 kubenswrapper[3958]: I0319 11:53:37.738353 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6" Mar 19 11:53:37.739035 master-0 kubenswrapper[3958]: E0319 11:53:37.738357 3958 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 19 11:53:37.739035 master-0 kubenswrapper[3958]: E0319 11:53:37.738413 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert podName:19de6601-10d4-4112-a21f-0398d2b160d1 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:38.238400149 +0000 UTC m=+148.912121331 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert") pod "cluster-baremetal-operator-6f69995874-ftml6" (UID: "19de6601-10d4-4112-a21f-0398d2b160d1") : secret "cluster-baremetal-webhook-server-cert" not found Mar 19 11:53:37.739035 master-0 kubenswrapper[3958]: E0319 11:53:37.738417 3958 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Mar 19 11:53:37.739035 master-0 kubenswrapper[3958]: E0319 11:53:37.738426 3958 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 19 11:53:37.739035 master-0 kubenswrapper[3958]: E0319 11:53:37.738470 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls podName:7241bf11-192e-47db-9d80-2324938ed34c nodeName:}" failed. No retries permitted until 2026-03-19 11:53:38.238458141 +0000 UTC m=+148.912179323 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-92c5d" (UID: "7241bf11-192e-47db-9d80-2324938ed34c") : secret "cluster-monitoring-operator-tls" not found Mar 19 11:53:37.739035 master-0 kubenswrapper[3958]: E0319 11:53:37.738491 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls podName:d3541cbe-3be0-40d3-89d2-b5937b6a8f47 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:38.238481222 +0000 UTC m=+148.912202404 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls") pod "machine-config-operator-84d549f6d5-lswqw" (UID: "d3541cbe-3be0-40d3-89d2-b5937b6a8f47") : secret "mco-proxy-tls" not found Mar 19 11:53:37.739035 master-0 kubenswrapper[3958]: E0319 11:53:37.738512 3958 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 19 11:53:37.739035 master-0 kubenswrapper[3958]: I0319 11:53:37.738540 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zntzt\" (UniqueName: \"kubernetes.io/projected/0f97d998-530c-4d9d-a030-ca1d9d2d4490-kube-api-access-zntzt\") pod \"cluster-storage-operator-7d87854d6-6wzws\" (UID: \"0f97d998-530c-4d9d-a030-ca1d9d2d4490\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-6wzws" Mar 19 11:53:37.740428 master-0 kubenswrapper[3958]: E0319 11:53:37.738544 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls podName:63c12a89-1b49-4eba-8f5a-551b10d2246b nodeName:}" failed. No retries permitted until 2026-03-19 11:53:38.238533953 +0000 UTC m=+148.912255375 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-zfsqt" (UID: "63c12a89-1b49-4eba-8f5a-551b10d2246b") : secret "node-tuning-operator-tls" not found Mar 19 11:53:37.740428 master-0 kubenswrapper[3958]: I0319 11:53:37.738577 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h84l9\" (UniqueName: \"kubernetes.io/projected/f08c5930-44f0-48e4-80dd-2563f2733b2f-kube-api-access-h84l9\") pod \"openshift-apiserver-operator-d65958b8-mjs7x\" (UID: \"f08c5930-44f0-48e4-80dd-2563f2733b2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x" Mar 19 11:53:37.740428 master-0 kubenswrapper[3958]: I0319 11:53:37.738599 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/63c12a89-1b49-4eba-8f5a-551b10d2246b-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:37.740428 master-0 kubenswrapper[3958]: I0319 11:53:37.738622 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3017b5e-178e-49de-89d2-817a18398203-serving-cert\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" Mar 19 11:53:37.740428 master-0 kubenswrapper[3958]: I0319 11:53:37.738934 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f08c5930-44f0-48e4-80dd-2563f2733b2f-config\") pod \"openshift-apiserver-operator-d65958b8-mjs7x\" (UID: \"f08c5930-44f0-48e4-80dd-2563f2733b2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x" Mar 19 11:53:37.740428 master-0 kubenswrapper[3958]: I0319 11:53:37.739221 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1089ea24-add9-482e-9276-e6ded12052d7-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-qv4cg\" (UID: \"1089ea24-add9-482e-9276-e6ded12052d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg" Mar 19 11:53:37.740428 master-0 kubenswrapper[3958]: I0319 11:53:37.739311 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3017b5e-178e-49de-89d2-817a18398203-config\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" Mar 19 11:53:37.740428 master-0 kubenswrapper[3958]: I0319 11:53:37.739336 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" Mar 19 11:53:37.740428 master-0 kubenswrapper[3958]: I0319 11:53:37.739395 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/2151eb84-177e-459c-be71-f48465323ac2-config\") pod \"kube-controller-manager-operator-ff989d6cc-fjn4b\" (UID: \"2151eb84-177e-459c-be71-f48465323ac2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fjn4b" Mar 19 11:53:37.740428 master-0 kubenswrapper[3958]: I0319 11:53:37.739686 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/0f97d998-530c-4d9d-a030-ca1d9d2d4490-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-6wzws\" (UID: \"0f97d998-530c-4d9d-a030-ca1d9d2d4490\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-6wzws" Mar 19 11:53:37.740428 master-0 kubenswrapper[3958]: I0319 11:53:37.739741 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mr6d\" (UniqueName: \"kubernetes.io/projected/beb562de-402b-4d9f-b5ed-090b60847a95-kube-api-access-9mr6d\") pod \"package-server-manager-7b95f86987-6j2nj\" (UID: \"beb562de-402b-4d9f-b5ed-090b60847a95\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj" Mar 19 11:53:37.740428 master-0 kubenswrapper[3958]: I0319 11:53:37.739811 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ed2dbd1-aec4-4009-917a-933533912ab5-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-gx4w8\" (UID: \"9ed2dbd1-aec4-4009-917a-933533912ab5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8" Mar 19 11:53:37.740428 master-0 kubenswrapper[3958]: I0319 11:53:37.739849 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2151eb84-177e-459c-be71-f48465323ac2-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-fjn4b\" (UID: \"2151eb84-177e-459c-be71-f48465323ac2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fjn4b" Mar 19 11:53:37.740863 master-0 kubenswrapper[3958]: I0319 11:53:37.740540 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2151eb84-177e-459c-be71-f48465323ac2-config\") pod \"kube-controller-manager-operator-ff989d6cc-fjn4b\" (UID: \"2151eb84-177e-459c-be71-f48465323ac2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fjn4b" Mar 19 11:53:37.741736 master-0 kubenswrapper[3958]: I0319 11:53:37.741703 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/63c12a89-1b49-4eba-8f5a-551b10d2246b-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:37.741861 master-0 kubenswrapper[3958]: I0319 11:53:37.741816 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f08c5930-44f0-48e4-80dd-2563f2733b2f-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-mjs7x\" (UID: \"f08c5930-44f0-48e4-80dd-2563f2733b2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x" Mar 19 
11:53:37.742356 master-0 kubenswrapper[3958]: I0319 11:53:37.742328 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2151eb84-177e-459c-be71-f48465323ac2-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-fjn4b\" (UID: \"2151eb84-177e-459c-be71-f48465323ac2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fjn4b" Mar 19 11:53:37.742457 master-0 kubenswrapper[3958]: I0319 11:53:37.742433 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3017b5e-178e-49de-89d2-817a18398203-serving-cert\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" Mar 19 11:53:37.811216 master-0 kubenswrapper[3958]: I0319 11:53:37.754847 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl"] Mar 19 11:53:37.811216 master-0 kubenswrapper[3958]: I0319 11:53:37.801850 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt"] Mar 19 11:53:37.811216 master-0 kubenswrapper[3958]: I0319 11:53:37.804156 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-9c5679d8f-z6kvm"] Mar 19 11:53:37.811216 master-0 kubenswrapper[3958]: I0319 11:53:37.804410 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cg9pq"] Mar 19 11:53:37.811216 master-0 kubenswrapper[3958]: I0319 11:53:37.806528 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt"] Mar 19 11:53:37.811216 master-0 kubenswrapper[3958]: I0319 11:53:37.806556 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk"] Mar 19 11:53:37.811216 master-0 kubenswrapper[3958]: I0319 11:53:37.806565 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fjn4b"] Mar 19 11:53:37.811216 master-0 kubenswrapper[3958]: I0319 11:53:37.807658 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg"] Mar 19 11:53:37.813392 master-0 kubenswrapper[3958]: I0319 11:53:37.813334 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq"] Mar 19 11:53:37.818339 master-0 kubenswrapper[3958]: I0319 11:53:37.817108 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz"] Mar 19 11:53:37.820123 master-0 kubenswrapper[3958]: I0319 11:53:37.820062 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x"] Mar 19 11:53:37.825105 master-0 kubenswrapper[3958]: I0319 11:53:37.824707 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw"] Mar 19 11:53:37.827246 master-0 kubenswrapper[3958]: I0319 11:53:37.827205 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-ingress-operator/ingress-operator-66b84d69b-btppx"] Mar 19 11:53:37.827683 master-0 kubenswrapper[3958]: I0319 11:53:37.827657 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d"] Mar 19 11:53:37.840515 master-0 kubenswrapper[3958]: I0319 11:53:37.840373 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-fz8cg\" (UID: \"806a4c30-7b93-4430-86da-f9e1f4f2d206\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg" Mar 19 11:53:37.840515 master-0 kubenswrapper[3958]: I0319 11:53:37.840405 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6" Mar 19 11:53:37.840515 master-0 kubenswrapper[3958]: I0319 11:53:37.840427 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs4jf\" (UniqueName: \"kubernetes.io/projected/b80027fd-7b39-477a-a337-ff9bb08e7eeb-kube-api-access-hs4jf\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" Mar 19 11:53:37.840515 master-0 kubenswrapper[3958]: I0319 11:53:37.840452 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpdts\" (UniqueName: \"kubernetes.io/projected/9702fc8c-4fe0-413b-b2d4-db23021d42b8-kube-api-access-tpdts\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" Mar 19 11:53:37.841092 master-0 kubenswrapper[3958]: E0319 11:53:37.841054 3958 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 19 11:53:37.841092 master-0 kubenswrapper[3958]: I0319 11:53:37.841071 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b80027fd-7b39-477a-a337-ff9bb08e7eeb-bound-sa-token\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" Mar 19 11:53:37.843385 master-0 kubenswrapper[3958]: I0319 11:53:37.841110 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6" Mar 19 11:53:37.843385 master-0 kubenswrapper[3958]: I0319 11:53:37.841142 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert\") pod \"catalog-operator-68f85b4d6c-2trz4\" (UID: \"bdcdb23d-ef1f-45e2-b9ac-7abf707637b6\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4" Mar 19 11:53:37.843385 master-0 kubenswrapper[3958]: E0319 11:53:37.841189 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs podName:806a4c30-7b93-4430-86da-f9e1f4f2d206 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:38.341155677 +0000 UTC m=+149.014876859 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-fz8cg" (UID: "806a4c30-7b93-4430-86da-f9e1f4f2d206") : secret "multus-admission-controller-secret" not found Mar 19 11:53:37.843385 master-0 kubenswrapper[3958]: I0319 11:53:37.841253 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9702fc8c-4fe0-413b-b2d4-db23021d42b8-serving-cert\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" Mar 19 11:53:37.843385 master-0 kubenswrapper[3958]: E0319 11:53:37.841267 3958 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 19 11:53:37.843385 master-0 kubenswrapper[3958]: E0319 11:53:37.841344 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls podName:82b98dca-59f9-42be-94ca-4a2a2b6fea0f nodeName:}" failed. No retries permitted until 2026-03-19 11:53:38.341326332 +0000 UTC m=+149.015047514 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-g6sn6" (UID: "82b98dca-59f9-42be-94ca-4a2a2b6fea0f") : secret "image-registry-operator-tls" not found Mar 19 11:53:37.843385 master-0 kubenswrapper[3958]: I0319 11:53:37.841282 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5bmd\" (UniqueName: \"kubernetes.io/projected/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-kube-api-access-c5bmd\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6" Mar 19 11:53:37.843385 master-0 kubenswrapper[3958]: I0319 11:53:37.841400 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9702fc8c-4fe0-413b-b2d4-db23021d42b8-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" Mar 19 11:53:37.843385 master-0 kubenswrapper[3958]: I0319 11:53:37.841439 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfl29\" (UniqueName: \"kubernetes.io/projected/806a4c30-7b93-4430-86da-f9e1f4f2d206-kube-api-access-dfl29\") pod \"multus-admission-controller-5dbbb8b86f-fz8cg\" (UID: \"806a4c30-7b93-4430-86da-f9e1f4f2d206\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg" Mar 19 11:53:37.843385 master-0 kubenswrapper[3958]: I0319 11:53:37.841475 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b80027fd-7b39-477a-a337-ff9bb08e7eeb-trusted-ca\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" Mar 19 11:53:37.843385 master-0 kubenswrapper[3958]: I0319 11:53:37.841576 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x252z\" (UniqueName: \"kubernetes.io/projected/aef8e03f-0363-4e13-b7ca-4fa871d77c62-kube-api-access-x252z\") pod \"openshift-config-operator-95bf4f4d-nhvl4\" (UID: \"aef8e03f-0363-4e13-b7ca-4fa871d77c62\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" Mar 19 11:53:37.843385 master-0 kubenswrapper[3958]: I0319 11:53:37.841635 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/aef8e03f-0363-4e13-b7ca-4fa871d77c62-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-nhvl4\" (UID: \"aef8e03f-0363-4e13-b7ca-4fa871d77c62\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" Mar 19 11:53:37.843385 master-0 kubenswrapper[3958]: I0319 11:53:37.841705 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9702fc8c-4fe0-413b-b2d4-db23021d42b8-etcd-ca\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" Mar 19 11:53:37.843385 master-0 kubenswrapper[3958]: I0319 11:53:37.842135 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ed2dbd1-aec4-4009-917a-933533912ab5-config\") pod \"openshift-controller-manager-operator-8c94f4649-gx4w8\" (UID: \"9ed2dbd1-aec4-4009-917a-933533912ab5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8" Mar 19 11:53:37.843385 master-0 kubenswrapper[3958]: I0319 11:53:37.842488 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9702fc8c-4fe0-413b-b2d4-db23021d42b8-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" Mar 19 11:53:37.843838 master-0 kubenswrapper[3958]: I0319 11:53:37.842562 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9702fc8c-4fe0-413b-b2d4-db23021d42b8-etcd-ca\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" Mar 19 11:53:37.843838 master-0 kubenswrapper[3958]: I0319 11:53:37.842718 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b80027fd-7b39-477a-a337-ff9bb08e7eeb-trusted-ca\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" Mar 19 11:53:37.843838 master-0 kubenswrapper[3958]: I0319 11:53:37.842827 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsk9d\" (UniqueName: \"kubernetes.io/projected/9ed2dbd1-aec4-4009-917a-933533912ab5-kube-api-access-gsk9d\") pod \"openshift-controller-manager-operator-8c94f4649-gx4w8\" (UID: \"9ed2dbd1-aec4-4009-917a-933533912ab5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8" Mar 19 11:53:37.843838 master-0 kubenswrapper[3958]: I0319 11:53:37.842862 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnd9c\" (UniqueName: \"kubernetes.io/projected/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-kube-api-access-jnd9c\") pod \"catalog-operator-68f85b4d6c-2trz4\" (UID: \"bdcdb23d-ef1f-45e2-b9ac-7abf707637b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4" Mar 19 11:53:37.843838 master-0 kubenswrapper[3958]: I0319 11:53:37.842901 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9702fc8c-4fe0-413b-b2d4-db23021d42b8-config\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" Mar 19 11:53:37.843838 master-0 kubenswrapper[3958]: I0319 11:53:37.842937 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9702fc8c-4fe0-413b-b2d4-db23021d42b8-etcd-client\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" Mar 19 11:53:37.843838 master-0 kubenswrapper[3958]: E0319 11:53:37.842940 3958 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 19 11:53:37.843838 
master-0 kubenswrapper[3958]: I0319 11:53:37.842985 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aef8e03f-0363-4e13-b7ca-4fa871d77c62-serving-cert\") pod \"openshift-config-operator-95bf4f4d-nhvl4\" (UID: \"aef8e03f-0363-4e13-b7ca-4fa871d77c62\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" Mar 19 11:53:37.843838 master-0 kubenswrapper[3958]: E0319 11:53:37.843257 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert podName:bdcdb23d-ef1f-45e2-b9ac-7abf707637b6 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:38.343003775 +0000 UTC m=+149.016724957 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert") pod "catalog-operator-68f85b4d6c-2trz4" (UID: "bdcdb23d-ef1f-45e2-b9ac-7abf707637b6") : secret "catalog-operator-serving-cert" not found Mar 19 11:53:37.843838 master-0 kubenswrapper[3958]: I0319 11:53:37.843317 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6" Mar 19 11:53:37.843838 master-0 kubenswrapper[3958]: I0319 11:53:37.843363 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" Mar 19 11:53:37.843838 master-0 kubenswrapper[3958]: I0319 11:53:37.843389 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ed2dbd1-aec4-4009-917a-933533912ab5-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-gx4w8\" (UID: \"9ed2dbd1-aec4-4009-917a-933533912ab5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8" Mar 19 11:53:37.843838 master-0 kubenswrapper[3958]: I0319 11:53:37.843574 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ed2dbd1-aec4-4009-917a-933533912ab5-config\") pod \"openshift-controller-manager-operator-8c94f4649-gx4w8\" (UID: \"9ed2dbd1-aec4-4009-917a-933533912ab5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8" Mar 19 11:53:37.843838 master-0 kubenswrapper[3958]: E0319 11:53:37.843736 3958 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 19 11:53:37.843838 master-0 kubenswrapper[3958]: I0319 11:53:37.843762 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6" Mar 19 11:53:37.843838 master-0 kubenswrapper[3958]: E0319 
11:53:37.843777 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls podName:b80027fd-7b39-477a-a337-ff9bb08e7eeb nodeName:}" failed. No retries permitted until 2026-03-19 11:53:38.343762149 +0000 UTC m=+149.017483331 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls") pod "ingress-operator-66b84d69b-btppx" (UID: "b80027fd-7b39-477a-a337-ff9bb08e7eeb") : secret "metrics-tls" not found Mar 19 11:53:37.844245 master-0 kubenswrapper[3958]: I0319 11:53:37.843893 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9702fc8c-4fe0-413b-b2d4-db23021d42b8-config\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" Mar 19 11:53:37.844245 master-0 kubenswrapper[3958]: I0319 11:53:37.843993 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/aef8e03f-0363-4e13-b7ca-4fa871d77c62-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-nhvl4\" (UID: \"aef8e03f-0363-4e13-b7ca-4fa871d77c62\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" Mar 19 11:53:37.845352 master-0 kubenswrapper[3958]: I0319 11:53:37.845317 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aef8e03f-0363-4e13-b7ca-4fa871d77c62-serving-cert\") pod \"openshift-config-operator-95bf4f4d-nhvl4\" (UID: \"aef8e03f-0363-4e13-b7ca-4fa871d77c62\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" Mar 19 11:53:37.846400 master-0 kubenswrapper[3958]: I0319 11:53:37.846360 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9702fc8c-4fe0-413b-b2d4-db23021d42b8-serving-cert\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" Mar 19 11:53:37.846480 master-0 kubenswrapper[3958]: I0319 11:53:37.846362 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9702fc8c-4fe0-413b-b2d4-db23021d42b8-etcd-client\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" Mar 19 11:53:37.847685 master-0 kubenswrapper[3958]: I0319 11:53:37.847651 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ed2dbd1-aec4-4009-917a-933533912ab5-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-gx4w8\" (UID: \"9ed2dbd1-aec4-4009-917a-933533912ab5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8" Mar 19 11:53:38.249000 master-0 kubenswrapper[3958]: I0319 11:53:38.248890 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:38.249373 master-0 kubenswrapper[3958]: E0319 11:53:38.249141 3958 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 19 11:53:38.249373 master-0 kubenswrapper[3958]: I0319 11:53:38.249234 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-pr7gk\" (UID: \"b0f5939c-48b1-4d6c-9712-9128a78d603b\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 11:53:38.249373 master-0 kubenswrapper[3958]: E0319 11:53:38.249255 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert podName:63c12a89-1b49-4eba-8f5a-551b10d2246b nodeName:}" failed. No retries permitted until 2026-03-19 11:53:39.249231048 +0000 UTC m=+149.922952230 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-zfsqt" (UID: "63c12a89-1b49-4eba-8f5a-551b10d2246b") : secret "performance-addon-operator-webhook-cert" not found Mar 19 11:53:38.249373 master-0 kubenswrapper[3958]: E0319 11:53:38.249321 3958 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 19 11:53:38.249373 master-0 kubenswrapper[3958]: E0319 11:53:38.249347 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics podName:b0f5939c-48b1-4d6c-9712-9128a78d603b nodeName:}" failed. No retries permitted until 2026-03-19 11:53:39.249339181 +0000 UTC m=+149.923060363 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-pr7gk" (UID: "b0f5939c-48b1-4d6c-9712-9128a78d603b") : secret "marketplace-operator-metrics" not found Mar 19 11:53:38.249598 master-0 kubenswrapper[3958]: I0319 11:53:38.249440 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-6j2nj\" (UID: \"beb562de-402b-4d9f-b5ed-090b60847a95\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj" Mar 19 11:53:38.249598 master-0 kubenswrapper[3958]: I0319 11:53:38.249482 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:38.249598 master-0 kubenswrapper[3958]: I0319 11:53:38.249542 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert\") pod \"olm-operator-5c9796789-8cldl\" (UID: \"87a3f546-e1c1-42a1-b80e-d45b6d5c0a04\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl" Mar 19 11:53:38.249731 master-0 kubenswrapper[3958]: I0319 11:53:38.249609 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls\") pod \"dns-operator-9c5679d8f-z6kvm\" (UID: \"ab54833d-e57b-479d-b171-68155f6566f1\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-z6kvm" Mar 19 11:53:38.249731 master-0 kubenswrapper[3958]: I0319 11:53:38.249635 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw" Mar 19 11:53:38.249731 master-0 kubenswrapper[3958]: I0319 11:53:38.249660 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:38.249731 master-0 kubenswrapper[3958]: I0319 11:53:38.249700 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:38.249731 master-0 kubenswrapper[3958]: I0319 11:53:38.249723 3958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-92c5d\" (UID: \"7241bf11-192e-47db-9d80-2324938ed34c\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d" Mar 19 11:53:38.250433 master-0 kubenswrapper[3958]: E0319 11:53:38.250068 3958 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 19 11:53:38.250433 master-0 kubenswrapper[3958]: E0319 11:53:38.250101 3958 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 19 11:53:38.250433 master-0 kubenswrapper[3958]: E0319 11:53:38.250112 3958 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 19 11:53:38.250433 master-0 kubenswrapper[3958]: E0319 11:53:38.250113 3958 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Mar 19 11:53:38.250433 master-0 kubenswrapper[3958]: E0319 11:53:38.250255 3958 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 19 11:53:38.250433 master-0 kubenswrapper[3958]: E0319 11:53:38.250122 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert podName:19de6601-10d4-4112-a21f-0398d2b160d1 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:39.250114516 +0000 UTC m=+149.923835698 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert") pod "cluster-baremetal-operator-6f69995874-ftml6" (UID: "19de6601-10d4-4112-a21f-0398d2b160d1") : secret "cluster-baremetal-webhook-server-cert" not found Mar 19 11:53:38.250433 master-0 kubenswrapper[3958]: E0319 11:53:38.250387 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls podName:ab54833d-e57b-479d-b171-68155f6566f1 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:39.250356224 +0000 UTC m=+149.924077576 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls") pod "dns-operator-9c5679d8f-z6kvm" (UID: "ab54833d-e57b-479d-b171-68155f6566f1") : secret "metrics-tls" not found Mar 19 11:53:38.250433 master-0 kubenswrapper[3958]: E0319 11:53:38.250206 3958 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 19 11:53:38.250739 master-0 kubenswrapper[3958]: E0319 11:53:38.250212 3958 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 19 11:53:38.250739 master-0 kubenswrapper[3958]: E0319 11:53:38.250260 3958 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 19 11:53:38.250739 master-0 kubenswrapper[3958]: E0319 11:53:38.250408 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls podName:63c12a89-1b49-4eba-8f5a-551b10d2246b nodeName:}" failed. No retries permitted until 2026-03-19 11:53:39.250395875 +0000 UTC m=+149.924117277 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-zfsqt" (UID: "63c12a89-1b49-4eba-8f5a-551b10d2246b") : secret "node-tuning-operator-tls" not found Mar 19 11:53:38.250739 master-0 kubenswrapper[3958]: E0319 11:53:38.250675 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls podName:d3541cbe-3be0-40d3-89d2-b5937b6a8f47 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:39.250651293 +0000 UTC m=+149.924372705 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls") pod "machine-config-operator-84d549f6d5-lswqw" (UID: "d3541cbe-3be0-40d3-89d2-b5937b6a8f47") : secret "mco-proxy-tls" not found Mar 19 11:53:38.250739 master-0 kubenswrapper[3958]: E0319 11:53:38.250707 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls podName:19de6601-10d4-4112-a21f-0398d2b160d1 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:39.250693844 +0000 UTC m=+149.924415236 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-ftml6" (UID: "19de6601-10d4-4112-a21f-0398d2b160d1") : secret "cluster-baremetal-operator-tls" not found Mar 19 11:53:38.250739 master-0 kubenswrapper[3958]: E0319 11:53:38.250727 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert podName:87a3f546-e1c1-42a1-b80e-d45b6d5c0a04 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:39.250720485 +0000 UTC m=+149.924441867 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert") pod "olm-operator-5c9796789-8cldl" (UID: "87a3f546-e1c1-42a1-b80e-d45b6d5c0a04") : secret "olm-operator-serving-cert" not found Mar 19 11:53:38.250739 master-0 kubenswrapper[3958]: E0319 11:53:38.250747 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert podName:beb562de-402b-4d9f-b5ed-090b60847a95 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:39.250739456 +0000 UTC m=+149.924460838 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-6j2nj" (UID: "beb562de-402b-4d9f-b5ed-090b60847a95") : secret "package-server-manager-serving-cert" not found Mar 19 11:53:38.251060 master-0 kubenswrapper[3958]: E0319 11:53:38.250769 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls podName:7241bf11-192e-47db-9d80-2324938ed34c nodeName:}" failed. No retries permitted until 2026-03-19 11:53:39.250762627 +0000 UTC m=+149.924484009 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-92c5d" (UID: "7241bf11-192e-47db-9d80-2324938ed34c") : secret "cluster-monitoring-operator-tls" not found Mar 19 11:53:38.311537 master-0 kubenswrapper[3958]: I0319 11:53:38.298641 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-276t5"] Mar 19 11:53:38.311537 master-0 kubenswrapper[3958]: I0319 11:53:38.300198 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-2chdm"] Mar 19 11:53:38.311537 master-0 kubenswrapper[3958]: I0319 11:53:38.300299 3958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-276t5" Mar 19 11:53:38.311537 master-0 kubenswrapper[3958]: I0319 11:53:38.303790 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-89ccd998f-pr7gk"] Mar 19 11:53:38.311537 master-0 kubenswrapper[3958]: I0319 11:53:38.304165 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8"] Mar 19 11:53:38.311537 master-0 kubenswrapper[3958]: I0319 11:53:38.304181 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6"] Mar 19 11:53:38.311537 master-0 kubenswrapper[3958]: I0319 11:53:38.305400 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg"] Mar 19 11:53:38.311537 master-0 kubenswrapper[3958]: I0319 11:53:38.311287 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 19 11:53:38.338497 master-0 kubenswrapper[3958]: I0319 11:53:38.319220 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-6wzws"] Mar 19 11:53:38.338497 master-0 kubenswrapper[3958]: I0319 11:53:38.319275 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6"] Mar 19 11:53:38.338497 master-0 kubenswrapper[3958]: I0319 11:53:38.319291 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-ptqdh"] Mar 19 11:53:38.338497 master-0 kubenswrapper[3958]: I0319 11:53:38.327513 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khv2z\" (UniqueName: \"kubernetes.io/projected/a7747954-a222-4809-8656-818203b55ee8-kube-api-access-khv2z\") pod \"csi-snapshot-controller-operator-5f5d689c6b-2chdm\" (UID: \"a7747954-a222-4809-8656-818203b55ee8\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-2chdm" Mar 19 11:53:38.338497 master-0 kubenswrapper[3958]: I0319 11:53:38.327968 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hq8f\" (UniqueName: \"kubernetes.io/projected/661b8957-a890-4032-9e57-45e2e0b35249-kube-api-access-8hq8f\") pod \"service-ca-operator-b865698dc-sxsxt\" (UID: \"661b8957-a890-4032-9e57-45e2e0b35249\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt" Mar 19 11:53:38.338497 master-0 kubenswrapper[3958]: I0319 11:53:38.329002 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mr6d\" (UniqueName: \"kubernetes.io/projected/beb562de-402b-4d9f-b5ed-090b60847a95-kube-api-access-9mr6d\") pod \"package-server-manager-7b95f86987-6j2nj\" (UID: \"beb562de-402b-4d9f-b5ed-090b60847a95\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj" Mar 19 11:53:38.338497 master-0 kubenswrapper[3958]: I0319 11:53:38.332013 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06df1b1b-154e-46f9-aee0-79a137c6c928-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-ptqdh\" (UID: \"06df1b1b-154e-46f9-aee0-79a137c6c928\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-ptqdh" Mar 19 11:53:38.338497 master-0 kubenswrapper[3958]: I0319 11:53:38.334952 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h84l9\" (UniqueName: \"kubernetes.io/projected/f08c5930-44f0-48e4-80dd-2563f2733b2f-kube-api-access-h84l9\") pod \"openshift-apiserver-operator-d65958b8-mjs7x\" (UID: \"f08c5930-44f0-48e4-80dd-2563f2733b2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x" Mar 19 11:53:38.338497 master-0 kubenswrapper[3958]: I0319 11:53:38.335430 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b80027fd-7b39-477a-a337-ff9bb08e7eeb-bound-sa-token\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" Mar 19 11:53:38.338497 master-0 kubenswrapper[3958]: I0319 11:53:38.335466 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj"] Mar 19 11:53:38.338497 master-0 kubenswrapper[3958]: I0319 11:53:38.336342 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4"] Mar 19 11:53:38.340357 master-0 kubenswrapper[3958]: I0319 11:53:38.339417 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4"] Mar 19 11:53:38.340357 master-0 kubenswrapper[3958]: I0319 11:53:38.339819 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zntzt\" (UniqueName: \"kubernetes.io/projected/0f97d998-530c-4d9d-a030-ca1d9d2d4490-kube-api-access-zntzt\") pod \"cluster-storage-operator-7d87854d6-6wzws\" (UID: \"0f97d998-530c-4d9d-a030-ca1d9d2d4490\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-6wzws" Mar 19 11:53:38.343825 master-0 kubenswrapper[3958]: I0319 11:53:38.341464 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl6d7\" (UniqueName: \"kubernetes.io/projected/ab54833d-e57b-479d-b171-68155f6566f1-kube-api-access-gl6d7\") pod \"dns-operator-9c5679d8f-z6kvm\" (UID: \"ab54833d-e57b-479d-b171-68155f6566f1\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-z6kvm" Mar 19 11:53:38.343825 master-0 kubenswrapper[3958]: I0319 11:53:38.341931 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bst2w\" (UniqueName: \"kubernetes.io/projected/63c12a89-1b49-4eba-8f5a-551b10d2246b-kube-api-access-bst2w\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:38.343825 master-0 kubenswrapper[3958]: I0319 11:53:38.342284 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsk9d\" (UniqueName: \"kubernetes.io/projected/9ed2dbd1-aec4-4009-917a-933533912ab5-kube-api-access-gsk9d\") pod \"openshift-controller-manager-operator-8c94f4649-gx4w8\" (UID: \"9ed2dbd1-aec4-4009-917a-933533912ab5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8" Mar 19 11:53:38.343825 master-0 kubenswrapper[3958]: I0319 11:53:38.342393 3958 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1089ea24-add9-482e-9276-e6ded12052d7-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-qv4cg\" (UID: \"1089ea24-add9-482e-9276-e6ded12052d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg" Mar 19 11:53:38.343825 master-0 kubenswrapper[3958]: I0319 11:53:38.342480 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2151eb84-177e-459c-be71-f48465323ac2-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-fjn4b\" (UID: \"2151eb84-177e-459c-be71-f48465323ac2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fjn4b" Mar 19 11:53:38.343825 master-0 kubenswrapper[3958]: I0319 11:53:38.343262 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tqdb\" (UniqueName: \"kubernetes.io/projected/b0f5939c-48b1-4d6c-9712-9128a78d603b-kube-api-access-6tqdb\") pod \"marketplace-operator-89ccd998f-pr7gk\" (UID: \"b0f5939c-48b1-4d6c-9712-9128a78d603b\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 11:53:38.343825 master-0 kubenswrapper[3958]: I0319 11:53:38.343593 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwfg5\" (UniqueName: \"kubernetes.io/projected/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-kube-api-access-hwfg5\") pod \"olm-operator-5c9796789-8cldl\" (UID: \"87a3f546-e1c1-42a1-b80e-d45b6d5c0a04\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl" Mar 19 11:53:38.343825 master-0 kubenswrapper[3958]: I0319 11:53:38.343641 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5n89\" (UniqueName: \"kubernetes.io/projected/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f-kube-api-access-h5n89\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk\" (UID: \"d9ab6ec4-eec9-4d27-8b43-2aaf954f098f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk" Mar 19 11:53:38.344340 master-0 kubenswrapper[3958]: I0319 11:53:38.344014 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npc2t\" (UniqueName: \"kubernetes.io/projected/c2dbd8b3-0e02-4747-a166-80aa6a94b060-kube-api-access-npc2t\") pod \"cluster-olm-operator-67dcd4998-cg9pq\" (UID: \"c2dbd8b3-0e02-4747-a166-80aa6a94b060\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cg9pq" Mar 19 11:53:38.346709 master-0 kubenswrapper[3958]: I0319 11:53:38.345774 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5mkm\" (UniqueName: \"kubernetes.io/projected/7241bf11-192e-47db-9d80-2324938ed34c-kube-api-access-s5mkm\") pod \"cluster-monitoring-operator-58845fbb57-92c5d\" (UID: \"7241bf11-192e-47db-9d80-2324938ed34c\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d" Mar 19 11:53:38.346709 master-0 kubenswrapper[3958]: I0319 11:53:38.346483 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6" Mar 19 11:53:38.347295 master-0 kubenswrapper[3958]: I0319 
11:53:38.347257 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv6bc\" (UniqueName: \"kubernetes.io/projected/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-kube-api-access-pv6bc\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw" Mar 19 11:53:38.347391 master-0 kubenswrapper[3958]: I0319 11:53:38.347358 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6wm6\" (UniqueName: \"kubernetes.io/projected/d3017b5e-178e-49de-89d2-817a18398203-kube-api-access-b6wm6\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" Mar 19 11:53:38.347961 master-0 kubenswrapper[3958]: I0319 11:53:38.347918 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfl29\" (UniqueName: \"kubernetes.io/projected/806a4c30-7b93-4430-86da-f9e1f4f2d206-kube-api-access-dfl29\") pod \"multus-admission-controller-5dbbb8b86f-fz8cg\" (UID: \"806a4c30-7b93-4430-86da-f9e1f4f2d206\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg" Mar 19 11:53:38.349718 master-0 kubenswrapper[3958]: I0319 11:53:38.349064 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs4jf\" (UniqueName: \"kubernetes.io/projected/b80027fd-7b39-477a-a337-ff9bb08e7eeb-kube-api-access-hs4jf\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" Mar 19 11:53:38.351125 master-0 kubenswrapper[3958]: I0319 11:53:38.351073 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5bmd\" (UniqueName: \"kubernetes.io/projected/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-kube-api-access-c5bmd\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6" Mar 19 11:53:38.351628 master-0 kubenswrapper[3958]: I0319 11:53:38.351574 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xpc2\" (UniqueName: \"kubernetes.io/projected/19de6601-10d4-4112-a21f-0398d2b160d1-kube-api-access-6xpc2\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:38.351700 master-0 kubenswrapper[3958]: I0319 11:53:38.351594 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdpj4\" (UniqueName: \"kubernetes.io/projected/06f67c28-34fd-4356-92f0-edd0986ad34e-kube-api-access-bdpj4\") pod \"iptables-alerter-276t5\" (UID: \"06f67c28-34fd-4356-92f0-edd0986ad34e\") " pod="openshift-network-operator/iptables-alerter-276t5" Mar 19 11:53:38.351754 master-0 kubenswrapper[3958]: I0319 11:53:38.351734 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/06f67c28-34fd-4356-92f0-edd0986ad34e-host-slash\") pod \"iptables-alerter-276t5\" (UID: \"06f67c28-34fd-4356-92f0-edd0986ad34e\") " pod="openshift-network-operator/iptables-alerter-276t5" Mar 19 11:53:38.351950 master-0 kubenswrapper[3958]: I0319 
11:53:38.351911 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" Mar 19 11:53:38.351950 master-0 kubenswrapper[3958]: I0319 11:53:38.351940 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/06f67c28-34fd-4356-92f0-edd0986ad34e-iptables-alerter-script\") pod \"iptables-alerter-276t5\" (UID: \"06f67c28-34fd-4356-92f0-edd0986ad34e\") " pod="openshift-network-operator/iptables-alerter-276t5" Mar 19 11:53:38.352052 master-0 kubenswrapper[3958]: I0319 11:53:38.351965 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-fz8cg\" (UID: \"806a4c30-7b93-4430-86da-f9e1f4f2d206\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg" Mar 19 11:53:38.352052 master-0 kubenswrapper[3958]: I0319 11:53:38.351990 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6" Mar 19 11:53:38.352052 master-0 kubenswrapper[3958]: I0319 11:53:38.352018 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert\") pod \"catalog-operator-68f85b4d6c-2trz4\" (UID: \"bdcdb23d-ef1f-45e2-b9ac-7abf707637b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4" Mar 19 11:53:38.352167 master-0 kubenswrapper[3958]: E0319 11:53:38.352120 3958 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 19 11:53:38.352167 master-0 kubenswrapper[3958]: E0319 11:53:38.352159 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert podName:bdcdb23d-ef1f-45e2-b9ac-7abf707637b6 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:39.352145611 +0000 UTC m=+150.025866793 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert") pod "catalog-operator-68f85b4d6c-2trz4" (UID: "bdcdb23d-ef1f-45e2-b9ac-7abf707637b6") : secret "catalog-operator-serving-cert" not found Mar 19 11:53:38.352267 master-0 kubenswrapper[3958]: E0319 11:53:38.352203 3958 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 19 11:53:38.352267 master-0 kubenswrapper[3958]: E0319 11:53:38.352220 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls podName:b80027fd-7b39-477a-a337-ff9bb08e7eeb nodeName:}" failed. 
No retries permitted until 2026-03-19 11:53:39.352214763 +0000 UTC m=+150.025935945 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls") pod "ingress-operator-66b84d69b-btppx" (UID: "b80027fd-7b39-477a-a337-ff9bb08e7eeb") : secret "metrics-tls" not found Mar 19 11:53:38.352267 master-0 kubenswrapper[3958]: E0319 11:53:38.352258 3958 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 19 11:53:38.352387 master-0 kubenswrapper[3958]: E0319 11:53:38.352275 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs podName:806a4c30-7b93-4430-86da-f9e1f4f2d206 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:39.352269935 +0000 UTC m=+150.025991117 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-fz8cg" (UID: "806a4c30-7b93-4430-86da-f9e1f4f2d206") : secret "multus-admission-controller-secret" not found Mar 19 11:53:38.352387 master-0 kubenswrapper[3958]: E0319 11:53:38.352307 3958 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 19 11:53:38.352387 master-0 kubenswrapper[3958]: E0319 11:53:38.352323 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls podName:82b98dca-59f9-42be-94ca-4a2a2b6fea0f nodeName:}" failed. No retries permitted until 2026-03-19 11:53:39.352318267 +0000 UTC m=+150.026039449 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-g6sn6" (UID: "82b98dca-59f9-42be-94ca-4a2a2b6fea0f") : secret "image-registry-operator-tls" not found Mar 19 11:53:38.353168 master-0 kubenswrapper[3958]: I0319 11:53:38.353114 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x252z\" (UniqueName: \"kubernetes.io/projected/aef8e03f-0363-4e13-b7ca-4fa871d77c62-kube-api-access-x252z\") pod \"openshift-config-operator-95bf4f4d-nhvl4\" (UID: \"aef8e03f-0363-4e13-b7ca-4fa871d77c62\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" Mar 19 11:53:38.356695 master-0 kubenswrapper[3958]: I0319 11:53:38.356631 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnd9c\" (UniqueName: \"kubernetes.io/projected/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-kube-api-access-jnd9c\") pod \"catalog-operator-68f85b4d6c-2trz4\" (UID: \"bdcdb23d-ef1f-45e2-b9ac-7abf707637b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4" Mar 19 11:53:38.362090 master-0 kubenswrapper[3958]: I0319 11:53:38.362048 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpdts\" (UniqueName: \"kubernetes.io/projected/9702fc8c-4fe0-413b-b2d4-db23021d42b8-kube-api-access-tpdts\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" Mar 19 11:53:38.440459 master-0 kubenswrapper[3958]: I0319 11:53:38.440362 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fjn4b" Mar 19 11:53:38.455518 master-0 kubenswrapper[3958]: I0319 11:53:38.453250 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/06f67c28-34fd-4356-92f0-edd0986ad34e-host-slash\") pod \"iptables-alerter-276t5\" (UID: \"06f67c28-34fd-4356-92f0-edd0986ad34e\") " pod="openshift-network-operator/iptables-alerter-276t5" Mar 19 11:53:38.455518 master-0 kubenswrapper[3958]: I0319 11:53:38.453325 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/06f67c28-34fd-4356-92f0-edd0986ad34e-host-slash\") pod \"iptables-alerter-276t5\" (UID: \"06f67c28-34fd-4356-92f0-edd0986ad34e\") " pod="openshift-network-operator/iptables-alerter-276t5" Mar 19 11:53:38.455518 master-0 kubenswrapper[3958]: I0319 11:53:38.453505 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/06f67c28-34fd-4356-92f0-edd0986ad34e-iptables-alerter-script\") pod \"iptables-alerter-276t5\" (UID: \"06f67c28-34fd-4356-92f0-edd0986ad34e\") " pod="openshift-network-operator/iptables-alerter-276t5" Mar 19 11:53:38.455518 master-0 kubenswrapper[3958]: I0319 11:53:38.453572 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdpj4\" (UniqueName: \"kubernetes.io/projected/06f67c28-34fd-4356-92f0-edd0986ad34e-kube-api-access-bdpj4\") pod \"iptables-alerter-276t5\" (UID: \"06f67c28-34fd-4356-92f0-edd0986ad34e\") " pod="openshift-network-operator/iptables-alerter-276t5" Mar 19 11:53:38.455518 master-0 
kubenswrapper[3958]: I0319 11:53:38.454748 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/06f67c28-34fd-4356-92f0-edd0986ad34e-iptables-alerter-script\") pod \"iptables-alerter-276t5\" (UID: \"06f67c28-34fd-4356-92f0-edd0986ad34e\") " pod="openshift-network-operator/iptables-alerter-276t5" Mar 19 11:53:38.460448 master-0 kubenswrapper[3958]: I0319 11:53:38.460391 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" Mar 19 11:53:38.481640 master-0 kubenswrapper[3958]: I0319 11:53:38.481574 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8" Mar 19 11:53:38.501279 master-0 kubenswrapper[3958]: I0319 11:53:38.501031 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" Mar 19 11:53:38.525299 master-0 kubenswrapper[3958]: I0319 11:53:38.525226 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt" Mar 19 11:53:38.540280 master-0 kubenswrapper[3958]: I0319 11:53:38.539063 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-2chdm" Mar 19 11:53:38.565859 master-0 kubenswrapper[3958]: I0319 11:53:38.562917 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-6wzws" Mar 19 11:53:38.570437 master-0 kubenswrapper[3958]: I0319 11:53:38.569961 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cg9pq" Mar 19 11:53:38.577906 master-0 kubenswrapper[3958]: I0319 11:53:38.577852 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x" Mar 19 11:53:38.585667 master-0 kubenswrapper[3958]: I0319 11:53:38.585518 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" Mar 19 11:53:38.601574 master-0 kubenswrapper[3958]: I0319 11:53:38.600427 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-ptqdh" Mar 19 11:53:38.617463 master-0 kubenswrapper[3958]: I0319 11:53:38.617419 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk" Mar 19 11:53:38.629091 master-0 kubenswrapper[3958]: I0319 11:53:38.629046 3958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg" Mar 19 11:53:38.749459 master-0 kubenswrapper[3958]: I0319 11:53:38.749424 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdpj4\" (UniqueName: \"kubernetes.io/projected/06f67c28-34fd-4356-92f0-edd0986ad34e-kube-api-access-bdpj4\") pod \"iptables-alerter-276t5\" (UID: \"06f67c28-34fd-4356-92f0-edd0986ad34e\") " pod="openshift-network-operator/iptables-alerter-276t5" Mar 19 11:53:38.875563 master-0 kubenswrapper[3958]: I0319 11:53:38.874986 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8"] Mar 19 11:53:38.875563 master-0 kubenswrapper[3958]: I0319 11:53:38.875519 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz"] Mar 19 11:53:38.911213 master-0 kubenswrapper[3958]: I0319 11:53:38.911089 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fjn4b"] Mar 19 11:53:38.911742 master-0 kubenswrapper[3958]: I0319 11:53:38.911716 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4"] Mar 19 11:53:38.942016 master-0 kubenswrapper[3958]: W0319 11:53:38.941901 3958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2151eb84_177e_459c_be71_f48465323ac2.slice/crio-16106b77f2e7c1585811668327c4be2d10fe7576f2ed79d2b198fca95be86d2c WatchSource:0}: Error finding container 16106b77f2e7c1585811668327c4be2d10fe7576f2ed79d2b198fca95be86d2c: Status 404 returned error can't find the container with id 16106b77f2e7c1585811668327c4be2d10fe7576f2ed79d2b198fca95be86d2c Mar 19 11:53:38.953566 master-0 kubenswrapper[3958]: I0319 11:53:38.952105 3958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-276t5" Mar 19 11:53:38.960633 master-0 kubenswrapper[3958]: I0319 11:53:38.959523 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk"] Mar 19 11:53:38.960633 master-0 kubenswrapper[3958]: I0319 11:53:38.960179 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq"] Mar 19 11:53:38.982311 master-0 kubenswrapper[3958]: W0319 11:53:38.982223 3958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06f67c28_34fd_4356_92f0_edd0986ad34e.slice/crio-732ea3fa30562cbb548cbd63878cf98bf16844c3fd2ba6668a55873990319c2d WatchSource:0}: Error finding container 732ea3fa30562cbb548cbd63878cf98bf16844c3fd2ba6668a55873990319c2d: Status 404 returned error can't find the container with id 732ea3fa30562cbb548cbd63878cf98bf16844c3fd2ba6668a55873990319c2d Mar 19 11:53:38.997990 master-0 kubenswrapper[3958]: I0319 11:53:38.997946 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg"] Mar 19 11:53:39.002758 master-0 kubenswrapper[3958]: W0319 11:53:39.002708 3958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1089ea24_add9_482e_9276_e6ded12052d7.slice/crio-89df1c468dcab6a092003dfdb9054af97d313c0e8ba73ec68b30b7001eab90ba WatchSource:0}: Error finding container 89df1c468dcab6a092003dfdb9054af97d313c0e8ba73ec68b30b7001eab90ba: Status 404 returned error can't find the container with id 89df1c468dcab6a092003dfdb9054af97d313c0e8ba73ec68b30b7001eab90ba Mar 19 11:53:39.012426 master-0 kubenswrapper[3958]: I0319 11:53:39.012389 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cg9pq"] Mar 19 11:53:39.023865 master-0 kubenswrapper[3958]: W0319 11:53:39.019524 3958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2dbd8b3_0e02_4747_a166_80aa6a94b060.slice/crio-1fc61313284938071ce89eb1211d46af435fab3cccf3e32e3b4afcbf9419655a WatchSource:0}: Error finding container 1fc61313284938071ce89eb1211d46af435fab3cccf3e32e3b4afcbf9419655a: Status 404 returned error can't find the container with id 1fc61313284938071ce89eb1211d46af435fab3cccf3e32e3b4afcbf9419655a Mar 19 11:53:39.072892 master-0 kubenswrapper[3958]: I0319 11:53:39.072836 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-2chdm"] Mar 19 11:53:39.076157 master-0 kubenswrapper[3958]: I0319 11:53:39.076120 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-6wzws"] Mar 19 11:53:39.081423 master-0 kubenswrapper[3958]: W0319 11:53:39.081359 3958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7747954_a222_4809_8656_818203b55ee8.slice/crio-d6af7e6099bbf70f75032e24a3ecb1fbb8bf546e42f6f7c74e6f9a42396249e8 WatchSource:0}: Error finding container d6af7e6099bbf70f75032e24a3ecb1fbb8bf546e42f6f7c74e6f9a42396249e8: Status 404 returned error can't find the container with id 
Mar 19 11:53:39.081423 master-0 kubenswrapper[3958]: W0319 11:53:39.081359 3958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7747954_a222_4809_8656_818203b55ee8.slice/crio-d6af7e6099bbf70f75032e24a3ecb1fbb8bf546e42f6f7c74e6f9a42396249e8 WatchSource:0}: Error finding container d6af7e6099bbf70f75032e24a3ecb1fbb8bf546e42f6f7c74e6f9a42396249e8: Status 404 returned error can't find the container with id d6af7e6099bbf70f75032e24a3ecb1fbb8bf546e42f6f7c74e6f9a42396249e8
Mar 19 11:53:39.081612 master-0 kubenswrapper[3958]: I0319 11:53:39.081585 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt"]
Mar 19 11:53:39.083060 master-0 kubenswrapper[3958]: I0319 11:53:39.083010 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x"]
Mar 19 11:53:39.091224 master-0 kubenswrapper[3958]: W0319 11:53:39.090657 3958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod661b8957_a890_4032_9e57_45e2e0b35249.slice/crio-58d1369a13582afcb1d55c539b0bf53f7dd57cd88bda12b4a87bc5a6b8e84cbc WatchSource:0}: Error finding container 58d1369a13582afcb1d55c539b0bf53f7dd57cd88bda12b4a87bc5a6b8e84cbc: Status 404 returned error can't find the container with id 58d1369a13582afcb1d55c539b0bf53f7dd57cd88bda12b4a87bc5a6b8e84cbc
Mar 19 11:53:39.092975 master-0 kubenswrapper[3958]: W0319 11:53:39.092920 3958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf08c5930_44f0_48e4_80dd_2563f2733b2f.slice/crio-cafdcda3b6318eaf62d6677cace1d3a0a0dbcf1889d817ce08bcc768e4b05288 WatchSource:0}: Error finding container cafdcda3b6318eaf62d6677cace1d3a0a0dbcf1889d817ce08bcc768e4b05288: Status 404 returned error can't find the container with id cafdcda3b6318eaf62d6677cace1d3a0a0dbcf1889d817ce08bcc768e4b05288
Mar 19 11:53:39.095166 master-0 kubenswrapper[3958]: E0319 11:53:39.095084 3958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:service-ca-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263,Command:[service-ca-operator operator],Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=2],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.35,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{83886080 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8hq8f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod service-ca-operator-b865698dc-sxsxt_openshift-service-ca-operator(661b8957-a890-4032-9e57-45e2e0b35249): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Mar 19 11:53:39.096114 master-0 kubenswrapper[3958]: E0319 11:53:39.096056 3958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openshift-apiserver-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e,Command:[cluster-openshift-apiserver-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae50e496bd6ae2d27298d997470b7cb0a426eeb8b7e2e9c7187a34cb03993998,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.35,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.35,ValueFrom:nil,},EnvVar{Name:KUBE_APISERVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h84l9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-apiserver-operator-d65958b8-mjs7x_openshift-apiserver-operator(f08c5930-44f0-48e4-80dd-2563f2733b2f): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Mar 19 11:53:39.097197 master-0 kubenswrapper[3958]: E0319 11:53:39.097169 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver-operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x" podUID="f08c5930-44f0-48e4-80dd-2563f2733b2f"
Mar 19 11:53:39.097271 master-0 kubenswrapper[3958]: E0319 11:53:39.097189 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt" podUID="661b8957-a890-4032-9e57-45e2e0b35249"
Mar 19 11:53:39.145195 master-0 kubenswrapper[3958]: I0319 11:53:39.145164 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-ptqdh"]
Mar 19 11:53:39.148633 master-0 kubenswrapper[3958]: W0319 11:53:39.148581 3958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06df1b1b_154e_46f9_aee0_79a137c6c928.slice/crio-63407ab3b9286932d0ec228766542c2a928958a160c225ce1f7624b3e7a02447 WatchSource:0}: Error finding container 63407ab3b9286932d0ec228766542c2a928958a160c225ce1f7624b3e7a02447: Status 404 returned error can't find the container with id 63407ab3b9286932d0ec228766542c2a928958a160c225ce1f7624b3e7a02447
Mar 19 11:53:39.272430 master-0 kubenswrapper[3958]: I0319 11:53:39.272378 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6"
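The two "ErrImagePull: pull QPS exceeded" failures above are produced by the kubelet itself, not by the registry: image pulls pass through a token-bucket rate limiter sized by the kubelet's registryPullQPS/registryBurst settings (5 QPS with a burst of 10 by default), and a boot-time rush of operator images drains the bucket. A toy stdlib-only sketch of that failure mode, with the limits assumed to be the defaults (illustrative only, not the kubelet's actual pull path):

```go
package main

import (
	"fmt"
	"time"
)

// Token bucket with assumed kubelet defaults: refill 5 tokens/s, capacity 10.
type bucket struct {
	tokens float64   // currently available tokens
	last   time.Time // time of the last refill
	qps    float64   // refill rate, tokens per second
	burst  float64   // bucket capacity
}

func (b *bucket) allow(now time.Time) bool {
	b.tokens += now.Sub(b.last).Seconds() * b.qps
	b.last = now
	if b.tokens > b.burst {
		b.tokens = b.burst
	}
	if b.tokens < 1 {
		return false // surfaces in the log as "ErrImagePull: pull QPS exceeded"
	}
	b.tokens--
	return true
}

func main() {
	b := &bucket{tokens: 10, last: time.Now(), qps: 5, burst: 10}
	// A burst of back-to-back pulls, as during node startup: the first ten
	// consume the burst capacity, the rest fail immediately.
	for i := 1; i <= 12; i++ {
		fmt.Printf("pull %2d admitted=%v\n", i, b.allow(time.Now()))
	}
}
```

The affected pods are not lost; the kubelet simply retries them, which is why the same two containers show up with ImagePullBackOff in the entries that follow.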
\"kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-92c5d\" (UID: \"7241bf11-192e-47db-9d80-2324938ed34c\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d" Mar 19 11:53:39.272645 master-0 kubenswrapper[3958]: E0319 11:53:39.272603 3958 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 19 11:53:39.272719 master-0 kubenswrapper[3958]: E0319 11:53:39.272685 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert podName:19de6601-10d4-4112-a21f-0398d2b160d1 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:41.272666113 +0000 UTC m=+151.946387485 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert") pod "cluster-baremetal-operator-6f69995874-ftml6" (UID: "19de6601-10d4-4112-a21f-0398d2b160d1") : secret "cluster-baremetal-webhook-server-cert" not found Mar 19 11:53:39.272812 master-0 kubenswrapper[3958]: E0319 11:53:39.272746 3958 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 19 11:53:39.272812 master-0 kubenswrapper[3958]: E0319 11:53:39.272786 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls podName:7241bf11-192e-47db-9d80-2324938ed34c nodeName:}" failed. No retries permitted until 2026-03-19 11:53:41.272775186 +0000 UTC m=+151.946496368 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-92c5d" (UID: "7241bf11-192e-47db-9d80-2324938ed34c") : secret "cluster-monitoring-operator-tls" not found Mar 19 11:53:39.272921 master-0 kubenswrapper[3958]: I0319 11:53:39.272897 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:39.273017 master-0 kubenswrapper[3958]: E0319 11:53:39.272999 3958 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 19 11:53:39.273093 master-0 kubenswrapper[3958]: E0319 11:53:39.273038 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert podName:63c12a89-1b49-4eba-8f5a-551b10d2246b nodeName:}" failed. No retries permitted until 2026-03-19 11:53:41.273025914 +0000 UTC m=+151.946747306 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-zfsqt" (UID: "63c12a89-1b49-4eba-8f5a-551b10d2246b") : secret "performance-addon-operator-webhook-cert" not found Mar 19 11:53:39.273093 master-0 kubenswrapper[3958]: I0319 11:53:39.273058 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-pr7gk\" (UID: \"b0f5939c-48b1-4d6c-9712-9128a78d603b\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 11:53:39.273093 master-0 kubenswrapper[3958]: E0319 11:53:39.273076 3958 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 19 11:53:39.273261 master-0 kubenswrapper[3958]: I0319 11:53:39.273107 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-6j2nj\" (UID: \"beb562de-402b-4d9f-b5ed-090b60847a95\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj" Mar 19 11:53:39.273261 master-0 kubenswrapper[3958]: E0319 11:53:39.273117 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics podName:b0f5939c-48b1-4d6c-9712-9128a78d603b nodeName:}" failed. No retries permitted until 2026-03-19 11:53:41.273105527 +0000 UTC m=+151.946826709 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-pr7gk" (UID: "b0f5939c-48b1-4d6c-9712-9128a78d603b") : secret "marketplace-operator-metrics" not found Mar 19 11:53:39.273261 master-0 kubenswrapper[3958]: E0319 11:53:39.273166 3958 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 19 11:53:39.273261 master-0 kubenswrapper[3958]: E0319 11:53:39.273193 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert podName:beb562de-402b-4d9f-b5ed-090b60847a95 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:41.273183808 +0000 UTC m=+151.946904980 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-6j2nj" (UID: "beb562de-402b-4d9f-b5ed-090b60847a95") : secret "package-server-manager-serving-cert" not found Mar 19 11:53:39.273261 master-0 kubenswrapper[3958]: I0319 11:53:39.273232 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:39.273629 master-0 kubenswrapper[3958]: I0319 11:53:39.273271 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert\") pod \"olm-operator-5c9796789-8cldl\" (UID: \"87a3f546-e1c1-42a1-b80e-d45b6d5c0a04\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl" Mar 19 11:53:39.273629 master-0 kubenswrapper[3958]: I0319 11:53:39.273305 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls\") pod \"dns-operator-9c5679d8f-z6kvm\" (UID: \"ab54833d-e57b-479d-b171-68155f6566f1\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-z6kvm" Mar 19 11:53:39.273629 master-0 kubenswrapper[3958]: E0319 11:53:39.273458 3958 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 19 11:53:39.273629 master-0 kubenswrapper[3958]: I0319 11:53:39.273491 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw" Mar 19 11:53:39.273629 master-0 kubenswrapper[3958]: E0319 11:53:39.273512 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert podName:87a3f546-e1c1-42a1-b80e-d45b6d5c0a04 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:41.273494758 +0000 UTC m=+151.947215940 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert") pod "olm-operator-5c9796789-8cldl" (UID: "87a3f546-e1c1-42a1-b80e-d45b6d5c0a04") : secret "olm-operator-serving-cert" not found Mar 19 11:53:39.273629 master-0 kubenswrapper[3958]: I0319 11:53:39.273542 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:39.273629 master-0 kubenswrapper[3958]: E0319 11:53:39.273561 3958 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 19 11:53:39.273629 master-0 kubenswrapper[3958]: E0319 11:53:39.273582 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls podName:ab54833d-e57b-479d-b171-68155f6566f1 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:41.273575661 +0000 UTC m=+151.947296843 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls") pod "dns-operator-9c5679d8f-z6kvm" (UID: "ab54833d-e57b-479d-b171-68155f6566f1") : secret "metrics-tls" not found Mar 19 11:53:39.274087 master-0 kubenswrapper[3958]: E0319 11:53:39.273660 3958 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Mar 19 11:53:39.274087 master-0 kubenswrapper[3958]: E0319 11:53:39.273691 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls podName:d3541cbe-3be0-40d3-89d2-b5937b6a8f47 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:41.273682244 +0000 UTC m=+151.947403426 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls") pod "machine-config-operator-84d549f6d5-lswqw" (UID: "d3541cbe-3be0-40d3-89d2-b5937b6a8f47") : secret "mco-proxy-tls" not found Mar 19 11:53:39.274087 master-0 kubenswrapper[3958]: E0319 11:53:39.273708 3958 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 19 11:53:39.274087 master-0 kubenswrapper[3958]: E0319 11:53:39.273751 3958 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 19 11:53:39.274087 master-0 kubenswrapper[3958]: E0319 11:53:39.273770 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls podName:63c12a89-1b49-4eba-8f5a-551b10d2246b nodeName:}" failed. No retries permitted until 2026-03-19 11:53:41.273750016 +0000 UTC m=+151.947471238 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-zfsqt" (UID: "63c12a89-1b49-4eba-8f5a-551b10d2246b") : secret "node-tuning-operator-tls" not found Mar 19 11:53:39.274087 master-0 kubenswrapper[3958]: E0319 11:53:39.273805 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls podName:19de6601-10d4-4112-a21f-0398d2b160d1 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:41.273782287 +0000 UTC m=+151.947503689 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-ftml6" (UID: "19de6601-10d4-4112-a21f-0398d2b160d1") : secret "cluster-baremetal-operator-tls" not found Mar 19 11:53:39.375264 master-0 kubenswrapper[3958]: I0319 11:53:39.375179 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" Mar 19 11:53:39.375264 master-0 kubenswrapper[3958]: I0319 11:53:39.375258 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-fz8cg\" (UID: \"806a4c30-7b93-4430-86da-f9e1f4f2d206\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg" Mar 19 11:53:39.375489 master-0 kubenswrapper[3958]: E0319 11:53:39.375415 3958 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 19 11:53:39.375582 master-0 kubenswrapper[3958]: I0319 11:53:39.375533 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6" Mar 19 11:53:39.375668 master-0 kubenswrapper[3958]: I0319 11:53:39.375615 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert\") pod \"catalog-operator-68f85b4d6c-2trz4\" (UID: \"bdcdb23d-ef1f-45e2-b9ac-7abf707637b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4" Mar 19 11:53:39.375699 master-0 kubenswrapper[3958]: E0319 11:53:39.375683 3958 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 19 11:53:39.375830 master-0 kubenswrapper[3958]: E0319 11:53:39.375761 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs podName:806a4c30-7b93-4430-86da-f9e1f4f2d206 nodeName:}" failed. 
No retries permitted until 2026-03-19 11:53:41.375736991 +0000 UTC m=+152.049458173 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-fz8cg" (UID: "806a4c30-7b93-4430-86da-f9e1f4f2d206") : secret "multus-admission-controller-secret" not found Mar 19 11:53:39.376058 master-0 kubenswrapper[3958]: E0319 11:53:39.375820 3958 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 19 11:53:39.376058 master-0 kubenswrapper[3958]: E0319 11:53:39.375857 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls podName:b80027fd-7b39-477a-a337-ff9bb08e7eeb nodeName:}" failed. No retries permitted until 2026-03-19 11:53:41.375846484 +0000 UTC m=+152.049567656 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls") pod "ingress-operator-66b84d69b-btppx" (UID: "b80027fd-7b39-477a-a337-ff9bb08e7eeb") : secret "metrics-tls" not found Mar 19 11:53:39.376058 master-0 kubenswrapper[3958]: E0319 11:53:39.375869 3958 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 19 11:53:39.376058 master-0 kubenswrapper[3958]: E0319 11:53:39.375940 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls podName:82b98dca-59f9-42be-94ca-4a2a2b6fea0f nodeName:}" failed. No retries permitted until 2026-03-19 11:53:41.375907826 +0000 UTC m=+152.049629178 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-g6sn6" (UID: "82b98dca-59f9-42be-94ca-4a2a2b6fea0f") : secret "image-registry-operator-tls" not found Mar 19 11:53:39.376058 master-0 kubenswrapper[3958]: E0319 11:53:39.375971 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert podName:bdcdb23d-ef1f-45e2-b9ac-7abf707637b6 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:41.375958368 +0000 UTC m=+152.049679770 (durationBeforeRetry 2s). 
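Every failed mount in these nestedpendingoperations entries carries a durationBeforeRetry: 1s on the first round at 11:53:38, 2s on this round, and further down the log 4s and 1m4s for volumes that keep failing. That is a per-volume exponential backoff whose delay doubles on each consecutive failure up to a cap. A sketch of the schedule implied by the logged values (the 1s start is read off the log; the 2m2s cap is an assumption, not taken from kubelet source):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 1 * time.Second // first durationBeforeRetry observed in the log
	// Assumed upper bound; the log itself only shows delays up to 1m4s.
	const maxDelay = 2*time.Minute + 2*time.Second
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d: durationBeforeRetry %v\n", attempt, delay)
		delay *= 2 // double after each consecutive failure
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```

The 1m4s retry for the network-metrics-daemon metrics-certs volume near the end of this section is the same schedule after several consecutive failures (2^6 seconds); the backoff resets as soon as a mount succeeds.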
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert") pod "catalog-operator-68f85b4d6c-2trz4" (UID: "bdcdb23d-ef1f-45e2-b9ac-7abf707637b6") : secret "catalog-operator-serving-cert" not found Mar 19 11:53:39.495283 master-0 kubenswrapper[3958]: I0319 11:53:39.495221 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x" event={"ID":"f08c5930-44f0-48e4-80dd-2563f2733b2f","Type":"ContainerStarted","Data":"cafdcda3b6318eaf62d6677cace1d3a0a0dbcf1889d817ce08bcc768e4b05288"} Mar 19 11:53:39.499808 master-0 kubenswrapper[3958]: E0319 11:53:39.496826 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e\\\"\"" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x" podUID="f08c5930-44f0-48e4-80dd-2563f2733b2f" Mar 19 11:53:39.499808 master-0 kubenswrapper[3958]: I0319 11:53:39.498640 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" event={"ID":"9702fc8c-4fe0-413b-b2d4-db23021d42b8","Type":"ContainerStarted","Data":"657e67ca992e83dd97b428ec2664479ed04815d8dada9aa63b0bd9e585d0e3d7"} Mar 19 11:53:39.499896 master-0 kubenswrapper[3958]: I0319 11:53:39.499882 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" event={"ID":"d3017b5e-178e-49de-89d2-817a18398203","Type":"ContainerStarted","Data":"24de2a964d2fa28c5bff828df5f742d99916541dc1152f4dcdf6f4231784eba1"} Mar 19 11:53:39.500915 master-0 kubenswrapper[3958]: I0319 11:53:39.500896 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fjn4b" event={"ID":"2151eb84-177e-459c-be71-f48465323ac2","Type":"ContainerStarted","Data":"16106b77f2e7c1585811668327c4be2d10fe7576f2ed79d2b198fca95be86d2c"} Mar 19 11:53:39.502201 master-0 kubenswrapper[3958]: I0319 11:53:39.502171 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8" event={"ID":"9ed2dbd1-aec4-4009-917a-933533912ab5","Type":"ContainerStarted","Data":"7da5b8963c0c07bf615297cea6af913ce19795e600e076c4d580e948922fa865"} Mar 19 11:53:39.503102 master-0 kubenswrapper[3958]: I0319 11:53:39.503076 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cg9pq" event={"ID":"c2dbd8b3-0e02-4747-a166-80aa6a94b060","Type":"ContainerStarted","Data":"1fc61313284938071ce89eb1211d46af435fab3cccf3e32e3b4afcbf9419655a"} Mar 19 11:53:39.504452 master-0 kubenswrapper[3958]: I0319 11:53:39.504358 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-2chdm" event={"ID":"a7747954-a222-4809-8656-818203b55ee8","Type":"ContainerStarted","Data":"d6af7e6099bbf70f75032e24a3ecb1fbb8bf546e42f6f7c74e6f9a42396249e8"} Mar 19 11:53:39.506136 master-0 kubenswrapper[3958]: I0319 11:53:39.506111 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-ptqdh" 
event={"ID":"06df1b1b-154e-46f9-aee0-79a137c6c928","Type":"ContainerStarted","Data":"63407ab3b9286932d0ec228766542c2a928958a160c225ce1f7624b3e7a02447"} Mar 19 11:53:39.507654 master-0 kubenswrapper[3958]: I0319 11:53:39.507607 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt" event={"ID":"661b8957-a890-4032-9e57-45e2e0b35249","Type":"ContainerStarted","Data":"58d1369a13582afcb1d55c539b0bf53f7dd57cd88bda12b4a87bc5a6b8e84cbc"} Mar 19 11:53:39.509736 master-0 kubenswrapper[3958]: E0319 11:53:39.509251 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263\\\"\"" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt" podUID="661b8957-a890-4032-9e57-45e2e0b35249" Mar 19 11:53:39.509736 master-0 kubenswrapper[3958]: I0319 11:53:39.509262 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-6wzws" event={"ID":"0f97d998-530c-4d9d-a030-ca1d9d2d4490","Type":"ContainerStarted","Data":"84ed2f0d88ece07075010bba0c167b7f10255b8043408ff95f1958cee576a4a0"} Mar 19 11:53:39.510592 master-0 kubenswrapper[3958]: I0319 11:53:39.510522 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-276t5" event={"ID":"06f67c28-34fd-4356-92f0-edd0986ad34e","Type":"ContainerStarted","Data":"732ea3fa30562cbb548cbd63878cf98bf16844c3fd2ba6668a55873990319c2d"} Mar 19 11:53:39.530724 master-0 kubenswrapper[3958]: I0319 11:53:39.530615 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg" event={"ID":"1089ea24-add9-482e-9276-e6ded12052d7","Type":"ContainerStarted","Data":"a04e94059c93f3fb95feb69e0b122c65aebac1f390cdd0cf514b18a508325ef8"} Mar 19 11:53:39.530724 master-0 kubenswrapper[3958]: I0319 11:53:39.530663 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg" event={"ID":"1089ea24-add9-482e-9276-e6ded12052d7","Type":"ContainerStarted","Data":"89df1c468dcab6a092003dfdb9054af97d313c0e8ba73ec68b30b7001eab90ba"} Mar 19 11:53:39.550078 master-0 kubenswrapper[3958]: I0319 11:53:39.549997 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" event={"ID":"aef8e03f-0363-4e13-b7ca-4fa871d77c62","Type":"ContainerStarted","Data":"37b898c3ae24210a5aa4f86ab00e075925f0f6e4fde94632405ba19b0f9e0d1d"} Mar 19 11:53:39.551210 master-0 kubenswrapper[3958]: I0319 11:53:39.550315 3958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg" podStartSLOduration=114.550299018 podStartE2EDuration="1m54.550299018s" podCreationTimestamp="2026-03-19 11:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 11:53:39.545347273 +0000 UTC m=+150.219068475" watchObservedRunningTime="2026-03-19 11:53:39.550299018 +0000 UTC m=+150.224020200" Mar 19 11:53:39.554519 master-0 kubenswrapper[3958]: I0319 11:53:39.554338 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk" event={"ID":"d9ab6ec4-eec9-4d27-8b43-2aaf954f098f","Type":"ContainerStarted","Data":"2ca9e696adafe66b3ba3814f26ea9bb916ca5c1804785c0e742201ad82ee9c18"} Mar 19 11:53:40.609821 master-0 kubenswrapper[3958]: E0319 11:53:40.597073 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263\\\"\"" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt" podUID="661b8957-a890-4032-9e57-45e2e0b35249" Mar 19 11:53:40.609821 master-0 kubenswrapper[3958]: E0319 11:53:40.597398 3958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e\\\"\"" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x" podUID="f08c5930-44f0-48e4-80dd-2563f2733b2f" Mar 19 11:53:41.197701 master-0 kubenswrapper[3958]: I0319 11:53:41.197367 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs\") pod \"network-metrics-daemon-6t6sn\" (UID: \"398bcaca-1bea-4633-a78f-717e3d015ddd\") " pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:53:41.197953 master-0 kubenswrapper[3958]: E0319 11:53:41.197554 3958 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 19 11:53:41.197953 master-0 kubenswrapper[3958]: E0319 11:53:41.197825 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs podName:398bcaca-1bea-4633-a78f-717e3d015ddd nodeName:}" failed. No retries permitted until 2026-03-19 11:54:45.197806878 +0000 UTC m=+215.871528050 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs") pod "network-metrics-daemon-6t6sn" (UID: "398bcaca-1bea-4633-a78f-717e3d015ddd") : secret "metrics-daemon-secret" not found Mar 19 11:53:41.298346 master-0 kubenswrapper[3958]: I0319 11:53:41.298270 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:41.298643 master-0 kubenswrapper[3958]: E0319 11:53:41.298482 3958 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 19 11:53:41.298643 master-0 kubenswrapper[3958]: I0319 11:53:41.298583 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-pr7gk\" (UID: \"b0f5939c-48b1-4d6c-9712-9128a78d603b\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 11:53:41.298643 master-0 kubenswrapper[3958]: I0319 11:53:41.298640 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-6j2nj\" (UID: \"beb562de-402b-4d9f-b5ed-090b60847a95\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj" Mar 19 11:53:41.298977 master-0 kubenswrapper[3958]: E0319 11:53:41.298706 3958 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 19 11:53:41.298977 master-0 kubenswrapper[3958]: E0319 11:53:41.298735 3958 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 19 11:53:41.298977 master-0 kubenswrapper[3958]: E0319 11:53:41.298842 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert podName:63c12a89-1b49-4eba-8f5a-551b10d2246b nodeName:}" failed. No retries permitted until 2026-03-19 11:53:45.298694768 +0000 UTC m=+155.972415990 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-zfsqt" (UID: "63c12a89-1b49-4eba-8f5a-551b10d2246b") : secret "performance-addon-operator-webhook-cert" not found Mar 19 11:53:41.299095 master-0 kubenswrapper[3958]: I0319 11:53:41.299017 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:41.299095 master-0 kubenswrapper[3958]: I0319 11:53:41.299071 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert\") pod \"olm-operator-5c9796789-8cldl\" (UID: \"87a3f546-e1c1-42a1-b80e-d45b6d5c0a04\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl" Mar 19 11:53:41.299168 master-0 kubenswrapper[3958]: I0319 11:53:41.299125 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls\") pod \"dns-operator-9c5679d8f-z6kvm\" (UID: \"ab54833d-e57b-479d-b171-68155f6566f1\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-z6kvm" Mar 19 11:53:41.299168 master-0 kubenswrapper[3958]: I0319 11:53:41.299151 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw" Mar 19 11:53:41.299345 master-0 kubenswrapper[3958]: E0319 11:53:41.299221 3958 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Mar 19 11:53:41.299345 master-0 kubenswrapper[3958]: E0319 11:53:41.299235 3958 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 19 11:53:41.299345 master-0 kubenswrapper[3958]: E0319 11:53:41.299257 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls podName:d3541cbe-3be0-40d3-89d2-b5937b6a8f47 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:45.299242265 +0000 UTC m=+155.972963657 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls") pod "machine-config-operator-84d549f6d5-lswqw" (UID: "d3541cbe-3be0-40d3-89d2-b5937b6a8f47") : secret "mco-proxy-tls" not found Mar 19 11:53:41.299345 master-0 kubenswrapper[3958]: E0319 11:53:41.299279 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert podName:beb562de-402b-4d9f-b5ed-090b60847a95 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:45.299270806 +0000 UTC m=+155.972992228 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-6j2nj" (UID: "beb562de-402b-4d9f-b5ed-090b60847a95") : secret "package-server-manager-serving-cert" not found Mar 19 11:53:41.299345 master-0 kubenswrapper[3958]: E0319 11:53:41.299296 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics podName:b0f5939c-48b1-4d6c-9712-9128a78d603b nodeName:}" failed. No retries permitted until 2026-03-19 11:53:45.299287067 +0000 UTC m=+155.973008499 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-pr7gk" (UID: "b0f5939c-48b1-4d6c-9712-9128a78d603b") : secret "marketplace-operator-metrics" not found Mar 19 11:53:41.299345 master-0 kubenswrapper[3958]: I0319 11:53:41.299327 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:41.299567 master-0 kubenswrapper[3958]: E0319 11:53:41.299397 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls podName:19de6601-10d4-4112-a21f-0398d2b160d1 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:45.299371729 +0000 UTC m=+155.973092951 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-ftml6" (UID: "19de6601-10d4-4112-a21f-0398d2b160d1") : secret "cluster-baremetal-operator-tls" not found Mar 19 11:53:41.299636 master-0 kubenswrapper[3958]: E0319 11:53:41.299583 3958 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 19 11:53:41.299636 master-0 kubenswrapper[3958]: E0319 11:53:41.299629 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls podName:ab54833d-e57b-479d-b171-68155f6566f1 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:45.299611317 +0000 UTC m=+155.973332549 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls") pod "dns-operator-9c5679d8f-z6kvm" (UID: "ab54833d-e57b-479d-b171-68155f6566f1") : secret "metrics-tls" not found Mar 19 11:53:41.299729 master-0 kubenswrapper[3958]: I0319 11:53:41.299667 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:41.299729 master-0 kubenswrapper[3958]: E0319 11:53:41.299654 3958 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 19 11:53:41.299729 master-0 kubenswrapper[3958]: I0319 11:53:41.299719 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-92c5d\" (UID: \"7241bf11-192e-47db-9d80-2324938ed34c\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d" Mar 19 11:53:41.299928 master-0 kubenswrapper[3958]: E0319 11:53:41.299905 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert podName:87a3f546-e1c1-42a1-b80e-d45b6d5c0a04 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:45.299826714 +0000 UTC m=+155.973547996 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert") pod "olm-operator-5c9796789-8cldl" (UID: "87a3f546-e1c1-42a1-b80e-d45b6d5c0a04") : secret "olm-operator-serving-cert" not found Mar 19 11:53:41.299983 master-0 kubenswrapper[3958]: E0319 11:53:41.299944 3958 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 19 11:53:41.300028 master-0 kubenswrapper[3958]: E0319 11:53:41.300012 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls podName:7241bf11-192e-47db-9d80-2324938ed34c nodeName:}" failed. No retries permitted until 2026-03-19 11:53:45.299989879 +0000 UTC m=+155.973711201 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-92c5d" (UID: "7241bf11-192e-47db-9d80-2324938ed34c") : secret "cluster-monitoring-operator-tls" not found Mar 19 11:53:41.300366 master-0 kubenswrapper[3958]: E0319 11:53:41.300318 3958 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 19 11:53:41.300416 master-0 kubenswrapper[3958]: E0319 11:53:41.300353 3958 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 19 11:53:41.300544 master-0 kubenswrapper[3958]: E0319 11:53:41.300399 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert podName:19de6601-10d4-4112-a21f-0398d2b160d1 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:45.30037806 +0000 UTC m=+155.974099452 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert") pod "cluster-baremetal-operator-6f69995874-ftml6" (UID: "19de6601-10d4-4112-a21f-0398d2b160d1") : secret "cluster-baremetal-webhook-server-cert" not found Mar 19 11:53:41.301178 master-0 kubenswrapper[3958]: E0319 11:53:41.300622 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls podName:63c12a89-1b49-4eba-8f5a-551b10d2246b nodeName:}" failed. No retries permitted until 2026-03-19 11:53:45.300551756 +0000 UTC m=+155.974273088 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-zfsqt" (UID: "63c12a89-1b49-4eba-8f5a-551b10d2246b") : secret "node-tuning-operator-tls" not found Mar 19 11:53:41.401259 master-0 kubenswrapper[3958]: I0319 11:53:41.401170 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert\") pod \"catalog-operator-68f85b4d6c-2trz4\" (UID: \"bdcdb23d-ef1f-45e2-b9ac-7abf707637b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4" Mar 19 11:53:41.401490 master-0 kubenswrapper[3958]: I0319 11:53:41.401399 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" Mar 19 11:53:41.401490 master-0 kubenswrapper[3958]: I0319 11:53:41.401435 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-fz8cg\" (UID: \"806a4c30-7b93-4430-86da-f9e1f4f2d206\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg" Mar 19 11:53:41.401490 master-0 kubenswrapper[3958]: I0319 11:53:41.401467 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6" Mar 19 11:53:41.401647 master-0 kubenswrapper[3958]: E0319 11:53:41.401614 3958 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 19 11:53:41.401702 master-0 kubenswrapper[3958]: E0319 11:53:41.401686 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert podName:bdcdb23d-ef1f-45e2-b9ac-7abf707637b6 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:45.401660373 +0000 UTC m=+156.075381555 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert") pod "catalog-operator-68f85b4d6c-2trz4" (UID: "bdcdb23d-ef1f-45e2-b9ac-7abf707637b6") : secret "catalog-operator-serving-cert" not found Mar 19 11:53:41.401772 master-0 kubenswrapper[3958]: E0319 11:53:41.401749 3958 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 19 11:53:41.401885 master-0 kubenswrapper[3958]: E0319 11:53:41.401781 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs podName:806a4c30-7b93-4430-86da-f9e1f4f2d206 nodeName:}" failed. 
No retries permitted until 2026-03-19 11:53:45.401772736 +0000 UTC m=+156.075493928 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-fz8cg" (UID: "806a4c30-7b93-4430-86da-f9e1f4f2d206") : secret "multus-admission-controller-secret" not found Mar 19 11:53:41.401959 master-0 kubenswrapper[3958]: E0319 11:53:41.401925 3958 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 19 11:53:41.402139 master-0 kubenswrapper[3958]: E0319 11:53:41.401924 3958 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 19 11:53:41.402200 master-0 kubenswrapper[3958]: E0319 11:53:41.401961 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls podName:82b98dca-59f9-42be-94ca-4a2a2b6fea0f nodeName:}" failed. No retries permitted until 2026-03-19 11:53:45.401951972 +0000 UTC m=+156.075673154 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-g6sn6" (UID: "82b98dca-59f9-42be-94ca-4a2a2b6fea0f") : secret "image-registry-operator-tls" not found Mar 19 11:53:41.402319 master-0 kubenswrapper[3958]: E0319 11:53:41.402238 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls podName:b80027fd-7b39-477a-a337-ff9bb08e7eeb nodeName:}" failed. No retries permitted until 2026-03-19 11:53:45.402184329 +0000 UTC m=+156.075905681 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls") pod "ingress-operator-66b84d69b-btppx" (UID: "b80027fd-7b39-477a-a337-ff9bb08e7eeb") : secret "metrics-tls" not found Mar 19 11:53:45.348643 master-0 kubenswrapper[3958]: I0319 11:53:45.348551 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:45.348643 master-0 kubenswrapper[3958]: I0319 11:53:45.348625 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-pr7gk\" (UID: \"b0f5939c-48b1-4d6c-9712-9128a78d603b\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 11:53:45.348643 master-0 kubenswrapper[3958]: I0319 11:53:45.348647 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-6j2nj\" (UID: \"beb562de-402b-4d9f-b5ed-090b60847a95\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj" Mar 19 11:53:45.349721 master-0 kubenswrapper[3958]: I0319 11:53:45.348676 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:45.349721 master-0 kubenswrapper[3958]: I0319 11:53:45.348695 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert\") pod \"olm-operator-5c9796789-8cldl\" (UID: \"87a3f546-e1c1-42a1-b80e-d45b6d5c0a04\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl" Mar 19 11:53:45.349721 master-0 kubenswrapper[3958]: I0319 11:53:45.348719 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls\") pod \"dns-operator-9c5679d8f-z6kvm\" (UID: \"ab54833d-e57b-479d-b171-68155f6566f1\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-z6kvm" Mar 19 11:53:45.349721 master-0 kubenswrapper[3958]: I0319 11:53:45.348736 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw" Mar 19 11:53:45.349721 master-0 kubenswrapper[3958]: I0319 11:53:45.348752 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:45.349721 master-0 kubenswrapper[3958]: I0319 11:53:45.348768 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:45.349721 master-0 kubenswrapper[3958]: I0319 11:53:45.348785 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-92c5d\" (UID: \"7241bf11-192e-47db-9d80-2324938ed34c\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d" Mar 19 11:53:45.349721 master-0 kubenswrapper[3958]: E0319 11:53:45.348953 3958 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 19 11:53:45.349721 master-0 kubenswrapper[3958]: E0319 11:53:45.349005 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls podName:7241bf11-192e-47db-9d80-2324938ed34c nodeName:}" failed. No retries permitted until 2026-03-19 11:53:53.348991304 +0000 UTC m=+164.022712486 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-92c5d" (UID: "7241bf11-192e-47db-9d80-2324938ed34c") : secret "cluster-monitoring-operator-tls" not found Mar 19 11:53:45.349721 master-0 kubenswrapper[3958]: E0319 11:53:45.349356 3958 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 19 11:53:45.349721 master-0 kubenswrapper[3958]: E0319 11:53:45.349378 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert podName:63c12a89-1b49-4eba-8f5a-551b10d2246b nodeName:}" failed. No retries permitted until 2026-03-19 11:53:53.349371396 +0000 UTC m=+164.023092578 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-zfsqt" (UID: "63c12a89-1b49-4eba-8f5a-551b10d2246b") : secret "performance-addon-operator-webhook-cert" not found Mar 19 11:53:45.349721 master-0 kubenswrapper[3958]: E0319 11:53:45.349411 3958 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 19 11:53:45.349721 master-0 kubenswrapper[3958]: E0319 11:53:45.349429 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics podName:b0f5939c-48b1-4d6c-9712-9128a78d603b nodeName:}" failed. No retries permitted until 2026-03-19 11:53:53.349422377 +0000 UTC m=+164.023143559 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-pr7gk" (UID: "b0f5939c-48b1-4d6c-9712-9128a78d603b") : secret "marketplace-operator-metrics" not found Mar 19 11:53:45.349721 master-0 kubenswrapper[3958]: E0319 11:53:45.349460 3958 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 19 11:53:45.350360 master-0 kubenswrapper[3958]: E0319 11:53:45.349478 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert podName:beb562de-402b-4d9f-b5ed-090b60847a95 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:53.349470919 +0000 UTC m=+164.023192101 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-6j2nj" (UID: "beb562de-402b-4d9f-b5ed-090b60847a95") : secret "package-server-manager-serving-cert" not found Mar 19 11:53:45.350360 master-0 kubenswrapper[3958]: E0319 11:53:45.349507 3958 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 19 11:53:45.350360 master-0 kubenswrapper[3958]: E0319 11:53:45.349525 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls podName:19de6601-10d4-4112-a21f-0398d2b160d1 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:53.34951998 +0000 UTC m=+164.023241162 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-ftml6" (UID: "19de6601-10d4-4112-a21f-0398d2b160d1") : secret "cluster-baremetal-operator-tls" not found Mar 19 11:53:45.350360 master-0 kubenswrapper[3958]: E0319 11:53:45.349554 3958 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 19 11:53:45.350360 master-0 kubenswrapper[3958]: E0319 11:53:45.349569 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert podName:87a3f546-e1c1-42a1-b80e-d45b6d5c0a04 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:53.349563682 +0000 UTC m=+164.023284864 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert") pod "olm-operator-5c9796789-8cldl" (UID: "87a3f546-e1c1-42a1-b80e-d45b6d5c0a04") : secret "olm-operator-serving-cert" not found Mar 19 11:53:45.350360 master-0 kubenswrapper[3958]: E0319 11:53:45.349596 3958 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 19 11:53:45.350360 master-0 kubenswrapper[3958]: E0319 11:53:45.349613 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls podName:ab54833d-e57b-479d-b171-68155f6566f1 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:53.349606613 +0000 UTC m=+164.023327795 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls") pod "dns-operator-9c5679d8f-z6kvm" (UID: "ab54833d-e57b-479d-b171-68155f6566f1") : secret "metrics-tls" not found Mar 19 11:53:45.350360 master-0 kubenswrapper[3958]: E0319 11:53:45.349643 3958 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Mar 19 11:53:45.350360 master-0 kubenswrapper[3958]: E0319 11:53:45.349661 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls podName:d3541cbe-3be0-40d3-89d2-b5937b6a8f47 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:53.349656145 +0000 UTC m=+164.023377327 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls") pod "machine-config-operator-84d549f6d5-lswqw" (UID: "d3541cbe-3be0-40d3-89d2-b5937b6a8f47") : secret "mco-proxy-tls" not found Mar 19 11:53:45.350360 master-0 kubenswrapper[3958]: E0319 11:53:45.349687 3958 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 19 11:53:45.350360 master-0 kubenswrapper[3958]: E0319 11:53:45.349704 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls podName:63c12a89-1b49-4eba-8f5a-551b10d2246b nodeName:}" failed. No retries permitted until 2026-03-19 11:53:53.349699326 +0000 UTC m=+164.023420508 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-zfsqt" (UID: "63c12a89-1b49-4eba-8f5a-551b10d2246b") : secret "node-tuning-operator-tls" not found Mar 19 11:53:45.350360 master-0 kubenswrapper[3958]: E0319 11:53:45.349732 3958 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 19 11:53:45.350360 master-0 kubenswrapper[3958]: E0319 11:53:45.349747 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert podName:19de6601-10d4-4112-a21f-0398d2b160d1 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:53.349742238 +0000 UTC m=+164.023463420 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert") pod "cluster-baremetal-operator-6f69995874-ftml6" (UID: "19de6601-10d4-4112-a21f-0398d2b160d1") : secret "cluster-baremetal-webhook-server-cert" not found Mar 19 11:53:45.450062 master-0 kubenswrapper[3958]: I0319 11:53:45.450017 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" Mar 19 11:53:45.450062 master-0 kubenswrapper[3958]: I0319 11:53:45.450065 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-fz8cg\" (UID: \"806a4c30-7b93-4430-86da-f9e1f4f2d206\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg" Mar 19 11:53:45.450342 master-0 kubenswrapper[3958]: E0319 11:53:45.450195 3958 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 19 11:53:45.450342 master-0 kubenswrapper[3958]: I0319 11:53:45.450259 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6" Mar 19 11:53:45.450342 master-0 kubenswrapper[3958]: E0319 11:53:45.450281 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls podName:b80027fd-7b39-477a-a337-ff9bb08e7eeb nodeName:}" failed. No retries permitted until 2026-03-19 11:53:53.450259976 +0000 UTC m=+164.123981388 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls") pod "ingress-operator-66b84d69b-btppx" (UID: "b80027fd-7b39-477a-a337-ff9bb08e7eeb") : secret "metrics-tls" not found Mar 19 11:53:45.450342 master-0 kubenswrapper[3958]: I0319 11:53:45.450304 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert\") pod \"catalog-operator-68f85b4d6c-2trz4\" (UID: \"bdcdb23d-ef1f-45e2-b9ac-7abf707637b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4" Mar 19 11:53:45.450523 master-0 kubenswrapper[3958]: E0319 11:53:45.450413 3958 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 19 11:53:45.450523 master-0 kubenswrapper[3958]: E0319 11:53:45.450463 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls podName:82b98dca-59f9-42be-94ca-4a2a2b6fea0f nodeName:}" failed. No retries permitted until 2026-03-19 11:53:53.450447752 +0000 UTC m=+164.124169154 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-g6sn6" (UID: "82b98dca-59f9-42be-94ca-4a2a2b6fea0f") : secret "image-registry-operator-tls" not found Mar 19 11:53:45.450523 master-0 kubenswrapper[3958]: E0319 11:53:45.450413 3958 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 19 11:53:45.450523 master-0 kubenswrapper[3958]: E0319 11:53:45.450494 3958 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 19 11:53:45.450523 master-0 kubenswrapper[3958]: E0319 11:53:45.450501 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs podName:806a4c30-7b93-4430-86da-f9e1f4f2d206 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:53.450493223 +0000 UTC m=+164.124214635 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-fz8cg" (UID: "806a4c30-7b93-4430-86da-f9e1f4f2d206") : secret "multus-admission-controller-secret" not found Mar 19 11:53:45.450660 master-0 kubenswrapper[3958]: E0319 11:53:45.450535 3958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert podName:bdcdb23d-ef1f-45e2-b9ac-7abf707637b6 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:53.450525644 +0000 UTC m=+164.124246826 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert") pod "catalog-operator-68f85b4d6c-2trz4" (UID: "bdcdb23d-ef1f-45e2-b9ac-7abf707637b6") : secret "catalog-operator-serving-cert" not found Mar 19 11:53:46.363075 master-0 kubenswrapper[3958]: I0319 11:53:46.362953 3958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:48.614823 master-0 kubenswrapper[3958]: I0319 11:53:48.614286 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-2chdm" event={"ID":"a7747954-a222-4809-8656-818203b55ee8","Type":"ContainerStarted","Data":"d95294b0488d96a75c9e573656fe717c4295cca501788c5d65f233cbaba4be9d"} Mar 19 11:53:48.616168 master-0 kubenswrapper[3958]: I0319 11:53:48.615750 3958 generic.go:334] "Generic (PLEG): container finished" podID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerID="583df0d35b75cdd42a8c5d73920d4fc8b3684739b4fbdc9aa3860b1cc1087eeb" exitCode=0 Mar 19 11:53:48.616168 master-0 kubenswrapper[3958]: I0319 11:53:48.615784 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" event={"ID":"aef8e03f-0363-4e13-b7ca-4fa871d77c62","Type":"ContainerDied","Data":"583df0d35b75cdd42a8c5d73920d4fc8b3684739b4fbdc9aa3860b1cc1087eeb"} Mar 19 11:53:48.618226 master-0 kubenswrapper[3958]: I0319 11:53:48.618202 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk" event={"ID":"d9ab6ec4-eec9-4d27-8b43-2aaf954f098f","Type":"ContainerStarted","Data":"9dbaaa2ce519ab256717766bb8d971f864766afcc411753d09c087dd190cf903"} Mar 19 11:53:48.624231 master-0 kubenswrapper[3958]: I0319 11:53:48.624172 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" event={"ID":"d3017b5e-178e-49de-89d2-817a18398203","Type":"ContainerStarted","Data":"ec99e0001708bd8c36619c411325f2d4bdab0ecd7770deeae64fffd8bdf90881"} Mar 19 11:53:48.637809 master-0 kubenswrapper[3958]: I0319 11:53:48.636066 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-6wzws" event={"ID":"0f97d998-530c-4d9d-a030-ca1d9d2d4490","Type":"ContainerStarted","Data":"fe8804b9f205d5f40aba452ae8167e7ca2d2057bbd5a93b9e42d8ec2d88c8b07"} Mar 19 11:53:48.637809 master-0 kubenswrapper[3958]: I0319 11:53:48.637718 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8" event={"ID":"9ed2dbd1-aec4-4009-917a-933533912ab5","Type":"ContainerStarted","Data":"fc5332ce9b6e52d47f6ebb8b58ad2c77aaab22f1f6505f1913fed9b59e6a2824"} Mar 19 11:53:48.639988 master-0 kubenswrapper[3958]: I0319 11:53:48.639359 3958 generic.go:334] "Generic (PLEG): container finished" podID="c2dbd8b3-0e02-4747-a166-80aa6a94b060" containerID="58b2ce2cf7ade5f0117d8bf2599516b6d2046b5a2b2cff339f1186030594c1b8" exitCode=0 Mar 19 11:53:48.639988 master-0 kubenswrapper[3958]: I0319 11:53:48.639385 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cg9pq" 
event={"ID":"c2dbd8b3-0e02-4747-a166-80aa6a94b060","Type":"ContainerDied","Data":"58b2ce2cf7ade5f0117d8bf2599516b6d2046b5a2b2cff339f1186030594c1b8"} Mar 19 11:53:48.660936 master-0 kubenswrapper[3958]: I0319 11:53:48.660835 3958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-2chdm" podStartSLOduration=115.498403842 podStartE2EDuration="2m4.660764459s" podCreationTimestamp="2026-03-19 11:51:44 +0000 UTC" firstStartedPulling="2026-03-19 11:53:39.089079592 +0000 UTC m=+149.762800774" lastFinishedPulling="2026-03-19 11:53:48.251440209 +0000 UTC m=+158.925161391" observedRunningTime="2026-03-19 11:53:48.64291666 +0000 UTC m=+159.316637852" watchObservedRunningTime="2026-03-19 11:53:48.660764459 +0000 UTC m=+159.334485661" Mar 19 11:53:48.715271 master-0 kubenswrapper[3958]: I0319 11:53:48.715218 3958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-6wzws" podStartSLOduration=115.509390776 podStartE2EDuration="2m4.715203284s" podCreationTimestamp="2026-03-19 11:51:44 +0000 UTC" firstStartedPulling="2026-03-19 11:53:39.088936808 +0000 UTC m=+149.762657990" lastFinishedPulling="2026-03-19 11:53:48.294749316 +0000 UTC m=+158.968470498" observedRunningTime="2026-03-19 11:53:48.685519794 +0000 UTC m=+159.359240976" watchObservedRunningTime="2026-03-19 11:53:48.715203284 +0000 UTC m=+159.388924466" Mar 19 11:53:48.752981 master-0 kubenswrapper[3958]: I0319 11:53:48.751998 3958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8" podStartSLOduration=114.403345566 podStartE2EDuration="2m3.751980036s" podCreationTimestamp="2026-03-19 11:51:45 +0000 UTC" firstStartedPulling="2026-03-19 11:53:38.902729326 +0000 UTC m=+149.576450508" lastFinishedPulling="2026-03-19 11:53:48.251363786 +0000 UTC m=+158.925084978" observedRunningTime="2026-03-19 11:53:48.750903362 +0000 UTC m=+159.424624564" watchObservedRunningTime="2026-03-19 11:53:48.751980036 +0000 UTC m=+159.425701218" Mar 19 11:53:48.820835 master-0 kubenswrapper[3958]: I0319 11:53:48.820100 3958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk" podStartSLOduration=114.498683572 podStartE2EDuration="2m3.820057118s" podCreationTimestamp="2026-03-19 11:51:45 +0000 UTC" firstStartedPulling="2026-03-19 11:53:38.972202982 +0000 UTC m=+149.645924164" lastFinishedPulling="2026-03-19 11:53:48.293576528 +0000 UTC m=+158.967297710" observedRunningTime="2026-03-19 11:53:48.81852598 +0000 UTC m=+159.492247172" watchObservedRunningTime="2026-03-19 11:53:48.820057118 +0000 UTC m=+159.493778320" Mar 19 11:53:48.913824 master-0 kubenswrapper[3958]: I0319 11:53:48.912245 3958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" podStartSLOduration=115.581643259 podStartE2EDuration="2m4.912218544s" podCreationTimestamp="2026-03-19 11:51:44 +0000 UTC" firstStartedPulling="2026-03-19 11:53:38.972039927 +0000 UTC m=+149.645761109" lastFinishedPulling="2026-03-19 11:53:48.302615212 +0000 UTC m=+158.976336394" observedRunningTime="2026-03-19 11:53:48.911245644 +0000 UTC m=+159.584966826" watchObservedRunningTime="2026-03-19 11:53:48.912218544 +0000 
UTC m=+159.585939726" Mar 19 11:53:49.585176 master-0 kubenswrapper[3958]: I0319 11:53:49.585098 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654"] Mar 19 11:53:49.585764 master-0 kubenswrapper[3958]: I0319 11:53:49.585735 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" Mar 19 11:53:49.610505 master-0 kubenswrapper[3958]: I0319 11:53:49.610447 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654"] Mar 19 11:53:49.619785 master-0 kubenswrapper[3958]: I0319 11:53:49.619711 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2mdn\" (UniqueName: \"kubernetes.io/projected/944eac68-e72b-4aed-b5dc-d7d9703178a3-kube-api-access-m2mdn\") pod \"csi-snapshot-controller-64854d9cff-6m654\" (UID: \"944eac68-e72b-4aed-b5dc-d7d9703178a3\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" Mar 19 11:53:49.671213 master-0 kubenswrapper[3958]: I0319 11:53:49.671136 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-ptqdh" event={"ID":"06df1b1b-154e-46f9-aee0-79a137c6c928","Type":"ContainerStarted","Data":"136228bc884d9d84e6c34125e85b6f53a4eb9c869542bab1b85def5ce8ff08ff"} Mar 19 11:53:49.672573 master-0 kubenswrapper[3958]: I0319 11:53:49.672508 3958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-8487694857-99fgs"] Mar 19 11:53:49.673287 master-0 kubenswrapper[3958]: I0319 11:53:49.673246 3958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-8487694857-99fgs" Mar 19 11:53:49.675660 master-0 kubenswrapper[3958]: I0319 11:53:49.675608 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 19 11:53:49.676331 master-0 kubenswrapper[3958]: I0319 11:53:49.676299 3958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 19 11:53:49.677052 master-0 kubenswrapper[3958]: I0319 11:53:49.677013 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" event={"ID":"9702fc8c-4fe0-413b-b2d4-db23021d42b8","Type":"ContainerStarted","Data":"6c3d43a01987e52cadf8e3819b9c184c46b6535cb510d14c96117eed3c48a981"} Mar 19 11:53:49.687824 master-0 kubenswrapper[3958]: I0319 11:53:49.685019 3958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-8487694857-99fgs"] Mar 19 11:53:49.693981 master-0 kubenswrapper[3958]: I0319 11:53:49.693926 3958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fjn4b" event={"ID":"2151eb84-177e-459c-be71-f48465323ac2","Type":"ContainerStarted","Data":"76df0534cc0fd6a5cc55f7565b57a91fd38d7e12169a76c5133f215b1479d2db"} Mar 19 11:53:49.698584 master-0 kubenswrapper[3958]: I0319 11:53:49.697688 3958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-ptqdh" podStartSLOduration=115.554157259 podStartE2EDuration="2m4.697663095s" podCreationTimestamp="2026-03-19 11:51:45 +0000 UTC" firstStartedPulling="2026-03-19 11:53:39.150779225 +0000 UTC m=+149.824500407" lastFinishedPulling="2026-03-19 11:53:48.294285051 +0000 UTC m=+158.968006243" observedRunningTime="2026-03-19 11:53:49.696319372 +0000 UTC m=+160.370040554" watchObservedRunningTime="2026-03-19 11:53:49.697663095 +0000 UTC m=+160.371384287" Mar 19 11:53:49.718298 master-0 kubenswrapper[3958]: I0319 11:53:49.718223 3958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fjn4b" podStartSLOduration=115.382284925 podStartE2EDuration="2m4.718205549s" podCreationTimestamp="2026-03-19 11:51:45 +0000 UTC" firstStartedPulling="2026-03-19 11:53:38.951600666 +0000 UTC m=+149.625321848" lastFinishedPulling="2026-03-19 11:53:48.28752129 +0000 UTC m=+158.961242472" observedRunningTime="2026-03-19 11:53:49.717241898 +0000 UTC m=+160.390963080" watchObservedRunningTime="2026-03-19 11:53:49.718205549 +0000 UTC m=+160.391926731" Mar 19 11:53:49.723915 master-0 kubenswrapper[3958]: I0319 11:53:49.720666 3958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86r6z\" (UniqueName: \"kubernetes.io/projected/d975e831-7348-41b9-9622-f4a503674c38-kube-api-access-86r6z\") pod \"migrator-8487694857-99fgs\" (UID: \"d975e831-7348-41b9-9622-f4a503674c38\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-99fgs" Mar 19 11:53:49.723915 master-0 kubenswrapper[3958]: I0319 11:53:49.720742 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2mdn\" (UniqueName: 
\"kubernetes.io/projected/944eac68-e72b-4aed-b5dc-d7d9703178a3-kube-api-access-m2mdn\") pod \"csi-snapshot-controller-64854d9cff-6m654\" (UID: \"944eac68-e72b-4aed-b5dc-d7d9703178a3\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" Mar 19 11:53:49.751410 master-0 kubenswrapper[3958]: I0319 11:53:49.751352 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2mdn\" (UniqueName: \"kubernetes.io/projected/944eac68-e72b-4aed-b5dc-d7d9703178a3-kube-api-access-m2mdn\") pod \"csi-snapshot-controller-64854d9cff-6m654\" (UID: \"944eac68-e72b-4aed-b5dc-d7d9703178a3\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" Mar 19 11:53:49.752165 master-0 kubenswrapper[3958]: I0319 11:53:49.752071 3958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" podStartSLOduration=115.355718644 podStartE2EDuration="2m4.752048758s" podCreationTimestamp="2026-03-19 11:51:45 +0000 UTC" firstStartedPulling="2026-03-19 11:53:38.902648564 +0000 UTC m=+149.576369756" lastFinishedPulling="2026-03-19 11:53:48.298978688 +0000 UTC m=+158.972699870" observedRunningTime="2026-03-19 11:53:49.75050161 +0000 UTC m=+160.424222792" watchObservedRunningTime="2026-03-19 11:53:49.752048758 +0000 UTC m=+160.425769960" Mar 19 11:53:49.830010 master-0 kubenswrapper[3958]: I0319 11:53:49.829278 3958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86r6z\" (UniqueName: \"kubernetes.io/projected/d975e831-7348-41b9-9622-f4a503674c38-kube-api-access-86r6z\") pod \"migrator-8487694857-99fgs\" (UID: \"d975e831-7348-41b9-9622-f4a503674c38\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-99fgs" Mar 19 11:53:49.851252 master-0 kubenswrapper[3958]: I0319 11:53:49.851205 3958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86r6z\" (UniqueName: \"kubernetes.io/projected/d975e831-7348-41b9-9622-f4a503674c38-kube-api-access-86r6z\") pod \"migrator-8487694857-99fgs\" (UID: \"d975e831-7348-41b9-9622-f4a503674c38\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-99fgs" Mar 19 11:53:50.240868 master-0 kubenswrapper[3958]: I0319 11:53:50.240343 3958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" Mar 19 11:53:50.242562 master-0 systemd[1]: Stopping Kubernetes Kubelet... Mar 19 11:53:50.274454 master-0 systemd[1]: kubelet.service: Deactivated successfully. Mar 19 11:53:50.274685 master-0 systemd[1]: Stopped Kubernetes Kubelet. Mar 19 11:53:50.276169 master-0 systemd[1]: kubelet.service: Consumed 9.711s CPU time. Mar 19 11:53:50.287804 master-0 systemd[1]: Starting Kubernetes Kubelet... Mar 19 11:53:50.442616 master-0 kubenswrapper[7454]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 11:53:50.442616 master-0 kubenswrapper[7454]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. 
Mar 19 11:53:50.442616 master-0 kubenswrapper[7454]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 11:53:50.443237 master-0 kubenswrapper[7454]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 11:53:50.443237 master-0 kubenswrapper[7454]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 19 11:53:50.443237 master-0 kubenswrapper[7454]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 11:53:50.443237 master-0 kubenswrapper[7454]: I0319 11:53:50.442744 7454 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 19 11:53:50.447846 master-0 kubenswrapper[7454]: W0319 11:53:50.447816 7454 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 19 11:53:50.447846 master-0 kubenswrapper[7454]: W0319 11:53:50.447836 7454 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 19 11:53:50.447846 master-0 kubenswrapper[7454]: W0319 11:53:50.447842 7454 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 19 11:53:50.447846 master-0 kubenswrapper[7454]: W0319 11:53:50.447847 7454 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 19 11:53:50.447846 master-0 kubenswrapper[7454]: W0319 11:53:50.447852 7454 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 19 11:53:50.448089 master-0 kubenswrapper[7454]: W0319 11:53:50.447857 7454 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 19 11:53:50.448089 master-0 kubenswrapper[7454]: W0319 11:53:50.447861 7454 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 19 11:53:50.448089 master-0 kubenswrapper[7454]: W0319 11:53:50.447866 7454 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 19 11:53:50.448089 master-0 kubenswrapper[7454]: W0319 11:53:50.447871 7454 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 19 11:53:50.448089 master-0 kubenswrapper[7454]: W0319 11:53:50.447875 7454 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 19 11:53:50.448089 master-0 kubenswrapper[7454]: W0319 11:53:50.447879 7454 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 19 11:53:50.448089 master-0 kubenswrapper[7454]: W0319 11:53:50.447884 7454 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 19 11:53:50.448089 master-0 kubenswrapper[7454]: W0319 11:53:50.447888 7454 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 19 11:53:50.448089 master-0 kubenswrapper[7454]: W0319 11:53:50.447892 7454 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 19 11:53:50.448089 master-0 kubenswrapper[7454]: W0319 11:53:50.447897 7454 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 19 11:53:50.448089 master-0 kubenswrapper[7454]: W0319 11:53:50.447901 7454 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 19 11:53:50.448089 master-0 kubenswrapper[7454]: W0319 11:53:50.447906 7454 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 19 11:53:50.448089 master-0 kubenswrapper[7454]: W0319 11:53:50.447910 7454 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 19 11:53:50.448089 master-0 kubenswrapper[7454]: W0319 11:53:50.447914 7454 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 19 11:53:50.448089 master-0 kubenswrapper[7454]: W0319 11:53:50.447919 7454 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 19 11:53:50.448089 master-0 kubenswrapper[7454]: W0319 11:53:50.447932 7454 feature_gate.go:330] unrecognized feature gate: Example
Mar 19 11:53:50.448089 master-0 kubenswrapper[7454]: W0319 11:53:50.447937 7454 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 19 11:53:50.448089 master-0 kubenswrapper[7454]: W0319 11:53:50.447941 7454 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 19 11:53:50.448089 master-0 kubenswrapper[7454]: W0319 11:53:50.447946 7454 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 19 11:53:50.448089 master-0 kubenswrapper[7454]: W0319 11:53:50.447951 7454 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 19 11:53:50.448767 master-0 kubenswrapper[7454]: W0319 11:53:50.447955 7454 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 19 11:53:50.448767 master-0 kubenswrapper[7454]: W0319 11:53:50.447960 7454 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 19 11:53:50.448767 master-0 kubenswrapper[7454]: W0319 11:53:50.447965 7454 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 19 11:53:50.448767 master-0 kubenswrapper[7454]: W0319 11:53:50.447969 7454 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 19 11:53:50.448767 master-0 kubenswrapper[7454]: W0319 11:53:50.447974 7454 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 19 11:53:50.448767 master-0 kubenswrapper[7454]: W0319 11:53:50.447978 7454 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 19 11:53:50.448767 master-0 kubenswrapper[7454]: W0319 11:53:50.447983 7454 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 19 11:53:50.448767 master-0 kubenswrapper[7454]: W0319 11:53:50.447989 7454 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 19 11:53:50.448767 master-0 kubenswrapper[7454]: W0319 11:53:50.447993 7454 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 19 11:53:50.448767 master-0 kubenswrapper[7454]: W0319 11:53:50.447998 7454 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 19 11:53:50.448767 master-0 kubenswrapper[7454]: W0319 11:53:50.448002 7454 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 19 11:53:50.448767 master-0 kubenswrapper[7454]: W0319 11:53:50.448008 7454 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 19 11:53:50.448767 master-0 kubenswrapper[7454]: W0319 11:53:50.448013 7454 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 19 11:53:50.448767 master-0 kubenswrapper[7454]: W0319 11:53:50.448022 7454 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 19 11:53:50.448767 master-0 kubenswrapper[7454]: W0319 11:53:50.448028 7454 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 19 11:53:50.448767 master-0 kubenswrapper[7454]: W0319 11:53:50.448033 7454 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 19 11:53:50.448767 master-0 kubenswrapper[7454]: W0319 11:53:50.448038 7454 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 19 11:53:50.448767 master-0 kubenswrapper[7454]: W0319 11:53:50.448043 7454 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 19 11:53:50.448767 master-0 kubenswrapper[7454]: W0319 11:53:50.448049 7454 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 19 11:53:50.448767 master-0 kubenswrapper[7454]: W0319 11:53:50.448054 7454 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 19 11:53:50.449468 master-0 kubenswrapper[7454]: W0319 11:53:50.448058 7454 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 19 11:53:50.449468 master-0 kubenswrapper[7454]: W0319 11:53:50.448063 7454 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 19 11:53:50.449468 master-0 kubenswrapper[7454]: W0319 11:53:50.448067 7454 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 19 11:53:50.449468 master-0 kubenswrapper[7454]: W0319 11:53:50.448072 7454 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 19 11:53:50.449468 master-0 kubenswrapper[7454]: W0319 11:53:50.448076 7454 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 19 11:53:50.449468 master-0 kubenswrapper[7454]: W0319 11:53:50.448083 7454 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 19 11:53:50.449468 master-0 kubenswrapper[7454]: W0319 11:53:50.448088 7454 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 19 11:53:50.449468 master-0 kubenswrapper[7454]: W0319 11:53:50.448093 7454 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 19 11:53:50.449468 master-0 kubenswrapper[7454]: W0319 11:53:50.448099 7454 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 19 11:53:50.449468 master-0 kubenswrapper[7454]: W0319 11:53:50.448104 7454 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 19 11:53:50.449468 master-0 kubenswrapper[7454]: W0319 11:53:50.448109 7454 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 19 11:53:50.449468 master-0 kubenswrapper[7454]: W0319 11:53:50.448117 7454 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
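The four Flag deprecation warnings at the top of this boot all point at the same remedy: move the values into the file named by --config (here /etc/kubernetes/kubelet.conf, per the flag dump below). As a minimal, hypothetical sketch only, not this node's actual kubelet.conf, the values visible in the flag dump would translate into KubeletConfiguration fields roughly as follows. --pod-infra-container-image has no config-file equivalent; per its warning, the sandbox image comes from the CRI runtime's own configuration (for CRI-O, typically its pause_image setting).

# Hypothetical sketch of the --config file. Field names are real
# KubeletConfiguration (kubelet.config.k8s.io/v1beta1) fields; the
# values are copied from the FLAG dump printed later in this log.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec  # replaces --volume-plugin-dir
systemReserved:                                               # replaces --system-reserved
  cpu: 500m
  ephemeral-storage: 1Gi
  memory: 1Gi
registerWithTaints:                                           # replaces --register-with-taints
  - key: node-role.kubernetes.io/master
    effect: NoSchedule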
Mar 19 11:53:50.449468 master-0 kubenswrapper[7454]: W0319 11:53:50.448122 7454 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 19 11:53:50.449468 master-0 kubenswrapper[7454]: W0319 11:53:50.448127 7454 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 19 11:53:50.449468 master-0 kubenswrapper[7454]: W0319 11:53:50.448132 7454 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 19 11:53:50.449468 master-0 kubenswrapper[7454]: W0319 11:53:50.448136 7454 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 19 11:53:50.449468 master-0 kubenswrapper[7454]: W0319 11:53:50.448141 7454 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 19 11:53:50.449468 master-0 kubenswrapper[7454]: W0319 11:53:50.448147 7454 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 19 11:53:50.450237 master-0 kubenswrapper[7454]: W0319 11:53:50.448153 7454 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 19 11:53:50.450237 master-0 kubenswrapper[7454]: W0319 11:53:50.448158 7454 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 19 11:53:50.450237 master-0 kubenswrapper[7454]: W0319 11:53:50.448164 7454 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 19 11:53:50.450237 master-0 kubenswrapper[7454]: W0319 11:53:50.448169 7454 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 19 11:53:50.450237 master-0 kubenswrapper[7454]: W0319 11:53:50.448174 7454 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 19 11:53:50.450237 master-0 kubenswrapper[7454]: W0319 11:53:50.448180 7454 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 19 11:53:50.450237 master-0 kubenswrapper[7454]: W0319 11:53:50.448185 7454 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 19 11:53:50.450237 master-0 kubenswrapper[7454]: W0319 11:53:50.448195 7454 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 19 11:53:50.450237 master-0 kubenswrapper[7454]: W0319 11:53:50.448200 7454 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 19 11:53:50.450237 master-0 kubenswrapper[7454]: I0319 11:53:50.448320 7454 flags.go:64] FLAG: --address="0.0.0.0"
Mar 19 11:53:50.450237 master-0 kubenswrapper[7454]: I0319 11:53:50.448333 7454 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 19 11:53:50.450237 master-0 kubenswrapper[7454]: I0319 11:53:50.448343 7454 flags.go:64] FLAG: --anonymous-auth="true"
Mar 19 11:53:50.450237 master-0 kubenswrapper[7454]: I0319 11:53:50.448350 7454 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 19 11:53:50.450237 master-0 kubenswrapper[7454]: I0319 11:53:50.448357 7454 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 19 11:53:50.450237 master-0 kubenswrapper[7454]: I0319 11:53:50.448363 7454 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 19 11:53:50.450237 master-0 kubenswrapper[7454]: I0319 11:53:50.448371 7454 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 19 11:53:50.450237 master-0 kubenswrapper[7454]: I0319 11:53:50.448377 7454 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 19 11:53:50.450237 master-0 kubenswrapper[7454]: I0319 11:53:50.448383 7454 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 19 11:53:50.450237 master-0 kubenswrapper[7454]: I0319 11:53:50.448389 7454 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 19 11:53:50.450237 master-0 kubenswrapper[7454]: I0319 11:53:50.448395 7454 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 19 11:53:50.450237 master-0 kubenswrapper[7454]: I0319 11:53:50.448401 7454 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 19 11:53:50.450237 master-0 kubenswrapper[7454]: I0319 11:53:50.448406 7454 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 19 11:53:50.451434 master-0 kubenswrapper[7454]: I0319 11:53:50.448412 7454 flags.go:64] FLAG: --cgroup-root=""
Mar 19 11:53:50.451434 master-0 kubenswrapper[7454]: I0319 11:53:50.448416 7454 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 19 11:53:50.451434 master-0 kubenswrapper[7454]: I0319 11:53:50.448422 7454 flags.go:64] FLAG: --client-ca-file=""
Mar 19 11:53:50.451434 master-0 kubenswrapper[7454]: I0319 11:53:50.448427 7454 flags.go:64] FLAG: --cloud-config=""
Mar 19 11:53:50.451434 master-0 kubenswrapper[7454]: I0319 11:53:50.448432 7454 flags.go:64] FLAG: --cloud-provider=""
Mar 19 11:53:50.451434 master-0 kubenswrapper[7454]: I0319 11:53:50.448438 7454 flags.go:64] FLAG: --cluster-dns="[]"
Mar 19 11:53:50.451434 master-0 kubenswrapper[7454]: I0319 11:53:50.448444 7454 flags.go:64] FLAG: --cluster-domain=""
Mar 19 11:53:50.451434 master-0 kubenswrapper[7454]: I0319 11:53:50.448448 7454 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 19 11:53:50.451434 master-0 kubenswrapper[7454]: I0319 11:53:50.448454 7454 flags.go:64] FLAG: --config-dir=""
Mar 19 11:53:50.451434 master-0 kubenswrapper[7454]: I0319 11:53:50.448460 7454 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 19 11:53:50.451434 master-0 kubenswrapper[7454]: I0319 11:53:50.448466 7454 flags.go:64] FLAG: --container-log-max-files="5"
Mar 19 11:53:50.451434 master-0 kubenswrapper[7454]: I0319 11:53:50.448473 7454 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 19 11:53:50.451434 master-0 kubenswrapper[7454]: I0319 11:53:50.448478 7454 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 19 11:53:50.451434 master-0 kubenswrapper[7454]: I0319 11:53:50.448484 7454 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 19 11:53:50.451434 master-0 kubenswrapper[7454]: I0319 11:53:50.448489 7454 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 19 11:53:50.451434 master-0 kubenswrapper[7454]: I0319 11:53:50.448495 7454 flags.go:64] FLAG: --contention-profiling="false"
Mar 19 11:53:50.451434 master-0 kubenswrapper[7454]: I0319 11:53:50.448500 7454 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 19 11:53:50.451434 master-0 kubenswrapper[7454]: I0319 11:53:50.448509 7454 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 19 11:53:50.451434 master-0 kubenswrapper[7454]: I0319 11:53:50.448515 7454 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 19 11:53:50.451434 master-0 kubenswrapper[7454]: I0319 11:53:50.448522 7454 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 19 11:53:50.451434 master-0 kubenswrapper[7454]: I0319 11:53:50.448529 7454 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 19 11:53:50.451434 master-0 kubenswrapper[7454]: I0319 11:53:50.448534 7454 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 19 11:53:50.451434 master-0 kubenswrapper[7454]: I0319 11:53:50.448540 7454 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 19 11:53:50.451434 master-0 kubenswrapper[7454]: I0319 11:53:50.448545 7454 flags.go:64] FLAG: --enable-load-reader="false"
Mar 19 11:53:50.451434 master-0 kubenswrapper[7454]: I0319 11:53:50.448551 7454 flags.go:64] FLAG: --enable-server="true"
Mar 19 11:53:50.452988 master-0 kubenswrapper[7454]: I0319 11:53:50.448556 7454 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 19 11:53:50.452988 master-0 kubenswrapper[7454]: I0319 11:53:50.448563 7454 flags.go:64] FLAG: --event-burst="100"
Mar 19 11:53:50.452988 master-0 kubenswrapper[7454]: I0319 11:53:50.448569 7454 flags.go:64] FLAG: --event-qps="50"
Mar 19 11:53:50.452988 master-0 kubenswrapper[7454]: I0319 11:53:50.448574 7454 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 19 11:53:50.452988 master-0 kubenswrapper[7454]: I0319 11:53:50.448580 7454 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 19 11:53:50.452988 master-0 kubenswrapper[7454]: I0319 11:53:50.448585 7454 flags.go:64] FLAG: --eviction-hard=""
Mar 19 11:53:50.452988 master-0 kubenswrapper[7454]: I0319 11:53:50.448592 7454 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 19 11:53:50.452988 master-0 kubenswrapper[7454]: I0319 11:53:50.448597 7454 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 19 11:53:50.452988 master-0 kubenswrapper[7454]: I0319 11:53:50.448603 7454 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 19 11:53:50.452988 master-0 kubenswrapper[7454]: I0319 11:53:50.448609 7454 flags.go:64] FLAG: --eviction-soft=""
Mar 19 11:53:50.452988 master-0 kubenswrapper[7454]: I0319 11:53:50.448614 7454 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 19 11:53:50.452988 master-0 kubenswrapper[7454]: I0319 11:53:50.448619 7454 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 19 11:53:50.452988 master-0 kubenswrapper[7454]: I0319 11:53:50.448625 7454 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 19 11:53:50.452988 master-0 kubenswrapper[7454]: I0319 11:53:50.448630 7454 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 19 11:53:50.452988 master-0 kubenswrapper[7454]: I0319 11:53:50.448635 7454 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 19 11:53:50.452988 master-0 kubenswrapper[7454]: I0319 11:53:50.448641 7454 flags.go:64] FLAG: --fail-swap-on="true"
Mar 19 11:53:50.452988 master-0 kubenswrapper[7454]: I0319 11:53:50.448646 7454 flags.go:64] FLAG: --feature-gates=""
Mar 19 11:53:50.452988 master-0 kubenswrapper[7454]: I0319 11:53:50.448653 7454 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 19 11:53:50.452988 master-0 kubenswrapper[7454]: I0319 11:53:50.448659 7454 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 19 11:53:50.452988 master-0 kubenswrapper[7454]: I0319 11:53:50.448664 7454 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 19 11:53:50.452988 master-0 kubenswrapper[7454]: I0319 11:53:50.448670 7454 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 19 11:53:50.452988 master-0 kubenswrapper[7454]: I0319 11:53:50.448675 7454 flags.go:64] FLAG: --healthz-port="10248"
Mar 19 11:53:50.452988 master-0 kubenswrapper[7454]: I0319 11:53:50.448682 7454 flags.go:64] FLAG: --help="false"
Mar 19 11:53:50.452988 master-0 kubenswrapper[7454]: I0319 11:53:50.448688 7454 flags.go:64] FLAG: --hostname-override=""
Mar 19 11:53:50.452988 master-0 kubenswrapper[7454]: I0319 11:53:50.448696 7454 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 19 11:53:50.452988 master-0 kubenswrapper[7454]: I0319 11:53:50.448702 7454 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 19 11:53:50.454764 master-0 kubenswrapper[7454]: I0319 11:53:50.448708 7454 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 19 11:53:50.454764 master-0 kubenswrapper[7454]: I0319 11:53:50.448713 7454 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 19 11:53:50.454764 master-0 kubenswrapper[7454]: I0319 11:53:50.448719 7454 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 19 11:53:50.454764 master-0 kubenswrapper[7454]: I0319 11:53:50.448724 7454 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 19 11:53:50.454764 master-0 kubenswrapper[7454]: I0319 11:53:50.448731 7454 flags.go:64] FLAG: --image-service-endpoint=""
Mar 19 11:53:50.454764 master-0 kubenswrapper[7454]: I0319 11:53:50.448737 7454 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 19 11:53:50.454764 master-0 kubenswrapper[7454]: I0319 11:53:50.448743 7454 flags.go:64] FLAG: --kube-api-burst="100"
Mar 19 11:53:50.454764 master-0 kubenswrapper[7454]: I0319 11:53:50.448749 7454 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 19 11:53:50.454764 master-0 kubenswrapper[7454]: I0319 11:53:50.448755 7454 flags.go:64] FLAG: --kube-api-qps="50"
Mar 19 11:53:50.454764 master-0 kubenswrapper[7454]: I0319 11:53:50.448760 7454 flags.go:64] FLAG: --kube-reserved=""
Mar 19 11:53:50.454764 master-0 kubenswrapper[7454]: I0319 11:53:50.448766 7454 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 19 11:53:50.454764 master-0 kubenswrapper[7454]: I0319 11:53:50.448771 7454 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 19 11:53:50.454764 master-0 kubenswrapper[7454]: I0319 11:53:50.448777 7454 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 19 11:53:50.454764 master-0 kubenswrapper[7454]: I0319 11:53:50.448782 7454 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 19 11:53:50.454764 master-0 kubenswrapper[7454]: I0319 11:53:50.448788 7454 flags.go:64] FLAG: --lock-file=""
Mar 19 11:53:50.454764 master-0 kubenswrapper[7454]: I0319 11:53:50.448827 7454 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 19 11:53:50.454764 master-0 kubenswrapper[7454]: I0319 11:53:50.448835 7454 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 19 11:53:50.454764 master-0 kubenswrapper[7454]: I0319 11:53:50.448841 7454 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 19 11:53:50.454764 master-0 kubenswrapper[7454]: I0319 11:53:50.448850 7454 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 19 11:53:50.454764 master-0 kubenswrapper[7454]: I0319 11:53:50.448856 7454 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 19 11:53:50.454764 master-0 kubenswrapper[7454]: I0319 11:53:50.448861 7454 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 19 11:53:50.454764 master-0 kubenswrapper[7454]: I0319 11:53:50.448866 7454 flags.go:64] FLAG: --logging-format="text"
Mar 19 11:53:50.454764 master-0 kubenswrapper[7454]: I0319 11:53:50.448872 7454 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 19 11:53:50.454764 master-0 kubenswrapper[7454]: I0319 11:53:50.448878 7454 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 19 11:53:50.454764 master-0 kubenswrapper[7454]: I0319 11:53:50.448883 7454 flags.go:64] FLAG: --manifest-url=""
Mar 19 11:53:50.455420 master-0 kubenswrapper[7454]: I0319 11:53:50.448888 7454 flags.go:64] FLAG: --manifest-url-header=""
Mar 19 11:53:50.455420 master-0 kubenswrapper[7454]: I0319 11:53:50.448896 7454 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 19 11:53:50.455420 master-0 kubenswrapper[7454]: I0319 11:53:50.448901 7454 flags.go:64] FLAG: --max-open-files="1000000"
Mar 19 11:53:50.455420 master-0 kubenswrapper[7454]: I0319 11:53:50.448909 7454 flags.go:64] FLAG: --max-pods="110"
Mar 19 11:53:50.455420 master-0 kubenswrapper[7454]: I0319 11:53:50.448915 7454 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 19 11:53:50.455420 master-0 kubenswrapper[7454]: I0319 11:53:50.448924 7454 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 19 11:53:50.455420 master-0 kubenswrapper[7454]: I0319 11:53:50.448930 7454 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 19 11:53:50.455420 master-0 kubenswrapper[7454]: I0319 11:53:50.448935 7454 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 19 11:53:50.455420 master-0 kubenswrapper[7454]: I0319 11:53:50.448941 7454 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 19 11:53:50.455420 master-0 kubenswrapper[7454]: I0319 11:53:50.448947 7454 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 19 11:53:50.455420 master-0 kubenswrapper[7454]: I0319 11:53:50.448953 7454 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 19 11:53:50.455420 master-0 kubenswrapper[7454]: I0319 11:53:50.448966 7454 flags.go:64] FLAG: --node-status-max-images="50"
Mar 19 11:53:50.455420 master-0 kubenswrapper[7454]: I0319 11:53:50.448973 7454 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 19 11:53:50.455420 master-0 kubenswrapper[7454]: I0319 11:53:50.448978 7454 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 19 11:53:50.455420 master-0 kubenswrapper[7454]: I0319 11:53:50.448984 7454 flags.go:64] FLAG: --pod-cidr=""
Mar 19 11:53:50.455420 master-0 kubenswrapper[7454]: I0319 11:53:50.448997 7454 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422"
Mar 19 11:53:50.455420 master-0 kubenswrapper[7454]: I0319 11:53:50.449006 7454 flags.go:64] FLAG: --pod-manifest-path=""
Mar 19 11:53:50.455420 master-0 kubenswrapper[7454]: I0319 11:53:50.449011 7454 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 19 11:53:50.455420 master-0 kubenswrapper[7454]: I0319 11:53:50.449017 7454 flags.go:64] FLAG: --pods-per-core="0"
Mar 19 11:53:50.455420 master-0 kubenswrapper[7454]: I0319 11:53:50.449023 7454 flags.go:64] FLAG: --port="10250"
Mar 19 11:53:50.455420 master-0 kubenswrapper[7454]: I0319 11:53:50.449028 7454 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 19 11:53:50.455420 master-0 kubenswrapper[7454]: I0319 11:53:50.449034 7454 flags.go:64] FLAG: --provider-id=""
Mar 19 11:53:50.455420 master-0 kubenswrapper[7454]: I0319 11:53:50.449039 7454 flags.go:64] FLAG: --qos-reserved=""
Mar 19 11:53:50.455420 master-0 kubenswrapper[7454]: I0319 11:53:50.449045 7454 flags.go:64] FLAG: --read-only-port="10255"
Mar 19 11:53:50.456003 master-0 kubenswrapper[7454]: I0319 11:53:50.449051 7454 flags.go:64] FLAG: --register-node="true"
Mar 19 11:53:50.456003 master-0 kubenswrapper[7454]: I0319 11:53:50.449056 7454 flags.go:64] FLAG: --register-schedulable="true"
Mar 19 11:53:50.456003 master-0 kubenswrapper[7454]: I0319 11:53:50.449061 7454 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 19 11:53:50.456003 master-0 kubenswrapper[7454]: I0319 11:53:50.449078 7454 flags.go:64] FLAG: --registry-burst="10"
Mar 19 11:53:50.456003 master-0 kubenswrapper[7454]: I0319 11:53:50.449083 7454 flags.go:64] FLAG: --registry-qps="5"
Mar 19 11:53:50.456003 master-0 kubenswrapper[7454]: I0319 11:53:50.449089 7454 flags.go:64] FLAG: --reserved-cpus=""
Mar 19 11:53:50.456003 master-0 kubenswrapper[7454]: I0319 11:53:50.449094 7454 flags.go:64] FLAG: --reserved-memory=""
Mar 19 11:53:50.456003 master-0 kubenswrapper[7454]: I0319 11:53:50.449101 7454 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 19 11:53:50.456003 master-0 kubenswrapper[7454]: I0319 11:53:50.449107 7454 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 19 11:53:50.456003 master-0 kubenswrapper[7454]: I0319 11:53:50.449113 7454 flags.go:64] FLAG: --rotate-certificates="false"
Mar 19 11:53:50.456003 master-0 kubenswrapper[7454]: I0319 11:53:50.449119 7454 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 19 11:53:50.456003 master-0 kubenswrapper[7454]: I0319 11:53:50.449124 7454 flags.go:64] FLAG: --runonce="false"
Mar 19 11:53:50.456003 master-0 kubenswrapper[7454]: I0319 11:53:50.449130 7454 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 19 11:53:50.456003 master-0 kubenswrapper[7454]: I0319 11:53:50.449139 7454 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 19 11:53:50.456003 master-0 kubenswrapper[7454]: I0319 11:53:50.449145 7454 flags.go:64] FLAG: --seccomp-default="false"
Mar 19 11:53:50.456003 master-0 kubenswrapper[7454]: I0319 11:53:50.449151 7454 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 19 11:53:50.456003 master-0 kubenswrapper[7454]: I0319 11:53:50.449156 7454 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 19 11:53:50.456003 master-0 kubenswrapper[7454]: I0319 11:53:50.449162 7454 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 19 11:53:50.456003 master-0 kubenswrapper[7454]: I0319 11:53:50.449168 7454 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 19 11:53:50.456003 master-0 kubenswrapper[7454]: I0319 11:53:50.449174 7454 flags.go:64] FLAG: --storage-driver-password="root"
Mar 19 11:53:50.456003 master-0 kubenswrapper[7454]: I0319 11:53:50.449180 7454 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 19 11:53:50.456003 master-0 kubenswrapper[7454]: I0319 11:53:50.449185 7454 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 19 11:53:50.456003 master-0 kubenswrapper[7454]: I0319 11:53:50.449191 7454 flags.go:64] FLAG: --storage-driver-user="root"
Mar 19 11:53:50.456003 master-0 kubenswrapper[7454]: I0319 11:53:50.449196 7454 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 19 11:53:50.456003 master-0 kubenswrapper[7454]: I0319 11:53:50.449202 7454 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 19 11:53:50.456664 master-0 kubenswrapper[7454]: I0319 11:53:50.449208 7454 flags.go:64] FLAG: --system-cgroups=""
Mar 19 11:53:50.456664 master-0 kubenswrapper[7454]: I0319 11:53:50.449213 7454 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 19 11:53:50.456664 master-0 kubenswrapper[7454]: I0319 11:53:50.449225 7454 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 19 11:53:50.456664 master-0 kubenswrapper[7454]: I0319 11:53:50.449230 7454 flags.go:64] FLAG: --tls-cert-file=""
Mar 19 11:53:50.456664 master-0 kubenswrapper[7454]: I0319 11:53:50.449236 7454 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 19 11:53:50.456664 master-0 kubenswrapper[7454]: I0319 11:53:50.449242 7454 flags.go:64] FLAG: --tls-min-version=""
Mar 19 11:53:50.456664 master-0 kubenswrapper[7454]: I0319 11:53:50.449248 7454 flags.go:64] FLAG: --tls-private-key-file=""
Mar 19 11:53:50.456664 master-0 kubenswrapper[7454]: I0319 11:53:50.449253 7454 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 19 11:53:50.456664 master-0 kubenswrapper[7454]: I0319 11:53:50.449259 7454 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 19 11:53:50.456664 master-0 kubenswrapper[7454]: I0319 11:53:50.449265 7454 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 19 11:53:50.456664 master-0 kubenswrapper[7454]: I0319 11:53:50.449270 7454 flags.go:64] FLAG: --v="2"
Mar 19 11:53:50.456664 master-0 kubenswrapper[7454]: I0319 11:53:50.449278 7454 flags.go:64] FLAG: --version="false"
Mar 19 11:53:50.456664 master-0 kubenswrapper[7454]: I0319 11:53:50.449286 7454 flags.go:64] FLAG: --vmodule=""
Mar 19 11:53:50.456664 master-0 kubenswrapper[7454]: I0319 11:53:50.449293 7454 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 19 11:53:50.456664 master-0 kubenswrapper[7454]: I0319 11:53:50.449299 7454 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 19 11:53:50.456664 master-0 kubenswrapper[7454]: W0319 11:53:50.449438 7454 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 19 11:53:50.456664 master-0 kubenswrapper[7454]: W0319 11:53:50.449446 7454 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 19 11:53:50.456664 master-0 kubenswrapper[7454]: W0319 11:53:50.449451 7454 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 19 11:53:50.456664 master-0 kubenswrapper[7454]: W0319 11:53:50.449457 7454 feature_gate.go:330] unrecognized feature gate: Example
Mar 19 11:53:50.456664 master-0 kubenswrapper[7454]: W0319 11:53:50.449462 7454 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 19 11:53:50.456664 master-0 kubenswrapper[7454]: W0319 11:53:50.449469 7454 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 19 11:53:50.456664 master-0 kubenswrapper[7454]: W0319 11:53:50.449473 7454 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 19 11:53:50.456664 master-0 kubenswrapper[7454]: W0319 11:53:50.449478 7454 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 19 11:53:50.458234 master-0 kubenswrapper[7454]: W0319 11:53:50.449482 7454 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 19 11:53:50.458234 master-0 kubenswrapper[7454]: W0319 11:53:50.449488 7454 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
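The feature_gate.go:330 lines here and above are warn-only: gates the kubelet does not recognize (these appear to be OpenShift cluster-level gates such as MachineConfigNodes or IngressControllerLBSubnetsAWS) are logged and skipped, while recognized gates are applied, as the feature_gate.go:351/:353 lines and the "feature gates: {map[...]}" summaries further down show. The same gate list is evidently parsed more than once during startup, which is why the block repeats. For gates the kubelet does recognize, the config-file form would look roughly like this (illustrative sketch, not this node's file):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  # Values mirror the feature_gate.go:386 summary map printed below.
  CloudDualStackNodeIPs: true   # GA gate; setting it explicitly triggers the :353 warning
  KMSv1: true                   # deprecated gate; triggers the :351 warning
  NodeSwap: false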
Mar 19 11:53:50.458234 master-0 kubenswrapper[7454]: W0319 11:53:50.449494 7454 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 19 11:53:50.458234 master-0 kubenswrapper[7454]: W0319 11:53:50.449499 7454 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 19 11:53:50.458234 master-0 kubenswrapper[7454]: W0319 11:53:50.449504 7454 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 19 11:53:50.458234 master-0 kubenswrapper[7454]: W0319 11:53:50.449509 7454 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 19 11:53:50.458234 master-0 kubenswrapper[7454]: W0319 11:53:50.449514 7454 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 19 11:53:50.458234 master-0 kubenswrapper[7454]: W0319 11:53:50.449519 7454 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 19 11:53:50.458234 master-0 kubenswrapper[7454]: W0319 11:53:50.449523 7454 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 19 11:53:50.458234 master-0 kubenswrapper[7454]: W0319 11:53:50.449528 7454 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 19 11:53:50.458234 master-0 kubenswrapper[7454]: W0319 11:53:50.449532 7454 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 19 11:53:50.458234 master-0 kubenswrapper[7454]: W0319 11:53:50.449537 7454 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 19 11:53:50.458234 master-0 kubenswrapper[7454]: W0319 11:53:50.449541 7454 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 19 11:53:50.458234 master-0 kubenswrapper[7454]: W0319 11:53:50.449546 7454 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 19 11:53:50.458234 master-0 kubenswrapper[7454]: W0319 11:53:50.449550 7454 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 19 11:53:50.458234 master-0 kubenswrapper[7454]: W0319 11:53:50.449556 7454 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 19 11:53:50.458234 master-0 kubenswrapper[7454]: W0319 11:53:50.449561 7454 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 19 11:53:50.458234 master-0 kubenswrapper[7454]: W0319 11:53:50.449565 7454 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 19 11:53:50.458234 master-0 kubenswrapper[7454]: W0319 11:53:50.449570 7454 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 19 11:53:50.458810 master-0 kubenswrapper[7454]: W0319 11:53:50.449574 7454 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 19 11:53:50.458810 master-0 kubenswrapper[7454]: W0319 11:53:50.449578 7454 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 19 11:53:50.458810 master-0 kubenswrapper[7454]: W0319 11:53:50.449590 7454 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 19 11:53:50.458810 master-0 kubenswrapper[7454]: W0319 11:53:50.449594 7454 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 19 11:53:50.458810 master-0 kubenswrapper[7454]: W0319 11:53:50.449599 7454 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 19 11:53:50.458810 master-0 kubenswrapper[7454]: W0319 11:53:50.449603 7454 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 19 11:53:50.458810 master-0 kubenswrapper[7454]: W0319 11:53:50.449608 7454 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 19 11:53:50.458810 master-0 kubenswrapper[7454]: W0319 11:53:50.449612 7454 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 19 11:53:50.458810 master-0 kubenswrapper[7454]: W0319 11:53:50.449616 7454 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 19 11:53:50.458810 master-0 kubenswrapper[7454]: W0319 11:53:50.449621 7454 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 19 11:53:50.458810 master-0 kubenswrapper[7454]: W0319 11:53:50.449637 7454 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 19 11:53:50.458810 master-0 kubenswrapper[7454]: W0319 11:53:50.449641 7454 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 19 11:53:50.458810 master-0 kubenswrapper[7454]: W0319 11:53:50.449646 7454 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 19 11:53:50.458810 master-0 kubenswrapper[7454]: W0319 11:53:50.449652 7454 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 19 11:53:50.458810 master-0 kubenswrapper[7454]: W0319 11:53:50.449658 7454 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 19 11:53:50.458810 master-0 kubenswrapper[7454]: W0319 11:53:50.449664 7454 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 19 11:53:50.458810 master-0 kubenswrapper[7454]: W0319 11:53:50.449669 7454 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 19 11:53:50.458810 master-0 kubenswrapper[7454]: W0319 11:53:50.449674 7454 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 19 11:53:50.458810 master-0 kubenswrapper[7454]: W0319 11:53:50.449679 7454 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 19 11:53:50.458810 master-0 kubenswrapper[7454]: W0319 11:53:50.449684 7454 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 19 11:53:50.459527 master-0 kubenswrapper[7454]: W0319 11:53:50.449689 7454 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 19 11:53:50.459527 master-0 kubenswrapper[7454]: W0319 11:53:50.449694 7454 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 19 11:53:50.459527 master-0 kubenswrapper[7454]: W0319 11:53:50.449698 7454 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 19 11:53:50.459527 master-0 kubenswrapper[7454]: W0319 11:53:50.449703 7454 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 19 11:53:50.459527 master-0 kubenswrapper[7454]: W0319 11:53:50.449708 7454 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 19 11:53:50.459527 master-0 kubenswrapper[7454]: W0319 11:53:50.449712 7454 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 19 11:53:50.459527 master-0 kubenswrapper[7454]: W0319 11:53:50.449717 7454 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 19 11:53:50.459527 master-0 kubenswrapper[7454]: W0319 11:53:50.449722 7454 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 19 11:53:50.459527 master-0 kubenswrapper[7454]: W0319 11:53:50.449727 7454 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 19 11:53:50.459527 master-0 kubenswrapper[7454]: W0319 11:53:50.449731 7454 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 19 11:53:50.459527 master-0 kubenswrapper[7454]: W0319 11:53:50.449736 7454 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 19 11:53:50.459527 master-0 kubenswrapper[7454]: W0319 11:53:50.449740 7454 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 19 11:53:50.459527 master-0 kubenswrapper[7454]: W0319 11:53:50.449746 7454 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 19 11:53:50.459527 master-0 kubenswrapper[7454]: W0319 11:53:50.449751 7454 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 19 11:53:50.459527 master-0 kubenswrapper[7454]: W0319 11:53:50.449759 7454 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 19 11:53:50.459527 master-0 kubenswrapper[7454]: W0319 11:53:50.449765 7454 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 19 11:53:50.459527 master-0 kubenswrapper[7454]: W0319 11:53:50.449770 7454 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 19 11:53:50.459527 master-0 kubenswrapper[7454]: W0319 11:53:50.449775 7454 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 19 11:53:50.459527 master-0 kubenswrapper[7454]: W0319 11:53:50.449780 7454 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 19 11:53:50.461122 master-0 kubenswrapper[7454]: W0319 11:53:50.449786 7454 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 19 11:53:50.461122 master-0 kubenswrapper[7454]: W0319 11:53:50.449811 7454 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 19 11:53:50.461122 master-0 kubenswrapper[7454]: W0319 11:53:50.449819 7454 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 19 11:53:50.461122 master-0 kubenswrapper[7454]: W0319 11:53:50.449826 7454 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 19 11:53:50.461122 master-0 kubenswrapper[7454]: W0319 11:53:50.449832 7454 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 19 11:53:50.461122 master-0 kubenswrapper[7454]: W0319 11:53:50.449837 7454 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 19 11:53:50.461122 master-0 kubenswrapper[7454]: I0319 11:53:50.449846 7454 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 19 11:53:50.462356 master-0 kubenswrapper[7454]: I0319 11:53:50.462286 7454 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 19 11:53:50.462356 master-0 kubenswrapper[7454]: I0319 11:53:50.462322 7454 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 19 11:53:50.462459 master-0 kubenswrapper[7454]: W0319 11:53:50.462412 7454 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 19 11:53:50.462459 master-0 kubenswrapper[7454]: W0319 11:53:50.462423 7454 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 19 11:53:50.462459 master-0 kubenswrapper[7454]: W0319 11:53:50.462428 7454 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 19 11:53:50.462459 master-0 kubenswrapper[7454]: W0319 11:53:50.462433 7454 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 19 11:53:50.462459 master-0 kubenswrapper[7454]: W0319 11:53:50.462438 7454 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 19 11:53:50.462459 master-0 kubenswrapper[7454]: W0319 11:53:50.462443 7454 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 19 11:53:50.462459 master-0 kubenswrapper[7454]: W0319 11:53:50.462448 7454 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 19 11:53:50.462459 master-0 kubenswrapper[7454]: W0319 11:53:50.462453 7454 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 19 11:53:50.462459 master-0 kubenswrapper[7454]: W0319 11:53:50.462458 7454 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 19 11:53:50.462459 master-0 kubenswrapper[7454]: W0319 11:53:50.462463 7454 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 19 11:53:50.462793 master-0 kubenswrapper[7454]: W0319 11:53:50.462468 7454 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 19 11:53:50.462793 master-0 kubenswrapper[7454]: W0319 11:53:50.462653 7454 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 19 11:53:50.462793 master-0 kubenswrapper[7454]: W0319 11:53:50.462661 7454 feature_gate.go:330] unrecognized feature gate: Example
Mar 19 11:53:50.462793 master-0 kubenswrapper[7454]: W0319 11:53:50.462667 7454 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 19 11:53:50.462793 master-0 kubenswrapper[7454]: W0319 11:53:50.462672 7454 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 19 11:53:50.462793 master-0 kubenswrapper[7454]: W0319 11:53:50.462677 7454 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 19 11:53:50.462793 master-0 kubenswrapper[7454]: W0319 11:53:50.462682 7454 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 19 11:53:50.462793 master-0 kubenswrapper[7454]: W0319 11:53:50.462689 7454 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 19 11:53:50.462793 master-0 kubenswrapper[7454]: W0319 11:53:50.462700 7454 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 19 11:53:50.462793 master-0 kubenswrapper[7454]: W0319 11:53:50.462705 7454 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 19 11:53:50.462793 master-0 kubenswrapper[7454]: W0319 11:53:50.462712 7454 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 19 11:53:50.462793 master-0 kubenswrapper[7454]: W0319 11:53:50.462718 7454 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 19 11:53:50.462793 master-0 kubenswrapper[7454]: W0319 11:53:50.462723 7454 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 19 11:53:50.462793 master-0 kubenswrapper[7454]: W0319 11:53:50.462727 7454 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 19 11:53:50.462793 master-0 kubenswrapper[7454]: W0319 11:53:50.462732 7454 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 19 11:53:50.462793 master-0 kubenswrapper[7454]: W0319 11:53:50.462737 7454 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 19 11:53:50.462793 master-0 kubenswrapper[7454]: W0319 11:53:50.462742 7454 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 19 11:53:50.462793 master-0 kubenswrapper[7454]: W0319 11:53:50.462747 7454 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 19 11:53:50.462793 master-0 kubenswrapper[7454]: W0319 11:53:50.462751 7454 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 19 11:53:50.463668 master-0 kubenswrapper[7454]: W0319 11:53:50.462756 7454 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 19 11:53:50.463668 master-0 kubenswrapper[7454]: W0319 11:53:50.462760 7454 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 19 11:53:50.463668 master-0 kubenswrapper[7454]: W0319 11:53:50.462765 7454 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 19 11:53:50.463668 master-0 kubenswrapper[7454]: W0319 11:53:50.462769 7454 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 19 11:53:50.463668 master-0 kubenswrapper[7454]: W0319 11:53:50.462774 7454 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 19 11:53:50.463668 master-0 kubenswrapper[7454]: W0319 11:53:50.462780 7454 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 19 11:53:50.463668 master-0 kubenswrapper[7454]: W0319 11:53:50.462785 7454 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 19 11:53:50.463668 master-0 kubenswrapper[7454]: W0319 11:53:50.462805 7454 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 19 11:53:50.463668 master-0 kubenswrapper[7454]: W0319 11:53:50.462811 7454 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 19 11:53:50.463668 master-0 kubenswrapper[7454]: W0319 11:53:50.462817 7454 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 19 11:53:50.463668 master-0 kubenswrapper[7454]: W0319 11:53:50.462822 7454 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 19 11:53:50.463668 master-0 kubenswrapper[7454]: W0319 11:53:50.462828 7454 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 19 11:53:50.463668 master-0 kubenswrapper[7454]: W0319 11:53:50.462834 7454 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
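One more migration is implied by the flag dump earlier in this boot: --minimum-container-ttl-duration (still set, at 6m0s) is deprecated in favor of the eviction settings, and every --eviction-* flag above is empty, so built-in defaults apply. In the config file these become the eviction maps shown below; the threshold values here are purely illustrative and are not taken from this log:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "100Mi"   # illustrative threshold, not this node's
  nodefs.available: "10%"     # illustrative threshold, not this node's
evictionSoft:
  memory.available: "200Mi"   # illustrative threshold, not this node's
evictionSoftGracePeriod:
  memory.available: 1m30s
evictionPressureTransitionPeriod: 5m0s   # matches the flag dump's value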
Mar 19 11:53:50.463668 master-0 kubenswrapper[7454]: W0319 11:53:50.462840 7454 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 19 11:53:50.463668 master-0 kubenswrapper[7454]: W0319 11:53:50.462845 7454 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 19 11:53:50.463668 master-0 kubenswrapper[7454]: W0319 11:53:50.462850 7454 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 19 11:53:50.463668 master-0 kubenswrapper[7454]: W0319 11:53:50.462855 7454 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 19 11:53:50.463668 master-0 kubenswrapper[7454]: W0319 11:53:50.462860 7454 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 19 11:53:50.463668 master-0 kubenswrapper[7454]: W0319 11:53:50.462865 7454 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 19 11:53:50.464311 master-0 kubenswrapper[7454]: W0319 11:53:50.462869 7454 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 19 11:53:50.464311 master-0 kubenswrapper[7454]: W0319 11:53:50.462874 7454 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 19 11:53:50.464311 master-0 kubenswrapper[7454]: W0319 11:53:50.462879 7454 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 19 11:53:50.464311 master-0 kubenswrapper[7454]: W0319 11:53:50.462884 7454 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 19 11:53:50.464311 master-0 kubenswrapper[7454]: W0319 11:53:50.462888 7454 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 19 11:53:50.464311 master-0 kubenswrapper[7454]: W0319 11:53:50.462892 7454 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 19 11:53:50.464311 master-0 kubenswrapper[7454]: W0319 11:53:50.462897 7454 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 19 11:53:50.464311 master-0 kubenswrapper[7454]: W0319 11:53:50.462901 7454 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 19 11:53:50.464311 master-0 kubenswrapper[7454]: W0319 11:53:50.462906 7454 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 19 11:53:50.464311 master-0 kubenswrapper[7454]: W0319 11:53:50.462910 7454 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 19 11:53:50.464311 master-0 kubenswrapper[7454]: W0319 11:53:50.462915 7454 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 19 11:53:50.464311 master-0 kubenswrapper[7454]: W0319 11:53:50.462950 7454 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 19 11:53:50.464311 master-0 kubenswrapper[7454]: W0319 11:53:50.462955 7454 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 19 11:53:50.464311 master-0 kubenswrapper[7454]: W0319 11:53:50.462959 7454 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 19 11:53:50.464311 master-0 kubenswrapper[7454]: W0319 11:53:50.462967 7454 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 19 11:53:50.464311 master-0 kubenswrapper[7454]: W0319 11:53:50.462972 7454 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 19 11:53:50.464311 master-0 kubenswrapper[7454]: W0319 11:53:50.462977 7454 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 19 11:53:50.464311 master-0 kubenswrapper[7454]: W0319 11:53:50.462981 7454 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 19 11:53:50.464311 master-0 kubenswrapper[7454]: W0319 11:53:50.462986 7454 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 19 11:53:50.464311 master-0 kubenswrapper[7454]: W0319 11:53:50.462990 7454 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 19 11:53:50.464311 master-0 kubenswrapper[7454]: W0319 11:53:50.462995 7454 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 19 11:53:50.464998 master-0 kubenswrapper[7454]: W0319 11:53:50.463000 7454 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 19 11:53:50.464998 master-0 kubenswrapper[7454]: W0319 11:53:50.463006 7454 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 19 11:53:50.464998 master-0 kubenswrapper[7454]: W0319 11:53:50.463011 7454 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 19 11:53:50.464998 master-0 kubenswrapper[7454]: I0319 11:53:50.463020 7454 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 19 11:53:50.464998 master-0 kubenswrapper[7454]: W0319 11:53:50.463165 7454 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 19 11:53:50.464998 master-0 kubenswrapper[7454]: W0319 11:53:50.463176 7454 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 19 11:53:50.464998 master-0 kubenswrapper[7454]: W0319 11:53:50.463182 7454 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 19 11:53:50.464998 master-0 kubenswrapper[7454]: W0319 11:53:50.463187 7454 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 19 11:53:50.464998 master-0 kubenswrapper[7454]: W0319 11:53:50.463192 7454 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 19 11:53:50.464998 master-0 kubenswrapper[7454]: W0319 11:53:50.463197 7454 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 19 11:53:50.464998 master-0 kubenswrapper[7454]: W0319 11:53:50.463201 7454 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 19 11:53:50.464998 master-0 kubenswrapper[7454]: W0319 11:53:50.463206 7454 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 19 11:53:50.464998 master-0 kubenswrapper[7454]: W0319 11:53:50.463211 7454 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 19 11:53:50.464998 master-0 kubenswrapper[7454]: W0319 11:53:50.463216 7454 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 19 11:53:50.464998 master-0 kubenswrapper[7454]: W0319 11:53:50.463221 7454 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 19 11:53:50.465563 master-0 kubenswrapper[7454]: W0319 11:53:50.463226 7454 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 19 11:53:50.465563 master-0 kubenswrapper[7454]: W0319 11:53:50.463231 7454 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 19 11:53:50.465563 master-0 kubenswrapper[7454]: W0319 11:53:50.463236 7454 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 19 11:53:50.465563 master-0 kubenswrapper[7454]: W0319 11:53:50.463242 7454 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 19 11:53:50.465563 master-0 kubenswrapper[7454]: W0319 11:53:50.463248 7454 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 19 11:53:50.465563 master-0 kubenswrapper[7454]: W0319 11:53:50.463256 7454 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 19 11:53:50.465563 master-0 kubenswrapper[7454]: W0319 11:53:50.463261 7454 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 19 11:53:50.465563 master-0 kubenswrapper[7454]: W0319 11:53:50.463266 7454 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 19 11:53:50.465563 master-0 kubenswrapper[7454]: W0319 11:53:50.463271 7454 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 19 11:53:50.465563 master-0 kubenswrapper[7454]: W0319 11:53:50.463276 7454 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 19 11:53:50.465563 master-0 kubenswrapper[7454]: W0319 11:53:50.463281 7454 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 19 11:53:50.465563 master-0 kubenswrapper[7454]: W0319 11:53:50.463286 7454 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 19 11:53:50.465563 master-0 kubenswrapper[7454]: W0319 11:53:50.463291 7454 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 19 11:53:50.465563 master-0 kubenswrapper[7454]: W0319 11:53:50.463296 7454 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 19 11:53:50.465563 master-0 kubenswrapper[7454]: W0319 11:53:50.463300 7454 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 19 11:53:50.465563 master-0 kubenswrapper[7454]: W0319 11:53:50.463306 7454 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 19 11:53:50.465563 master-0 kubenswrapper[7454]: W0319 11:53:50.463311 7454 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 19 11:53:50.465563 master-0 kubenswrapper[7454]: W0319 11:53:50.463315 7454 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 19 11:53:50.465563 master-0 kubenswrapper[7454]: W0319 11:53:50.463319 7454 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 19 11:53:50.465563 master-0 kubenswrapper[7454]: W0319 11:53:50.463324 7454 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 19 11:53:50.466352 master-0 kubenswrapper[7454]: W0319 11:53:50.463329 7454 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 19 11:53:50.466352 master-0 kubenswrapper[7454]: W0319 11:53:50.463333 7454 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 19 11:53:50.466352 master-0 kubenswrapper[7454]: W0319 11:53:50.463339 7454 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 19 11:53:50.466352 master-0 kubenswrapper[7454]: W0319 11:53:50.463344 7454 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 19 11:53:50.466352 master-0 kubenswrapper[7454]: W0319 11:53:50.463349 7454 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 19 11:53:50.466352 master-0 kubenswrapper[7454]: W0319 11:53:50.463353 7454 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 19 11:53:50.466352 master-0 kubenswrapper[7454]: W0319 11:53:50.463358 7454 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 19 11:53:50.466352 master-0 kubenswrapper[7454]: W0319 11:53:50.463363 7454 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 19 11:53:50.466352 master-0 kubenswrapper[7454]: W0319 11:53:50.463367 7454 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 19 11:53:50.466352 master-0 kubenswrapper[7454]: W0319 11:53:50.463372 7454 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 19 11:53:50.466352 master-0 kubenswrapper[7454]: W0319 11:53:50.463377 7454 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 19 11:53:50.466352 master-0 kubenswrapper[7454]: W0319 11:53:50.463381 7454 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 19 11:53:50.466352 master-0 kubenswrapper[7454]: W0319 11:53:50.463387 7454 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 19 11:53:50.466352 master-0 kubenswrapper[7454]: W0319 11:53:50.463392 7454 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 19 11:53:50.466352 master-0 kubenswrapper[7454]: W0319 11:53:50.463397 7454 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 19 11:53:50.466352 master-0 kubenswrapper[7454]: W0319 11:53:50.463402 7454 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 19 11:53:50.466352 master-0 kubenswrapper[7454]: W0319 11:53:50.463408 7454 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 19 11:53:50.466352 master-0 kubenswrapper[7454]: W0319 11:53:50.463413 7454 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 19 11:53:50.466352 master-0 kubenswrapper[7454]: W0319 11:53:50.463417 7454 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 19 11:53:50.470446 master-0 kubenswrapper[7454]: W0319 11:53:50.463422 7454 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 19 11:53:50.470446 master-0 kubenswrapper[7454]: W0319 11:53:50.463428 7454 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 19 11:53:50.470446 master-0 kubenswrapper[7454]: W0319 11:53:50.463432 7454 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 19 11:53:50.470446 master-0 kubenswrapper[7454]: W0319 11:53:50.463438 7454 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 19 11:53:50.470446 master-0 kubenswrapper[7454]: W0319 11:53:50.463443 7454 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 19 11:53:50.470446 master-0 kubenswrapper[7454]: W0319 11:53:50.463447 7454 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 19 11:53:50.470446 master-0 kubenswrapper[7454]: W0319 11:53:50.463453 7454 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 19 11:53:50.470446 master-0 kubenswrapper[7454]: W0319 11:53:50.463459 7454 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 19 11:53:50.470446 master-0 kubenswrapper[7454]: W0319 11:53:50.463464 7454 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 19 11:53:50.470446 master-0 kubenswrapper[7454]: W0319 11:53:50.463468 7454 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 19 11:53:50.470446 master-0 kubenswrapper[7454]: W0319 11:53:50.463473 7454 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 19 11:53:50.470446 master-0 kubenswrapper[7454]: W0319 11:53:50.463477 7454 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 19 11:53:50.470446 master-0 kubenswrapper[7454]: W0319 11:53:50.463482 7454 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 19 11:53:50.470446 master-0 kubenswrapper[7454]: W0319 11:53:50.463487 7454 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 19 11:53:50.470446 master-0 kubenswrapper[7454]: W0319 11:53:50.463492 7454 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 19 11:53:50.470446 master-0 kubenswrapper[7454]: W0319 11:53:50.463496 7454 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 19 11:53:50.470446 master-0 kubenswrapper[7454]: W0319 11:53:50.463501 7454 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 19 11:53:50.470446 master-0 kubenswrapper[7454]: W0319 11:53:50.463506 7454 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 19 11:53:50.470446 master-0 kubenswrapper[7454]: W0319 11:53:50.463511 7454 feature_gate.go:330] unrecognized feature gate: Example Mar 19 11:53:50.470446 master-0 kubenswrapper[7454]: W0319 11:53:50.463516 7454 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 19 11:53:50.471182 master-0 kubenswrapper[7454]: W0319 11:53:50.463521 7454 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 19 11:53:50.471182 master-0 kubenswrapper[7454]: W0319 11:53:50.463527 7454 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 19 11:53:50.471182 master-0 kubenswrapper[7454]: I0319 11:53:50.463537 7454 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 19 11:53:50.471182 master-0 kubenswrapper[7454]: I0319 11:53:50.463747 7454 server.go:940] "Client rotation is on, will bootstrap in background" Mar 19 11:53:50.476573 master-0 kubenswrapper[7454]: I0319 11:53:50.476031 7454 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Mar 19 11:53:50.476573 master-0 kubenswrapper[7454]: I0319 11:53:50.476210 7454 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
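The two feature_gate.go:386 records above are the kubelet's dump of its effective feature-gate map after merging defaults with configured overrides; the surrounding feature_gate.go:330 warnings are OpenShift-specific gates the upstream kubelet binary does not recognize, and the feature_gate.go:351/353 warnings flag deprecated and GA gates being set explicitly. A minimal sketch for auditing these dumps offline, assuming the journal excerpt has been saved to a plain-text file (the script and its names are illustrative, not part of the kubelet or any shipped tool):

    #!/usr/bin/env python3
    """Illustrative helper (not a kubelet component): list the gates from
    kubelet "feature gates: {map[...]}" journal records, one per line."""
    import re
    import sys

    # Payload logged by feature_gate.go:386, e.g.
    #   feature gates: {map[CloudDualStackNodeIPs:true NodeSwap:false ...]}
    RECORD = re.compile(r"feature gates: \{map\[([^\]]*)\]\}")
    PAIR = re.compile(r"(\w+):(true|false)")

    def gate_maps(lines):
        """Yield one {gate_name: bool} dict per matching record."""
        for line in lines:
            match = RECORD.search(line)
            if match:
                yield {name: value == "true"
                       for name, value in PAIR.findall(match.group(1))}

    if __name__ == "__main__":
        with open(sys.argv[1], encoding="utf-8", errors="replace") as log:
            for index, gates in enumerate(gate_maps(log), start=1):
                print(f"# dump {index}: {len(gates)} gates")
                for name in sorted(gates):
                    print(f"{name}={str(gates[name]).lower()}")

Run against this excerpt (for example: python3 list_gates.py kubelet.log, where list_gates.py is the hypothetical script above), it should report the same 17 gates twice, since the map is logged once at 11:53:50.463020 and again at 11:53:50.463537.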
Mar 19 11:53:50.479905 master-0 kubenswrapper[7454]: I0319 11:53:50.479853 7454 server.go:997] "Starting client certificate rotation"
Mar 19 11:53:50.479905 master-0 kubenswrapper[7454]: I0319 11:53:50.479877 7454 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 19 11:53:50.480728 master-0 kubenswrapper[7454]: I0319 11:53:50.480069 7454 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-20 11:43:21 +0000 UTC, rotation deadline is 2026-03-20 08:02:47.946520769 +0000 UTC
Mar 19 11:53:50.480784 master-0 kubenswrapper[7454]: I0319 11:53:50.480723 7454 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 20h8m57.465806129s for next certificate rotation
Mar 19 11:53:50.481911 master-0 kubenswrapper[7454]: I0319 11:53:50.481875 7454 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 19 11:53:50.484129 master-0 kubenswrapper[7454]: I0319 11:53:50.484078 7454 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 19 11:53:50.487829 master-0 kubenswrapper[7454]: I0319 11:53:50.487786 7454 log.go:25] "Validated CRI v1 runtime API"
Mar 19 11:53:50.491253 master-0 kubenswrapper[7454]: I0319 11:53:50.491220 7454 log.go:25] "Validated CRI v1 image API"
Mar 19 11:53:50.495904 master-0 kubenswrapper[7454]: I0319 11:53:50.495862 7454 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 19 11:53:50.503945 master-0 kubenswrapper[7454]: I0319 11:53:50.503881 7454 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 f39678f0-0749-4469-b061-899c5a9052e6:/dev/vda3]
Mar 19 11:53:50.505181 master-0 kubenswrapper[7454]: I0319 11:53:50.503932 7454 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/06387b0da219a3280f534fa3f57451c18534c297d33ad18d4503e53efc4f6f2f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/06387b0da219a3280f534fa3f57451c18534c297d33ad18d4503e53efc4f6f2f/userdata/shm major:0 minor:153 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/16106b77f2e7c1585811668327c4be2d10fe7576f2ed79d2b198fca95be86d2c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/16106b77f2e7c1585811668327c4be2d10fe7576f2ed79d2b198fca95be86d2c/userdata/shm major:0 minor:252 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1fc61313284938071ce89eb1211d46af435fab3cccf3e32e3b4afcbf9419655a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1fc61313284938071ce89eb1211d46af435fab3cccf3e32e3b4afcbf9419655a/userdata/shm major:0 minor:266 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/24de2a964d2fa28c5bff828df5f742d99916541dc1152f4dcdf6f4231784eba1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/24de2a964d2fa28c5bff828df5f742d99916541dc1152f4dcdf6f4231784eba1/userdata/shm major:0 minor:270 fsType:tmpfs blockSize:0}
/run/containers/storage/overlay-containers/2ca9e696adafe66b3ba3814f26ea9bb916ca5c1804785c0e742201ad82ee9c18/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2ca9e696adafe66b3ba3814f26ea9bb916ca5c1804785c0e742201ad82ee9c18/userdata/shm major:0 minor:274 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/37b898c3ae24210a5aa4f86ab00e075925f0f6e4fde94632405ba19b0f9e0d1d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/37b898c3ae24210a5aa4f86ab00e075925f0f6e4fde94632405ba19b0f9e0d1d/userdata/shm major:0 minor:254 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/48efbe72c10829dd5908b740a4651763088ff7358d327f0b015844979a99b5dd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/48efbe72c10829dd5908b740a4651763088ff7358d327f0b015844979a99b5dd/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5396ef64e03af5cd8fbb98838e00f4f08020d9b7b41c5ccef26950f1e41fec60/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5396ef64e03af5cd8fbb98838e00f4f08020d9b7b41c5ccef26950f1e41fec60/userdata/shm major:0 minor:142 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/58d1369a13582afcb1d55c539b0bf53f7dd57cd88bda12b4a87bc5a6b8e84cbc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/58d1369a13582afcb1d55c539b0bf53f7dd57cd88bda12b4a87bc5a6b8e84cbc/userdata/shm major:0 minor:260 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/63407ab3b9286932d0ec228766542c2a928958a160c225ce1f7624b3e7a02447/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/63407ab3b9286932d0ec228766542c2a928958a160c225ce1f7624b3e7a02447/userdata/shm major:0 minor:272 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/657e67ca992e83dd97b428ec2664479ed04815d8dada9aa63b0bd9e585d0e3d7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/657e67ca992e83dd97b428ec2664479ed04815d8dada9aa63b0bd9e585d0e3d7/userdata/shm major:0 minor:258 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/732ea3fa30562cbb548cbd63878cf98bf16844c3fd2ba6668a55873990319c2d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/732ea3fa30562cbb548cbd63878cf98bf16844c3fd2ba6668a55873990319c2d/userdata/shm major:0 minor:287 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7da5b8963c0c07bf615297cea6af913ce19795e600e076c4d580e948922fa865/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7da5b8963c0c07bf615297cea6af913ce19795e600e076c4d580e948922fa865/userdata/shm major:0 minor:256 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/842d46230cd4097ecd49786313f777a88243300f4db6d95963150d13dc2d40af/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/842d46230cd4097ecd49786313f777a88243300f4db6d95963150d13dc2d40af/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/84ed2f0d88ece07075010bba0c167b7f10255b8043408ff95f1958cee576a4a0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/84ed2f0d88ece07075010bba0c167b7f10255b8043408ff95f1958cee576a4a0/userdata/shm major:0 minor:264 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/89df1c468dcab6a092003dfdb9054af97d313c0e8ba73ec68b30b7001eab90ba/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/89df1c468dcab6a092003dfdb9054af97d313c0e8ba73ec68b30b7001eab90ba/userdata/shm major:0 minor:276 
fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a1783b50c5a08e2a42241bed3f2df9ef9e7315549e4393a5e98fdcdce6ecef6e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a1783b50c5a08e2a42241bed3f2df9ef9e7315549e4393a5e98fdcdce6ecef6e/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b08462654300221b81e734b82711f8871d4674a9fca01ad1cc20011ae2d1abfa/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b08462654300221b81e734b82711f8871d4674a9fca01ad1cc20011ae2d1abfa/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b4bf04cf4a874cdad02ad51f153ce323d3cc5a93749aa40aeabd5ac11d70f65e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b4bf04cf4a874cdad02ad51f153ce323d3cc5a93749aa40aeabd5ac11d70f65e/userdata/shm major:0 minor:119 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b9477b33d342b45771f3690cbbe221e1438e0d225ffd950edeb419c6de979401/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b9477b33d342b45771f3690cbbe221e1438e0d225ffd950edeb419c6de979401/userdata/shm major:0 minor:106 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cafdcda3b6318eaf62d6677cace1d3a0a0dbcf1889d817ce08bcc768e4b05288/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cafdcda3b6318eaf62d6677cace1d3a0a0dbcf1889d817ce08bcc768e4b05288/userdata/shm major:0 minor:268 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d6af7e6099bbf70f75032e24a3ecb1fbb8bf546e42f6f7c74e6f9a42396249e8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d6af7e6099bbf70f75032e24a3ecb1fbb8bf546e42f6f7c74e6f9a42396249e8/userdata/shm major:0 minor:261 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/de69f46fe324ea455cfa701ce77df2ff17c8f9f38f189dbc84ded004836d5af0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/de69f46fe324ea455cfa701ce77df2ff17c8f9f38f189dbc84ded004836d5af0/userdata/shm major:0 minor:109 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e6ef8104a726a85f4fa80186a64ea3c00a2cbb1be2c668fb9e94709c10d980c0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e6ef8104a726a85f4fa80186a64ea3c00a2cbb1be2c668fb9e94709c10d980c0/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e95d63ff24c648e1680e38f824e92cec0fcea7bc20cdade312b98b0468aad916/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e95d63ff24c648e1680e38f824e92cec0fcea7bc20cdade312b98b0468aad916/userdata/shm major:0 minor:46 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fcd57352498da84e6fbc9969ab5176b5b32433301a69ada5c5c0571371a536da/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fcd57352498da84e6fbc9969ab5176b5b32433301a69ada5c5c0571371a536da/userdata/shm major:0 minor:368 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/06df1b1b-154e-46f9-aee0-79a137c6c928/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/06df1b1b-154e-46f9-aee0-79a137c6c928/volumes/kubernetes.io~projected/kube-api-access major:0 minor:233 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/06df1b1b-154e-46f9-aee0-79a137c6c928/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/06df1b1b-154e-46f9-aee0-79a137c6c928/volumes/kubernetes.io~secret/serving-cert 
major:0 minor:213 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/06f67c28-34fd-4356-92f0-edd0986ad34e/volumes/kubernetes.io~projected/kube-api-access-bdpj4:{mountpoint:/var/lib/kubelet/pods/06f67c28-34fd-4356-92f0-edd0986ad34e/volumes/kubernetes.io~projected/kube-api-access-bdpj4 major:0 minor:278 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0f97d998-530c-4d9d-a030-ca1d9d2d4490/volumes/kubernetes.io~projected/kube-api-access-zntzt:{mountpoint:/var/lib/kubelet/pods/0f97d998-530c-4d9d-a030-ca1d9d2d4490/volumes/kubernetes.io~projected/kube-api-access-zntzt major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0f97d998-530c-4d9d-a030-ca1d9d2d4490/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/0f97d998-530c-4d9d-a030-ca1d9d2d4490/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1089ea24-add9-482e-9276-e6ded12052d7/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/1089ea24-add9-482e-9276-e6ded12052d7/volumes/kubernetes.io~projected/kube-api-access major:0 minor:236 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1089ea24-add9-482e-9276-e6ded12052d7/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1089ea24-add9-482e-9276-e6ded12052d7/volumes/kubernetes.io~secret/serving-cert major:0 minor:216 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/19de6601-10d4-4112-a21f-0398d2b160d1/volumes/kubernetes.io~projected/kube-api-access-6xpc2:{mountpoint:/var/lib/kubelet/pods/19de6601-10d4-4112-a21f-0398d2b160d1/volumes/kubernetes.io~projected/kube-api-access-6xpc2 major:0 minor:249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2151eb84-177e-459c-be71-f48465323ac2/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/2151eb84-177e-459c-be71-f48465323ac2/volumes/kubernetes.io~projected/kube-api-access major:0 minor:234 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2151eb84-177e-459c-be71-f48465323ac2/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/2151eb84-177e-459c-be71-f48465323ac2/volumes/kubernetes.io~secret/serving-cert major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/284768b8-9d70-4cf7-bace-8adc6b587186/volumes/kubernetes.io~projected/kube-api-access-8p6vn:{mountpoint:/var/lib/kubelet/pods/284768b8-9d70-4cf7-bace-8adc6b587186/volumes/kubernetes.io~projected/kube-api-access-8p6vn major:0 minor:104 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/284768b8-9d70-4cf7-bace-8adc6b587186/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/284768b8-9d70-4cf7-bace-8adc6b587186/volumes/kubernetes.io~secret/metrics-tls major:0 minor:98 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/398bcaca-1bea-4633-a78f-717e3d015ddd/volumes/kubernetes.io~projected/kube-api-access-fhqhb:{mountpoint:/var/lib/kubelet/pods/398bcaca-1bea-4633-a78f-717e3d015ddd/volumes/kubernetes.io~projected/kube-api-access-fhqhb major:0 minor:123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/63c12a89-1b49-4eba-8f5a-551b10d2246b/volumes/kubernetes.io~projected/kube-api-access-bst2w:{mountpoint:/var/lib/kubelet/pods/63c12a89-1b49-4eba-8f5a-551b10d2246b/volumes/kubernetes.io~projected/kube-api-access-bst2w major:0 minor:238 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/661b8957-a890-4032-9e57-45e2e0b35249/volumes/kubernetes.io~projected/kube-api-access-8hq8f:{mountpoint:/var/lib/kubelet/pods/661b8957-a890-4032-9e57-45e2e0b35249/volumes/kubernetes.io~projected/kube-api-access-8hq8f major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/661b8957-a890-4032-9e57-45e2e0b35249/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/661b8957-a890-4032-9e57-45e2e0b35249/volumes/kubernetes.io~secret/serving-cert major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7044a7b3-4fac-40af-a31c-054a1a1db26b/volumes/kubernetes.io~projected/kube-api-access-shfs6:{mountpoint:/var/lib/kubelet/pods/7044a7b3-4fac-40af-a31c-054a1a1db26b/volumes/kubernetes.io~projected/kube-api-access-shfs6 major:0 minor:105 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7241bf11-192e-47db-9d80-2324938ed34c/volumes/kubernetes.io~projected/kube-api-access-s5mkm:{mountpoint:/var/lib/kubelet/pods/7241bf11-192e-47db-9d80-2324938ed34c/volumes/kubernetes.io~projected/kube-api-access-s5mkm major:0 minor:231 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/806a4c30-7b93-4430-86da-f9e1f4f2d206/volumes/kubernetes.io~projected/kube-api-access-dfl29:{mountpoint:/var/lib/kubelet/pods/806a4c30-7b93-4430-86da-f9e1f4f2d206/volumes/kubernetes.io~projected/kube-api-access-dfl29 major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/82b98dca-59f9-42be-94ca-4a2a2b6fea0f/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/82b98dca-59f9-42be-94ca-4a2a2b6fea0f/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/82b98dca-59f9-42be-94ca-4a2a2b6fea0f/volumes/kubernetes.io~projected/kube-api-access-c5bmd:{mountpoint:/var/lib/kubelet/pods/82b98dca-59f9-42be-94ca-4a2a2b6fea0f/volumes/kubernetes.io~projected/kube-api-access-c5bmd major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8414b6b0-ee16-47a5-982b-ee58b136cfcf/volumes/kubernetes.io~projected/kube-api-access-864rg:{mountpoint:/var/lib/kubelet/pods/8414b6b0-ee16-47a5-982b-ee58b136cfcf/volumes/kubernetes.io~projected/kube-api-access-864rg major:0 minor:139 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8414b6b0-ee16-47a5-982b-ee58b136cfcf/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/8414b6b0-ee16-47a5-982b-ee58b136cfcf/volumes/kubernetes.io~secret/webhook-cert major:0 minor:138 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/85912908-c447-4868-871b-82c5eadbfdbe/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/85912908-c447-4868-871b-82c5eadbfdbe/volumes/kubernetes.io~projected/kube-api-access major:0 minor:102 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04/volumes/kubernetes.io~projected/kube-api-access-hwfg5:{mountpoint:/var/lib/kubelet/pods/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04/volumes/kubernetes.io~projected/kube-api-access-hwfg5 major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/944eac68-e72b-4aed-b5dc-d7d9703178a3/volumes/kubernetes.io~projected/kube-api-access-m2mdn:{mountpoint:/var/lib/kubelet/pods/944eac68-e72b-4aed-b5dc-d7d9703178a3/volumes/kubernetes.io~projected/kube-api-access-m2mdn major:0 minor:318 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9702fc8c-4fe0-413b-b2d4-db23021d42b8/volumes/kubernetes.io~projected/kube-api-access-tpdts:{mountpoint:/var/lib/kubelet/pods/9702fc8c-4fe0-413b-b2d4-db23021d42b8/volumes/kubernetes.io~projected/kube-api-access-tpdts 
major:0 minor:251 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9702fc8c-4fe0-413b-b2d4-db23021d42b8/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/9702fc8c-4fe0-413b-b2d4-db23021d42b8/volumes/kubernetes.io~secret/etcd-client major:0 minor:223 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9702fc8c-4fe0-413b-b2d4-db23021d42b8/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/9702fc8c-4fe0-413b-b2d4-db23021d42b8/volumes/kubernetes.io~secret/serving-cert major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d2db220-4d5b-4819-a910-b186e1e9fb3e/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/9d2db220-4d5b-4819-a910-b186e1e9fb3e/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d2db220-4d5b-4819-a910-b186e1e9fb3e/volumes/kubernetes.io~projected/kube-api-access-wshb2:{mountpoint:/var/lib/kubelet/pods/9d2db220-4d5b-4819-a910-b186e1e9fb3e/volumes/kubernetes.io~projected/kube-api-access-wshb2 major:0 minor:152 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d2db220-4d5b-4819-a910-b186e1e9fb3e/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/9d2db220-4d5b-4819-a910-b186e1e9fb3e/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:129 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9ed2dbd1-aec4-4009-917a-933533912ab5/volumes/kubernetes.io~projected/kube-api-access-gsk9d:{mountpoint:/var/lib/kubelet/pods/9ed2dbd1-aec4-4009-917a-933533912ab5/volumes/kubernetes.io~projected/kube-api-access-gsk9d major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9ed2dbd1-aec4-4009-917a-933533912ab5/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/9ed2dbd1-aec4-4009-917a-933533912ab5/volumes/kubernetes.io~secret/serving-cert major:0 minor:224 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a7747954-a222-4809-8656-818203b55ee8/volumes/kubernetes.io~projected/kube-api-access-khv2z:{mountpoint:/var/lib/kubelet/pods/a7747954-a222-4809-8656-818203b55ee8/volumes/kubernetes.io~projected/kube-api-access-khv2z major:0 minor:225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab54833d-e57b-479d-b171-68155f6566f1/volumes/kubernetes.io~projected/kube-api-access-gl6d7:{mountpoint:/var/lib/kubelet/pods/ab54833d-e57b-479d-b171-68155f6566f1/volumes/kubernetes.io~projected/kube-api-access-gl6d7 major:0 minor:232 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/aef8e03f-0363-4e13-b7ca-4fa871d77c62/volumes/kubernetes.io~projected/kube-api-access-x252z:{mountpoint:/var/lib/kubelet/pods/aef8e03f-0363-4e13-b7ca-4fa871d77c62/volumes/kubernetes.io~projected/kube-api-access-x252z major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/aef8e03f-0363-4e13-b7ca-4fa871d77c62/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/aef8e03f-0363-4e13-b7ca-4fa871d77c62/volumes/kubernetes.io~secret/serving-cert major:0 minor:222 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b0f5939c-48b1-4d6c-9712-9128a78d603b/volumes/kubernetes.io~projected/kube-api-access-6tqdb:{mountpoint:/var/lib/kubelet/pods/b0f5939c-48b1-4d6c-9712-9128a78d603b/volumes/kubernetes.io~projected/kube-api-access-6tqdb major:0 minor:237 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b80027fd-7b39-477a-a337-ff9bb08e7eeb/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/b80027fd-7b39-477a-a337-ff9bb08e7eeb/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:228 
fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b80027fd-7b39-477a-a337-ff9bb08e7eeb/volumes/kubernetes.io~projected/kube-api-access-hs4jf:{mountpoint:/var/lib/kubelet/pods/b80027fd-7b39-477a-a337-ff9bb08e7eeb/volumes/kubernetes.io~projected/kube-api-access-hs4jf major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6/volumes/kubernetes.io~projected/kube-api-access-jnd9c:{mountpoint:/var/lib/kubelet/pods/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6/volumes/kubernetes.io~projected/kube-api-access-jnd9c major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/beb562de-402b-4d9f-b5ed-090b60847a95/volumes/kubernetes.io~projected/kube-api-access-9mr6d:{mountpoint:/var/lib/kubelet/pods/beb562de-402b-4d9f-b5ed-090b60847a95/volumes/kubernetes.io~projected/kube-api-access-9mr6d major:0 minor:229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bf226d89-450d-4876-a113-345632b94ee9/volumes/kubernetes.io~projected/kube-api-access-wcxqj:{mountpoint:/var/lib/kubelet/pods/bf226d89-450d-4876-a113-345632b94ee9/volumes/kubernetes.io~projected/kube-api-access-wcxqj major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bf226d89-450d-4876-a113-345632b94ee9/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/bf226d89-450d-4876-a113-345632b94ee9/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c2dbd8b3-0e02-4747-a166-80aa6a94b060/volumes/kubernetes.io~projected/kube-api-access-npc2t:{mountpoint:/var/lib/kubelet/pods/c2dbd8b3-0e02-4747-a166-80aa6a94b060/volumes/kubernetes.io~projected/kube-api-access-npc2t major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c2dbd8b3-0e02-4747-a166-80aa6a94b060/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/c2dbd8b3-0e02-4747-a166-80aa6a94b060/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d3017b5e-178e-49de-89d2-817a18398203/volumes/kubernetes.io~projected/kube-api-access-b6wm6:{mountpoint:/var/lib/kubelet/pods/d3017b5e-178e-49de-89d2-817a18398203/volumes/kubernetes.io~projected/kube-api-access-b6wm6 major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d3017b5e-178e-49de-89d2-817a18398203/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/d3017b5e-178e-49de-89d2-817a18398203/volumes/kubernetes.io~secret/serving-cert major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d3541cbe-3be0-40d3-89d2-b5937b6a8f47/volumes/kubernetes.io~projected/kube-api-access-pv6bc:{mountpoint:/var/lib/kubelet/pods/d3541cbe-3be0-40d3-89d2-b5937b6a8f47/volumes/kubernetes.io~projected/kube-api-access-pv6bc major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d975e831-7348-41b9-9622-f4a503674c38/volumes/kubernetes.io~projected/kube-api-access-86r6z:{mountpoint:/var/lib/kubelet/pods/d975e831-7348-41b9-9622-f4a503674c38/volumes/kubernetes.io~projected/kube-api-access-86r6z major:0 minor:336 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f/volumes/kubernetes.io~projected/kube-api-access-h5n89:{mountpoint:/var/lib/kubelet/pods/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f/volumes/kubernetes.io~projected/kube-api-access-h5n89 major:0 minor:226 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f/volumes/kubernetes.io~secret/serving-cert major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f08c5930-44f0-48e4-80dd-2563f2733b2f/volumes/kubernetes.io~projected/kube-api-access-h84l9:{mountpoint:/var/lib/kubelet/pods/f08c5930-44f0-48e4-80dd-2563f2733b2f/volumes/kubernetes.io~projected/kube-api-access-h84l9 major:0 minor:230 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f08c5930-44f0-48e4-80dd-2563f2733b2f/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f08c5930-44f0-48e4-80dd-2563f2733b2f/volumes/kubernetes.io~secret/serving-cert major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fe245927-c937-4ec7-ab83-4900bade72cf/volumes/kubernetes.io~projected/kube-api-access-s4hsp:{mountpoint:/var/lib/kubelet/pods/fe245927-c937-4ec7-ab83-4900bade72cf/volumes/kubernetes.io~projected/kube-api-access-s4hsp major:0 minor:103 fsType:tmpfs blockSize:0} overlay_0-108:{mountpoint:/var/lib/containers/storage/overlay/ecf692be3b78290dcdf4c82e2eb5e2ed7c6e331ee23889990fe4ca7a85f983a0/merged major:0 minor:108 fsType:overlay blockSize:0} overlay_0-111:{mountpoint:/var/lib/containers/storage/overlay/e220523a2d53fd495bcbb7a62de408ad62cb4e62a31cd38c77272b0b8a1a140d/merged major:0 minor:111 fsType:overlay blockSize:0} overlay_0-113:{mountpoint:/var/lib/containers/storage/overlay/f08334c321674bc7cf31d2f63a28df2dd0d8706adad27d2837d1050107a20680/merged major:0 minor:113 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/c9d27d37f7250150bb839a44efdef83fd3fe90ecf8b77edae7070b1c4c09b61c/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-126:{mountpoint:/var/lib/containers/storage/overlay/892bc13c78e401901d8ef365c496647bded5b98498dcbbd68b084ac315b52874/merged major:0 minor:126 fsType:overlay blockSize:0} overlay_0-132:{mountpoint:/var/lib/containers/storage/overlay/1a0443ac3276617f024da16855b77b2d50a065b295368280197bc91740653702/merged major:0 minor:132 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/30111bbfd50c9149bc58099f2617fed8c2cbebd6170279ec78105f34c281f5da/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-140:{mountpoint:/var/lib/containers/storage/overlay/18402c8b507950c24090f28fae386548feb6559d374343c512e10be10a6a1fc3/merged major:0 minor:140 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/dd6946b2ec2fbda7561c7fba2f9e0ce23c4fa24b048da3b3bb0615eec482a321/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/5bc60cd9a71e9d21884538e57090caec022c18bda07b5feadc6c83503b5004aa/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/16e470c26a576e79333622a0a04cff0fe4e3237592b1b84794db47ef3b34c213/merged major:0 minor:156 fsType:overlay blockSize:0} overlay_0-158:{mountpoint:/var/lib/containers/storage/overlay/e369820c3c6bced7b0c2390586c01775de4ef13460b57139d27feb01197f884a/merged major:0 minor:158 fsType:overlay blockSize:0} overlay_0-161:{mountpoint:/var/lib/containers/storage/overlay/c126e75cc76d7741fa1c10d4164db1fb89575b0051e443d0ba6a21994cc61ea1/merged major:0 minor:161 fsType:overlay blockSize:0} overlay_0-167:{mountpoint:/var/lib/containers/storage/overlay/edc082dfa2d788394fabf6bf6b43dd6e4c61ea81bf2c7f104b917488d65de141/merged 
major:0 minor:167 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/07590a31a39b68d54b622df0025917b371f9f8791471e669656122714f90bf01/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/cabc7d191441052a51f0a03367865ea04dd8d019563e22eb22ad697b824549b1/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/39c3ff17de2c5500f0e3d32a115f2f13c97fc8e7d4be67721009d8ecf2df78ea/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/4115b27a698a7323a28c415de21d18f4d81d86eb323bcc4225550c78d95319d4/merged major:0 minor:189 fsType:overlay blockSize:0} overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/d027220cf8dd405aa9efa9fd32991ae77175a41244240dc10b49fc6790f3e225/merged major:0 minor:194 fsType:overlay blockSize:0} overlay_0-199:{mountpoint:/var/lib/containers/storage/overlay/6e9e87da54ea6e2b54f28a020cb3bf87ea2dbfc11c7c966d6483631db751f52e/merged major:0 minor:199 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/792ab5476b2326d07eed54e40f3482cddbac14ff0e5c8f8aa418707be175a286/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-279:{mountpoint:/var/lib/containers/storage/overlay/ba2af495bce4842a7131a6af8de4564f2f191e25cadab2ba84b3d494ea83d702/merged major:0 minor:279 fsType:overlay blockSize:0} overlay_0-281:{mountpoint:/var/lib/containers/storage/overlay/507de9fa5e57dd4db66c193bca13709ce214418fe56eba86f35217b6174489b1/merged major:0 minor:281 fsType:overlay blockSize:0} overlay_0-283:{mountpoint:/var/lib/containers/storage/overlay/76c433a023f5caf6b6825a4198b8a972b04260c82c10e943e3ad850d540a6768/merged major:0 minor:283 fsType:overlay blockSize:0} overlay_0-285:{mountpoint:/var/lib/containers/storage/overlay/8af32abfcee24b35421c2da474a6ce0f227b98d05347260f5506ec966f4252d1/merged major:0 minor:285 fsType:overlay blockSize:0} overlay_0-288:{mountpoint:/var/lib/containers/storage/overlay/6e3bfe7bc51c866bc6402550199d82db681e19b7bc85d552da6e35d3ab3060a5/merged major:0 minor:288 fsType:overlay blockSize:0} overlay_0-291:{mountpoint:/var/lib/containers/storage/overlay/a5feaaca37e3604d80fdef2bc2eb10e33c23809313321f5da391124020e520c5/merged major:0 minor:291 fsType:overlay blockSize:0} overlay_0-293:{mountpoint:/var/lib/containers/storage/overlay/d579692b80bea7cabc02437fdcb0f5b4990eb5f4892adf3c2562b6882f8815c2/merged major:0 minor:293 fsType:overlay blockSize:0} overlay_0-295:{mountpoint:/var/lib/containers/storage/overlay/244f0ddd816c23163b6832046b4023b0b351c0a88a0a6a77d7efe809b4abc347/merged major:0 minor:295 fsType:overlay blockSize:0} overlay_0-297:{mountpoint:/var/lib/containers/storage/overlay/2cd4b4c692480765f89f325e56df5214c4f04e433dd202c8c9eb0b9f63623ad3/merged major:0 minor:297 fsType:overlay blockSize:0} overlay_0-299:{mountpoint:/var/lib/containers/storage/overlay/9f4de040ba9606cb0559db644192fbbd550e0299245315f6f6b14943402764ff/merged major:0 minor:299 fsType:overlay blockSize:0} overlay_0-301:{mountpoint:/var/lib/containers/storage/overlay/feb4d03c8d2600bf10ded9cefe7585cdb8dab377aa8d31248b8d46d8081f66c5/merged major:0 minor:301 fsType:overlay blockSize:0} overlay_0-303:{mountpoint:/var/lib/containers/storage/overlay/1b0a243e6fd60ad62daaba3bdbc5798534f082c959bad5db7d2d12a93f03ea57/merged major:0 minor:303 fsType:overlay blockSize:0} 
overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/9a5450203615ac3984920733127f8fe769ef97b9de1d708f94db96e802c27e46/merged major:0 minor:305 fsType:overlay blockSize:0} overlay_0-307:{mountpoint:/var/lib/containers/storage/overlay/00d2c3dbb87af8b1a65d795cfa4cc2b317f2fa2f3b3ce65ebb35364e2df8fb83/merged major:0 minor:307 fsType:overlay blockSize:0} overlay_0-309:{mountpoint:/var/lib/containers/storage/overlay/edf11c44c47fef2e84ea2bdee603223b55f3653ab26fa0e9593f233c29cc1d68/merged major:0 minor:309 fsType:overlay blockSize:0} overlay_0-319:{mountpoint:/var/lib/containers/storage/overlay/a0577ac9d229c7ed1ca47f0ac9c096e4a965a2baf18097781d40e234a2a44b53/merged major:0 minor:319 fsType:overlay blockSize:0} overlay_0-321:{mountpoint:/var/lib/containers/storage/overlay/398151f25079a7ddf10381f7a791ffe3f0e6e288dc478498232370462b66ab8b/merged major:0 minor:321 fsType:overlay blockSize:0} overlay_0-323:{mountpoint:/var/lib/containers/storage/overlay/fa02cdcbb50eb593f33b59ae99173b3327c3eece0c6493cbfa5aec20e269a147/merged major:0 minor:323 fsType:overlay blockSize:0} overlay_0-325:{mountpoint:/var/lib/containers/storage/overlay/9dc7b5df10e2479e8d609a2b0a603f47994234e1da0081b7596acc3021944184/merged major:0 minor:325 fsType:overlay blockSize:0} overlay_0-327:{mountpoint:/var/lib/containers/storage/overlay/5f66b1d84d33bfc3bd2d85f75fd60925148d51a3f1956e813ed1e27399fd27c7/merged major:0 minor:327 fsType:overlay blockSize:0} overlay_0-330:{mountpoint:/var/lib/containers/storage/overlay/0f74c12cfdc946851c205f3885ba3fd7df72aca1d73f19b1d5c34589a7caf20e/merged major:0 minor:330 fsType:overlay blockSize:0} overlay_0-332:{mountpoint:/var/lib/containers/storage/overlay/45c27542a633a8eec32e7f9f415c9e71243702efe10f85574af4aeb98538a47b/merged major:0 minor:332 fsType:overlay blockSize:0} overlay_0-334:{mountpoint:/var/lib/containers/storage/overlay/e7fd96321a65b5a70ce06989dbc8d0fe19cf18dd77882e2d3f0cc4fd66ba5b7d/merged major:0 minor:334 fsType:overlay blockSize:0} overlay_0-340:{mountpoint:/var/lib/containers/storage/overlay/1a534c64e69fa2ffec334b2673b9d99c2a331197920b0c9d7b4320a44efe6458/merged major:0 minor:340 fsType:overlay blockSize:0} overlay_0-43:{mountpoint:/var/lib/containers/storage/overlay/9a1402447cca1c8f29baf7bc9c228342345f18b2670037d4671e68f154462f35/merged major:0 minor:43 fsType:overlay blockSize:0} overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/867e4e79ba92b7144bb0cb5f94e726f7c0c91eed43551efb1e9233d55bbfaec3/merged major:0 minor:44 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/c733271b9faa9e0fd3c2854f565a9c1f864dd03d42758d5c25995acb22670553/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/02337080bb73d89e8daab7735421a099bc5a18d621f43f04da960bcc428a8498/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/2fd1193e7138c396bab2e4cf5fbf6330b269c2f199c83abd3261ae069f7096f5/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/92e241d2faca5df66e41e22eee11cbdcc201f35c9fccc18762d4a301906d827c/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/6db37860844b3ea5955f267d99d15413cf75ea6a1382b516a428235ccfa39d1a/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/2169c3ec1433fb3f18b4a0d628a17820c026531bdc4b154b15cbb808bb4f2d32/merged major:0 
minor:64 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/6a5f093033aa10b879d370ee52d4fb89a861b5156a6c277ebaaab2a137c7b197/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-68:{mountpoint:/var/lib/containers/storage/overlay/305dfb9d4ac14f45709060c5fdceb05587d98d47526b9f832bc822692139ba40/merged major:0 minor:68 fsType:overlay blockSize:0} overlay_0-77:{mountpoint:/var/lib/containers/storage/overlay/f45758297f7ef41452c4ba2bc18be50cff151ab2d93e66e2897aee60ddf75980/merged major:0 minor:77 fsType:overlay blockSize:0} overlay_0-81:{mountpoint:/var/lib/containers/storage/overlay/e98ea31524a0dbb6b2d70962fa4bbf512f5de314eb570878d1b95fdef84883a3/merged major:0 minor:81 fsType:overlay blockSize:0} overlay_0-89:{mountpoint:/var/lib/containers/storage/overlay/332c6482be7fd9fecfe0fe5f16c344c4ba0448bded54ce6404abaff5f3ae1c1f/merged major:0 minor:89 fsType:overlay blockSize:0}] Mar 19 11:53:50.549484 master-0 kubenswrapper[7454]: I0319 11:53:50.548569 7454 manager.go:217] Machine: {Timestamp:2026-03-19 11:53:50.546849542 +0000 UTC m=+0.177315485 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:42c922df40e540ac85bfc55dec643ba0 SystemUUID:42c922df-40e5-40ac-85bf-c55dec643ba0 BootID:56867831-7a09-49d8-8c88-5a315bbf793a Filesystems:[{Device:/run/containers/storage/overlay-containers/48efbe72c10829dd5908b740a4651763088ff7358d327f0b015844979a99b5dd/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/8414b6b0-ee16-47a5-982b-ee58b136cfcf/volumes/kubernetes.io~projected/kube-api-access-864rg DeviceMajor:0 DeviceMinor:139 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/842d46230cd4097ecd49786313f777a88243300f4db6d95963150d13dc2d40af/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-126 DeviceMajor:0 DeviceMinor:126 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/657e67ca992e83dd97b428ec2664479ed04815d8dada9aa63b0bd9e585d0e3d7/userdata/shm DeviceMajor:0 DeviceMinor:258 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-279 DeviceMajor:0 DeviceMinor:279 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/732ea3fa30562cbb548cbd63878cf98bf16844c3fd2ba6668a55873990319c2d/userdata/shm DeviceMajor:0 DeviceMinor:287 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/fe245927-c937-4ec7-ab83-4900bade72cf/volumes/kubernetes.io~projected/kube-api-access-s4hsp DeviceMajor:0 DeviceMinor:103 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/398bcaca-1bea-4633-a78f-717e3d015ddd/volumes/kubernetes.io~projected/kube-api-access-fhqhb DeviceMajor:0 DeviceMinor:123 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-161 DeviceMajor:0 
DeviceMinor:161 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1089ea24-add9-482e-9276-e6ded12052d7/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:216 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/63c12a89-1b49-4eba-8f5a-551b10d2246b/volumes/kubernetes.io~projected/kube-api-access-bst2w DeviceMajor:0 DeviceMinor:238 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-81 DeviceMajor:0 DeviceMinor:81 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7044a7b3-4fac-40af-a31c-054a1a1db26b/volumes/kubernetes.io~projected/kube-api-access-shfs6 DeviceMajor:0 DeviceMinor:105 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9d2db220-4d5b-4819-a910-b186e1e9fb3e/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:129 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-167 DeviceMajor:0 DeviceMinor:167 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f08c5930-44f0-48e4-80dd-2563f2733b2f/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:218 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/2151eb84-177e-459c-be71-f48465323ac2/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:219 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-293 DeviceMajor:0 DeviceMinor:293 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/19de6601-10d4-4112-a21f-0398d2b160d1/volumes/kubernetes.io~projected/kube-api-access-6xpc2 DeviceMajor:0 DeviceMinor:249 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/37b898c3ae24210a5aa4f86ab00e075925f0f6e4fde94632405ba19b0f9e0d1d/userdata/shm DeviceMajor:0 DeviceMinor:254 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1fc61313284938071ce89eb1211d46af435fab3cccf3e32e3b4afcbf9419655a/userdata/shm DeviceMajor:0 DeviceMinor:266 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/8414b6b0-ee16-47a5-982b-ee58b136cfcf/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:138 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/0f97d998-530c-4d9d-a030-ca1d9d2d4490/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert DeviceMajor:0 DeviceMinor:217 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/9ed2dbd1-aec4-4009-917a-933533912ab5/volumes/kubernetes.io~projected/kube-api-access-gsk9d DeviceMajor:0 DeviceMinor:240 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-281 DeviceMajor:0 DeviceMinor:281 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bf226d89-450d-4876-a113-345632b94ee9/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-140 DeviceMajor:0 
DeviceMinor:140 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d3017b5e-178e-49de-89d2-817a18398203/volumes/kubernetes.io~projected/kube-api-access-b6wm6 DeviceMajor:0 DeviceMinor:245 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b08462654300221b81e734b82711f8871d4674a9fca01ad1cc20011ae2d1abfa/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/9ed2dbd1-aec4-4009-917a-933533912ab5/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:224 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/c2dbd8b3-0e02-4747-a166-80aa6a94b060/volumes/kubernetes.io~projected/kube-api-access-npc2t DeviceMajor:0 DeviceMinor:243 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/82b98dca-59f9-42be-94ca-4a2a2b6fea0f/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:244 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/84ed2f0d88ece07075010bba0c167b7f10255b8043408ff95f1958cee576a4a0/userdata/shm DeviceMajor:0 DeviceMinor:264 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/24de2a964d2fa28c5bff828df5f742d99916541dc1152f4dcdf6f4231784eba1/userdata/shm DeviceMajor:0 DeviceMinor:270 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-330 DeviceMajor:0 DeviceMinor:330 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/944eac68-e72b-4aed-b5dc-d7d9703178a3/volumes/kubernetes.io~projected/kube-api-access-m2mdn DeviceMajor:0 DeviceMinor:318 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/ab54833d-e57b-479d-b171-68155f6566f1/volumes/kubernetes.io~projected/kube-api-access-gl6d7 DeviceMajor:0 DeviceMinor:232 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/82b98dca-59f9-42be-94ca-4a2a2b6fea0f/volumes/kubernetes.io~projected/kube-api-access-c5bmd DeviceMajor:0 DeviceMinor:248 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-299 DeviceMajor:0 DeviceMinor:299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-323 DeviceMajor:0 DeviceMinor:323 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f/volumes/kubernetes.io~projected/kube-api-access-h5n89 DeviceMajor:0 DeviceMinor:226 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/d3017b5e-178e-49de-89d2-817a18398203/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:220 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/b0f5939c-48b1-4d6c-9712-9128a78d603b/volumes/kubernetes.io~projected/kube-api-access-6tqdb DeviceMajor:0 DeviceMinor:237 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/58d1369a13582afcb1d55c539b0bf53f7dd57cd88bda12b4a87bc5a6b8e84cbc/userdata/shm DeviceMajor:0 DeviceMinor:260 
Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-309 DeviceMajor:0 DeviceMinor:309 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-325 DeviceMajor:0 DeviceMinor:325 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-199 DeviceMajor:0 DeviceMinor:199 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9d2db220-4d5b-4819-a910-b186e1e9fb3e/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/9702fc8c-4fe0-413b-b2d4-db23021d42b8/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:221 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/beb562de-402b-4d9f-b5ed-090b60847a95/volumes/kubernetes.io~projected/kube-api-access-9mr6d DeviceMajor:0 DeviceMinor:229 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/2151eb84-177e-459c-be71-f48465323ac2/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:234 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-43 DeviceMajor:0 DeviceMinor:43 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-158 DeviceMajor:0 DeviceMinor:158 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b80027fd-7b39-477a-a337-ff9bb08e7eeb/volumes/kubernetes.io~projected/kube-api-access-hs4jf DeviceMajor:0 DeviceMinor:242 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7da5b8963c0c07bf615297cea6af913ce19795e600e076c4d580e948922fa865/userdata/shm DeviceMajor:0 DeviceMinor:256 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-301 DeviceMajor:0 DeviceMinor:301 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fcd57352498da84e6fbc9969ab5176b5b32433301a69ada5c5c0571371a536da/userdata/shm DeviceMajor:0 DeviceMinor:368 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/806a4c30-7b93-4430-86da-f9e1f4f2d206/volumes/kubernetes.io~projected/kube-api-access-dfl29 DeviceMajor:0 DeviceMinor:246 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/16106b77f2e7c1585811668327c4be2d10fe7576f2ed79d2b198fca95be86d2c/userdata/shm DeviceMajor:0 DeviceMinor:252 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-111 DeviceMajor:0 DeviceMinor:111 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7241bf11-192e-47db-9d80-2324938ed34c/volumes/kubernetes.io~projected/kube-api-access-s5mkm DeviceMajor:0 DeviceMinor:231 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/d3541cbe-3be0-40d3-89d2-b5937b6a8f47/volumes/kubernetes.io~projected/kube-api-access-pv6bc DeviceMajor:0 DeviceMinor:239 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/cafdcda3b6318eaf62d6677cace1d3a0a0dbcf1889d817ce08bcc768e4b05288/userdata/shm DeviceMajor:0 DeviceMinor:268 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-307 DeviceMajor:0 DeviceMinor:307 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-68 DeviceMajor:0 DeviceMinor:68 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/85912908-c447-4868-871b-82c5eadbfdbe/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:102 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/aef8e03f-0363-4e13-b7ca-4fa871d77c62/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:222 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2ca9e696adafe66b3ba3814f26ea9bb916ca5c1804785c0e742201ad82ee9c18/userdata/shm DeviceMajor:0 DeviceMinor:274 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b4bf04cf4a874cdad02ad51f153ce323d3cc5a93749aa40aeabd5ac11d70f65e/userdata/shm DeviceMajor:0 DeviceMinor:119 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/bf226d89-450d-4876-a113-345632b94ee9/volumes/kubernetes.io~projected/kube-api-access-wcxqj DeviceMajor:0 DeviceMinor:125 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e6ef8104a726a85f4fa80186a64ea3c00a2cbb1be2c668fb9e94709c10d980c0/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-303 DeviceMajor:0 DeviceMinor:303 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-332 DeviceMajor:0 DeviceMinor:332 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/284768b8-9d70-4cf7-bace-8adc6b587186/volumes/kubernetes.io~projected/kube-api-access-8p6vn DeviceMajor:0 DeviceMinor:104 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:209 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/a7747954-a222-4809-8656-818203b55ee8/volumes/kubernetes.io~projected/kube-api-access-khv2z DeviceMajor:0 DeviceMinor:225 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-89 DeviceMajor:0 DeviceMinor:89 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/89df1c468dcab6a092003dfdb9054af97d313c0e8ba73ec68b30b7001eab90ba/userdata/shm DeviceMajor:0 DeviceMinor:276 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-340 DeviceMajor:0 DeviceMinor:340 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-113 DeviceMajor:0 DeviceMinor:113 Capacity:214143315968 
Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/06df1b1b-154e-46f9-aee0-79a137c6c928/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:213 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/aef8e03f-0363-4e13-b7ca-4fa871d77c62/volumes/kubernetes.io~projected/kube-api-access-x252z DeviceMajor:0 DeviceMinor:247 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/63407ab3b9286932d0ec228766542c2a928958a160c225ce1f7624b3e7a02447/userdata/shm DeviceMajor:0 DeviceMinor:272 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-297 DeviceMajor:0 DeviceMinor:297 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-77 DeviceMajor:0 DeviceMinor:77 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b80027fd-7b39-477a-a337-ff9bb08e7eeb/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:228 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d6af7e6099bbf70f75032e24a3ecb1fbb8bf546e42f6f7c74e6f9a42396249e8/userdata/shm DeviceMajor:0 DeviceMinor:261 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/06f67c28-34fd-4356-92f0-edd0986ad34e/volumes/kubernetes.io~projected/kube-api-access-bdpj4 DeviceMajor:0 DeviceMinor:278 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1089ea24-add9-482e-9276-e6ded12052d7/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:236 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-295 DeviceMajor:0 DeviceMinor:295 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/var/lib/kubelet/pods/284768b8-9d70-4cf7-bace-8adc6b587186/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:98 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/9702fc8c-4fe0-413b-b2d4-db23021d42b8/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:223 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/0f97d998-530c-4d9d-a030-ca1d9d2d4490/volumes/kubernetes.io~projected/kube-api-access-zntzt DeviceMajor:0 DeviceMinor:241 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-285 DeviceMajor:0 DeviceMinor:285 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-319 DeviceMajor:0 DeviceMinor:319 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/de69f46fe324ea455cfa701ce77df2ff17c8f9f38f189dbc84ded004836d5af0/userdata/shm DeviceMajor:0 DeviceMinor:109 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/661b8957-a890-4032-9e57-45e2e0b35249/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:215 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/06387b0da219a3280f534fa3f57451c18534c297d33ad18d4503e53efc4f6f2f/userdata/shm DeviceMajor:0 DeviceMinor:153 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/06df1b1b-154e-46f9-aee0-79a137c6c928/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:233 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04/volumes/kubernetes.io~projected/kube-api-access-hwfg5 DeviceMajor:0 DeviceMinor:235 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-321 DeviceMajor:0 DeviceMinor:321 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-334 DeviceMajor:0 DeviceMinor:334 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-132 DeviceMajor:0 DeviceMinor:132 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6/volumes/kubernetes.io~projected/kube-api-access-jnd9c DeviceMajor:0 DeviceMinor:250 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-327 DeviceMajor:0 DeviceMinor:327 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a1783b50c5a08e2a42241bed3f2df9ef9e7315549e4393a5e98fdcdce6ecef6e/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-108 DeviceMajor:0 DeviceMinor:108 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5396ef64e03af5cd8fbb98838e00f4f08020d9b7b41c5ccef26950f1e41fec60/userdata/shm DeviceMajor:0 DeviceMinor:142 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/9d2db220-4d5b-4819-a910-b186e1e9fb3e/volumes/kubernetes.io~projected/kube-api-access-wshb2 DeviceMajor:0 DeviceMinor:152 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/661b8957-a890-4032-9e57-45e2e0b35249/volumes/kubernetes.io~projected/kube-api-access-8hq8f DeviceMajor:0 DeviceMinor:227 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-291 DeviceMajor:0 DeviceMinor:291 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9702fc8c-4fe0-413b-b2d4-db23021d42b8/volumes/kubernetes.io~projected/kube-api-access-tpdts DeviceMajor:0 DeviceMinor:251 
Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-283 DeviceMajor:0 DeviceMinor:283 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-288 DeviceMajor:0 DeviceMinor:288 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d975e831-7348-41b9-9622-f4a503674c38/volumes/kubernetes.io~projected/kube-api-access-86r6z DeviceMajor:0 DeviceMinor:336 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e95d63ff24c648e1680e38f824e92cec0fcea7bc20cdade312b98b0468aad916/userdata/shm DeviceMajor:0 DeviceMinor:46 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b9477b33d342b45771f3690cbbe221e1438e0d225ffd950edeb419c6de979401/userdata/shm DeviceMajor:0 DeviceMinor:106 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/c2dbd8b3-0e02-4747-a166-80aa6a94b060/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:214 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/f08c5930-44f0-48e4-80dd-2563f2733b2f/volumes/kubernetes.io~projected/kube-api-access-h84l9 DeviceMajor:0 DeviceMinor:230 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:16106b77f2e7c15 MacAddress:86:4d:6d:cd:dd:23 Speed:10000 Mtu:8900} {Name:1fc613132849380 MacAddress:fe:80:4c:2b:6d:b8 Speed:10000 Mtu:8900} {Name:24de2a964d2fa28 MacAddress:0e:ea:01:17:d7:a7 Speed:10000 Mtu:8900} {Name:2ca9e696adafe66 MacAddress:02:5f:4c:90:6f:0d Speed:10000 Mtu:8900} {Name:37b898c3ae24210 MacAddress:2e:71:06:d2:63:bd Speed:10000 Mtu:8900} {Name:58d1369a13582af MacAddress:e2:ef:f6:c3:18:3f Speed:10000 Mtu:8900} {Name:63407ab3b928693 MacAddress:66:e1:49:14:69:14 Speed:10000 Mtu:8900} {Name:657e67ca992e83d MacAddress:86:df:40:e2:f8:7d Speed:10000 Mtu:8900} {Name:7da5b8963c0c07b MacAddress:d2:60:6a:35:68:d8 Speed:10000 Mtu:8900} {Name:84ed2f0d88ece07 MacAddress:f6:69:ce:7d:f9:e8 Speed:10000 Mtu:8900} {Name:89df1c468dcab6a MacAddress:5e:62:53:79:20:c1 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:2a:8b:8d:f6:5c:b3 Speed:0 Mtu:8900} {Name:cafdcda3b6318ea MacAddress:5e:f4:f2:e3:2a:8e Speed:10000 Mtu:8900} {Name:d6af7e6099bbf70 MacAddress:6e:6d:c8:ae:c6:c8 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:0b:8e:2e Speed:-1 Mtu:9000} {Name:fcd57352498da84 MacAddress:42:95:40:db:53:02 Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:be:b5:64:e8:21:b9 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 
Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 19 11:53:50.549484 master-0 kubenswrapper[7454]: I0319 11:53:50.549471 7454 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
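[Editorial note, not part of the log] The giant entry ending above is cAdvisor's one-shot MachineInfo inventory: every mounted filesystem with its Capacity and inode count, the virtio disks (vda–vde), the OVN/veth network devices, and the CPU/cache topology. The per-filesystem Capacity and Inodes figures are exactly what statfs(2) reports. A minimal Go sketch, assuming illustrative mount points (this is not cAdvisor source), that reproduces those two fields:

package main

import (
	"fmt"
	"syscall"
)

func main() {
	// Probe a few of the mount points seen in the inventory above.
	for _, path := range []string{"/", "/run", "/tmp"} {
		var st syscall.Statfs_t
		if err := syscall.Statfs(path, &st); err != nil {
			fmt.Printf("%s: %v\n", path, err)
			continue
		}
		// Capacity = total blocks * block size (bytes); Inodes = total inode count,
		// matching the FsInfo fields printed by cAdvisor.
		fmt.Printf("{Device:%s Capacity:%d Type:vfs Inodes:%d HasInodes:true}\n",
			path, st.Blocks*uint64(st.Bsize), st.Files)
	}
}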
Mar 19 11:53:50.549849 master-0 kubenswrapper[7454]: I0319 11:53:50.549688 7454 manager.go:233] Version: {KernelVersion:5.14.0-427.113.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202603021444-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 19 11:53:50.563471 master-0 kubenswrapper[7454]: I0319 11:53:50.563421 7454 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 19 11:53:50.563646 master-0 kubenswrapper[7454]: I0319 11:53:50.563605 7454 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 19 11:53:50.564020 master-0 kubenswrapper[7454]: I0319 11:53:50.563643 7454 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 19 11:53:50.564132 master-0 kubenswrapper[7454]: I0319 11:53:50.564032 7454 topology_manager.go:138] "Creating topology manager with none policy" Mar 19 11:53:50.564132 master-0 kubenswrapper[7454]: I0319 11:53:50.564043 7454 container_manager_linux.go:303] "Creating device plugin manager" Mar 19 11:53:50.564132 master-0 kubenswrapper[7454]: I0319 11:53:50.564053 7454 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 19 11:53:50.564132 master-0 kubenswrapper[7454]: I0319 11:53:50.564090 7454 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 19 11:53:50.564306 master-0 kubenswrapper[7454]: I0319 11:53:50.564294 7454 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:53:50.564502 master-0 kubenswrapper[7454]: I0319 11:53:50.564475 7454 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 19 11:53:50.564548 master-0 kubenswrapper[7454]: I0319 11:53:50.564537 7454 kubelet.go:418] "Attempting to sync node with API server" Mar 19 
11:53:50.564584 master-0 kubenswrapper[7454]: I0319 11:53:50.564550 7454 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 19 11:53:50.564584 master-0 kubenswrapper[7454]: I0319 11:53:50.564564 7454 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 19 11:53:50.564584 master-0 kubenswrapper[7454]: I0319 11:53:50.564576 7454 kubelet.go:324] "Adding apiserver pod source" Mar 19 11:53:50.564683 master-0 kubenswrapper[7454]: I0319 11:53:50.564591 7454 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 19 11:53:50.567958 master-0 kubenswrapper[7454]: I0319 11:53:50.567100 7454 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 19 11:53:50.567958 master-0 kubenswrapper[7454]: I0319 11:53:50.567329 7454 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Mar 19 11:53:50.567958 master-0 kubenswrapper[7454]: I0319 11:53:50.567706 7454 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 19 11:53:50.567958 master-0 kubenswrapper[7454]: I0319 11:53:50.567940 7454 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 19 11:53:50.567958 master-0 kubenswrapper[7454]: I0319 11:53:50.567958 7454 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 19 11:53:50.568417 master-0 kubenswrapper[7454]: I0319 11:53:50.567983 7454 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 19 11:53:50.568417 master-0 kubenswrapper[7454]: I0319 11:53:50.567991 7454 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 19 11:53:50.568417 master-0 kubenswrapper[7454]: I0319 11:53:50.567998 7454 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 19 11:53:50.568417 master-0 kubenswrapper[7454]: I0319 11:53:50.568004 7454 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 19 11:53:50.568417 master-0 kubenswrapper[7454]: I0319 11:53:50.568012 7454 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 19 11:53:50.568417 master-0 kubenswrapper[7454]: I0319 11:53:50.568020 7454 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 19 11:53:50.568417 master-0 kubenswrapper[7454]: I0319 11:53:50.568031 7454 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 19 11:53:50.568417 master-0 kubenswrapper[7454]: I0319 11:53:50.568039 7454 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 19 11:53:50.568417 master-0 kubenswrapper[7454]: I0319 11:53:50.568051 7454 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 19 11:53:50.568417 master-0 kubenswrapper[7454]: I0319 11:53:50.568065 7454 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 19 11:53:50.568417 master-0 kubenswrapper[7454]: I0319 11:53:50.568093 7454 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 19 11:53:50.568417 master-0 kubenswrapper[7454]: I0319 11:53:50.568433 7454 server.go:1280] "Started kubelet" Mar 19 11:53:50.569964 master-0 kubenswrapper[7454]: I0319 11:53:50.568561 7454 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 19 11:53:50.569964 master-0 kubenswrapper[7454]: I0319 11:53:50.568661 7454 ratelimit.go:55] "Setting rate limiting for 
endpoint" service="podresources" qps=100 burstTokens=10 Mar 19 11:53:50.569964 master-0 kubenswrapper[7454]: I0319 11:53:50.568722 7454 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 19 11:53:50.569964 master-0 kubenswrapper[7454]: I0319 11:53:50.569187 7454 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 19 11:53:50.569429 master-0 systemd[1]: Started Kubernetes Kubelet. Mar 19 11:53:50.570345 master-0 kubenswrapper[7454]: I0319 11:53:50.570127 7454 server.go:449] "Adding debug handlers to kubelet server" Mar 19 11:53:50.574120 master-0 kubenswrapper[7454]: I0319 11:53:50.574040 7454 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 19 11:53:50.574232 master-0 kubenswrapper[7454]: I0319 11:53:50.574135 7454 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.575286 7454 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.575317 7454 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.575309 7454 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-20 11:43:21 +0000 UTC, rotation deadline is 2026-03-20 09:05:02.51147875 +0000 UTC Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.575424 7454 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 21h11m11.936057804s for next certificate rotation Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.575516 7454 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: E0319 11:53:50.576639 7454 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.579696 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06df1b1b-154e-46f9-aee0-79a137c6c928" volumeName="kubernetes.io/secret/06df1b1b-154e-46f9-aee0-79a137c6c928-serving-cert" seLinuxMountContext="" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.579761 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06f67c28-34fd-4356-92f0-edd0986ad34e" volumeName="kubernetes.io/configmap/06f67c28-34fd-4356-92f0-edd0986ad34e-iptables-alerter-script" seLinuxMountContext="" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.579777 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19de6601-10d4-4112-a21f-0398d2b160d1" volumeName="kubernetes.io/configmap/19de6601-10d4-4112-a21f-0398d2b160d1-images" seLinuxMountContext="" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.579860 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7044a7b3-4fac-40af-a31c-054a1a1db26b" volumeName="kubernetes.io/projected/7044a7b3-4fac-40af-a31c-054a1a1db26b-kube-api-access-shfs6" seLinuxMountContext="" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.579877 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8414b6b0-ee16-47a5-982b-ee58b136cfcf" 
volumeName="kubernetes.io/configmap/8414b6b0-ee16-47a5-982b-ee58b136cfcf-ovnkube-identity-cm" seLinuxMountContext="" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.579890 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="85912908-c447-4868-871b-82c5eadbfdbe" volumeName="kubernetes.io/projected/85912908-c447-4868-871b-82c5eadbfdbe-kube-api-access" seLinuxMountContext="" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.579902 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3017b5e-178e-49de-89d2-817a18398203" volumeName="kubernetes.io/configmap/d3017b5e-178e-49de-89d2-817a18398203-service-ca-bundle" seLinuxMountContext="" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.579917 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe245927-c937-4ec7-ab83-4900bade72cf" volumeName="kubernetes.io/configmap/fe245927-c937-4ec7-ab83-4900bade72cf-multus-daemon-config" seLinuxMountContext="" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.579930 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1089ea24-add9-482e-9276-e6ded12052d7" volumeName="kubernetes.io/projected/1089ea24-add9-482e-9276-e6ded12052d7-kube-api-access" seLinuxMountContext="" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.579945 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1089ea24-add9-482e-9276-e6ded12052d7" volumeName="kubernetes.io/secret/1089ea24-add9-482e-9276-e6ded12052d7-serving-cert" seLinuxMountContext="" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.579956 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="63c12a89-1b49-4eba-8f5a-551b10d2246b" volumeName="kubernetes.io/projected/63c12a89-1b49-4eba-8f5a-551b10d2246b-kube-api-access-bst2w" seLinuxMountContext="" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.579969 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aef8e03f-0363-4e13-b7ca-4fa871d77c62" volumeName="kubernetes.io/projected/aef8e03f-0363-4e13-b7ca-4fa871d77c62-kube-api-access-x252z" seLinuxMountContext="" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.579980 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b80027fd-7b39-477a-a337-ff9bb08e7eeb" volumeName="kubernetes.io/configmap/b80027fd-7b39-477a-a337-ff9bb08e7eeb-trusted-ca" seLinuxMountContext="" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.579994 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf226d89-450d-4876-a113-345632b94ee9" volumeName="kubernetes.io/configmap/bf226d89-450d-4876-a113-345632b94ee9-ovnkube-config" seLinuxMountContext="" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.580006 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2dbd8b3-0e02-4747-a166-80aa6a94b060" volumeName="kubernetes.io/secret/c2dbd8b3-0e02-4747-a166-80aa6a94b060-cluster-olm-operator-serving-cert" seLinuxMountContext="" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.580018 7454 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="06f67c28-34fd-4356-92f0-edd0986ad34e" volumeName="kubernetes.io/projected/06f67c28-34fd-4356-92f0-edd0986ad34e-kube-api-access-bdpj4" seLinuxMountContext="" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.580028 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1089ea24-add9-482e-9276-e6ded12052d7" volumeName="kubernetes.io/configmap/1089ea24-add9-482e-9276-e6ded12052d7-config" seLinuxMountContext="" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.580039 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="82b98dca-59f9-42be-94ca-4a2a2b6fea0f" volumeName="kubernetes.io/projected/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-bound-sa-token" seLinuxMountContext="" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.580052 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9702fc8c-4fe0-413b-b2d4-db23021d42b8" volumeName="kubernetes.io/configmap/9702fc8c-4fe0-413b-b2d4-db23021d42b8-etcd-service-ca" seLinuxMountContext="" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.580064 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ed2dbd1-aec4-4009-917a-933533912ab5" volumeName="kubernetes.io/configmap/9ed2dbd1-aec4-4009-917a-933533912ab5-config" seLinuxMountContext="" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.580075 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab54833d-e57b-479d-b171-68155f6566f1" volumeName="kubernetes.io/projected/ab54833d-e57b-479d-b171-68155f6566f1-kube-api-access-gl6d7" seLinuxMountContext="" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.580086 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="661b8957-a890-4032-9e57-45e2e0b35249" volumeName="kubernetes.io/configmap/661b8957-a890-4032-9e57-45e2e0b35249-config" seLinuxMountContext="" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.580098 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9702fc8c-4fe0-413b-b2d4-db23021d42b8" volumeName="kubernetes.io/projected/9702fc8c-4fe0-413b-b2d4-db23021d42b8-kube-api-access-tpdts" seLinuxMountContext="" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.580149 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ed2dbd1-aec4-4009-917a-933533912ab5" volumeName="kubernetes.io/projected/9ed2dbd1-aec4-4009-917a-933533912ab5-kube-api-access-gsk9d" seLinuxMountContext="" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.580161 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3017b5e-178e-49de-89d2-817a18398203" volumeName="kubernetes.io/configmap/d3017b5e-178e-49de-89d2-817a18398203-config" seLinuxMountContext="" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.580178 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2151eb84-177e-459c-be71-f48465323ac2" volumeName="kubernetes.io/secret/2151eb84-177e-459c-be71-f48465323ac2-serving-cert" seLinuxMountContext="" Mar 19 11:53:50.580355 master-0 
kubenswrapper[7454]: I0319 11:53:50.580190 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9702fc8c-4fe0-413b-b2d4-db23021d42b8" volumeName="kubernetes.io/secret/9702fc8c-4fe0-413b-b2d4-db23021d42b8-serving-cert" seLinuxMountContext="" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.580207 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9ab6ec4-eec9-4d27-8b43-2aaf954f098f" volumeName="kubernetes.io/secret/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f-serving-cert" seLinuxMountContext="" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.580218 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f08c5930-44f0-48e4-80dd-2563f2733b2f" volumeName="kubernetes.io/projected/f08c5930-44f0-48e4-80dd-2563f2733b2f-kube-api-access-h84l9" seLinuxMountContext="" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.580228 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8414b6b0-ee16-47a5-982b-ee58b136cfcf" volumeName="kubernetes.io/projected/8414b6b0-ee16-47a5-982b-ee58b136cfcf-kube-api-access-864rg" seLinuxMountContext="" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.580239 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87a3f546-e1c1-42a1-b80e-d45b6d5c0a04" volumeName="kubernetes.io/projected/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-kube-api-access-hwfg5" seLinuxMountContext="" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.580252 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9702fc8c-4fe0-413b-b2d4-db23021d42b8" volumeName="kubernetes.io/configmap/9702fc8c-4fe0-413b-b2d4-db23021d42b8-config" seLinuxMountContext="" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.580262 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d2db220-4d5b-4819-a910-b186e1e9fb3e" volumeName="kubernetes.io/projected/9d2db220-4d5b-4819-a910-b186e1e9fb3e-kube-api-access-wshb2" seLinuxMountContext="" Mar 19 11:53:50.580355 master-0 kubenswrapper[7454]: I0319 11:53:50.580273 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf226d89-450d-4876-a113-345632b94ee9" volumeName="kubernetes.io/projected/bf226d89-450d-4876-a113-345632b94ee9-kube-api-access-wcxqj" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.597940 7454 factory.go:55] Registering systemd factory Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.597984 7454 factory.go:221] Registration of the systemd container factory successfully Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.580284 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2dbd8b3-0e02-4747-a166-80aa6a94b060" volumeName="kubernetes.io/empty-dir/c2dbd8b3-0e02-4747-a166-80aa6a94b060-operand-assets" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.598676 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19de6601-10d4-4112-a21f-0398d2b160d1" volumeName="kubernetes.io/configmap/19de6601-10d4-4112-a21f-0398d2b160d1-config" seLinuxMountContext="" 
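[Editorial note, not part of the log] The long run of reconstruct.go:130 entries before and after this point is the restarted kubelet rebuilding its "actual state of the world": it walks the pod volume directories left on disk and re-registers each mount as uncertain until the reconciler confirms it. Every path follows the layout /var/lib/kubelet/pods/<podUID>/volumes/<plugin>/<volumeName>, with the "/" in plugin names escaped as "~" on disk (kubernetes.io~secret for kubernetes.io/secret). A minimal Go sketch of that decomposition, using a path taken from the inventory above (splitVolumePath is a hypothetical helper, not kubelet source):

package main

import (
	"fmt"
	"strings"
)

// splitVolumePath decomposes /var/lib/kubelet/pods/<podUID>/volumes/<plugin>/<name>.
func splitVolumePath(p string) (podUID, plugin, volume string, ok bool) {
	const root = "/var/lib/kubelet/pods/"
	if !strings.HasPrefix(p, root) {
		return "", "", "", false
	}
	parts := strings.Split(strings.TrimPrefix(p, root), "/")
	if len(parts) != 4 || parts[1] != "volumes" {
		return "", "", "", false
	}
	// "~" is the on-disk escape for "/" in volume plugin names.
	return parts[0], strings.ReplaceAll(parts[2], "~", "/"), parts[3], true
}

func main() {
	p := "/var/lib/kubelet/pods/9702fc8c-4fe0-413b-b2d4-db23021d42b8/volumes/kubernetes.io~secret/etcd-client"
	if podUID, plugin, volume, ok := splitVolumePath(p); ok {
		fmt.Printf("podUID=%s plugin=%s volume=%s\n", podUID, plugin, volume)
	}
}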
Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.598727 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="63c12a89-1b49-4eba-8f5a-551b10d2246b" volumeName="kubernetes.io/configmap/63c12a89-1b49-4eba-8f5a-551b10d2246b-trusted-ca" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.598749 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8414b6b0-ee16-47a5-982b-ee58b136cfcf" volumeName="kubernetes.io/configmap/8414b6b0-ee16-47a5-982b-ee58b136cfcf-env-overrides" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.598785 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9702fc8c-4fe0-413b-b2d4-db23021d42b8" volumeName="kubernetes.io/secret/9702fc8c-4fe0-413b-b2d4-db23021d42b8-etcd-client" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.598842 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7747954-a222-4809-8656-818203b55ee8" volumeName="kubernetes.io/projected/a7747954-a222-4809-8656-818203b55ee8-kube-api-access-khv2z" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.598900 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d975e831-7348-41b9-9622-f4a503674c38" volumeName="kubernetes.io/projected/d975e831-7348-41b9-9622-f4a503674c38-kube-api-access-86r6z" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.598912 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9ab6ec4-eec9-4d27-8b43-2aaf954f098f" volumeName="kubernetes.io/configmap/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f-config" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.598963 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="944eac68-e72b-4aed-b5dc-d7d9703178a3" volumeName="kubernetes.io/projected/944eac68-e72b-4aed-b5dc-d7d9703178a3-kube-api-access-m2mdn" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.598986 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b80027fd-7b39-477a-a337-ff9bb08e7eeb" volumeName="kubernetes.io/projected/b80027fd-7b39-477a-a337-ff9bb08e7eeb-bound-sa-token" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.598997 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b80027fd-7b39-477a-a337-ff9bb08e7eeb" volumeName="kubernetes.io/projected/b80027fd-7b39-477a-a337-ff9bb08e7eeb-kube-api-access-hs4jf" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.599042 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06df1b1b-154e-46f9-aee0-79a137c6c928" volumeName="kubernetes.io/configmap/06df1b1b-154e-46f9-aee0-79a137c6c928-config" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.599058 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2151eb84-177e-459c-be71-f48465323ac2" 
volumeName="kubernetes.io/projected/2151eb84-177e-459c-be71-f48465323ac2-kube-api-access" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.599077 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="284768b8-9d70-4cf7-bace-8adc6b587186" volumeName="kubernetes.io/projected/284768b8-9d70-4cf7-bace-8adc6b587186-kube-api-access-8p6vn" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.599090 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7044a7b3-4fac-40af-a31c-054a1a1db26b" volumeName="kubernetes.io/configmap/7044a7b3-4fac-40af-a31c-054a1a1db26b-whereabouts-flatfile-configmap" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.599139 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="806a4c30-7b93-4430-86da-f9e1f4f2d206" volumeName="kubernetes.io/projected/806a4c30-7b93-4430-86da-f9e1f4f2d206-kube-api-access-dfl29" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.599161 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ed2dbd1-aec4-4009-917a-933533912ab5" volumeName="kubernetes.io/secret/9ed2dbd1-aec4-4009-917a-933533912ab5-serving-cert" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.599200 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aef8e03f-0363-4e13-b7ca-4fa871d77c62" volumeName="kubernetes.io/empty-dir/aef8e03f-0363-4e13-b7ca-4fa871d77c62-available-featuregates" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.599235 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2dbd8b3-0e02-4747-a166-80aa6a94b060" volumeName="kubernetes.io/projected/c2dbd8b3-0e02-4747-a166-80aa6a94b060-kube-api-access-npc2t" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.599297 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f97d998-530c-4d9d-a030-ca1d9d2d4490" volumeName="kubernetes.io/projected/0f97d998-530c-4d9d-a030-ca1d9d2d4490-kube-api-access-zntzt" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.599322 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f97d998-530c-4d9d-a030-ca1d9d2d4490" volumeName="kubernetes.io/secret/0f97d998-530c-4d9d-a030-ca1d9d2d4490-cluster-storage-operator-serving-cert" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.599391 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="82b98dca-59f9-42be-94ca-4a2a2b6fea0f" volumeName="kubernetes.io/configmap/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-trusted-ca" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.602085 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3541cbe-3be0-40d3-89d2-b5937b6a8f47" volumeName="kubernetes.io/configmap/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-auth-proxy-config" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.602177 
7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9ab6ec4-eec9-4d27-8b43-2aaf954f098f" volumeName="kubernetes.io/projected/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f-kube-api-access-h5n89" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.602194 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06df1b1b-154e-46f9-aee0-79a137c6c928" volumeName="kubernetes.io/projected/06df1b1b-154e-46f9-aee0-79a137c6c928-kube-api-access" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.602402 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="284768b8-9d70-4cf7-bace-8adc6b587186" volumeName="kubernetes.io/secret/284768b8-9d70-4cf7-bace-8adc6b587186-metrics-tls" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.602423 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7241bf11-192e-47db-9d80-2324938ed34c" volumeName="kubernetes.io/configmap/7241bf11-192e-47db-9d80-2324938ed34c-telemetry-config" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.602436 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8414b6b0-ee16-47a5-982b-ee58b136cfcf" volumeName="kubernetes.io/secret/8414b6b0-ee16-47a5-982b-ee58b136cfcf-webhook-cert" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.602472 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b0f5939c-48b1-4d6c-9712-9128a78d603b" volumeName="kubernetes.io/projected/b0f5939c-48b1-4d6c-9712-9128a78d603b-kube-api-access-6tqdb" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.602489 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f08c5930-44f0-48e4-80dd-2563f2733b2f" volumeName="kubernetes.io/secret/f08c5930-44f0-48e4-80dd-2563f2733b2f-serving-cert" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.602506 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="661b8957-a890-4032-9e57-45e2e0b35249" volumeName="kubernetes.io/projected/661b8957-a890-4032-9e57-45e2e0b35249-kube-api-access-8hq8f" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.602520 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="82b98dca-59f9-42be-94ca-4a2a2b6fea0f" volumeName="kubernetes.io/projected/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-kube-api-access-c5bmd" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.602559 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="85912908-c447-4868-871b-82c5eadbfdbe" volumeName="kubernetes.io/configmap/85912908-c447-4868-871b-82c5eadbfdbe-service-ca" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.602581 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d2db220-4d5b-4819-a910-b186e1e9fb3e" volumeName="kubernetes.io/secret/9d2db220-4d5b-4819-a910-b186e1e9fb3e-ovn-node-metrics-cert" 
seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.602598 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3541cbe-3be0-40d3-89d2-b5937b6a8f47" volumeName="kubernetes.io/projected/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-kube-api-access-pv6bc" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.602635 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19de6601-10d4-4112-a21f-0398d2b160d1" volumeName="kubernetes.io/projected/19de6601-10d4-4112-a21f-0398d2b160d1-kube-api-access-6xpc2" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.602654 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7044a7b3-4fac-40af-a31c-054a1a1db26b" volumeName="kubernetes.io/configmap/7044a7b3-4fac-40af-a31c-054a1a1db26b-cni-binary-copy" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.602676 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d2db220-4d5b-4819-a910-b186e1e9fb3e" volumeName="kubernetes.io/configmap/9d2db220-4d5b-4819-a910-b186e1e9fb3e-env-overrides" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.602713 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aef8e03f-0363-4e13-b7ca-4fa871d77c62" volumeName="kubernetes.io/secret/aef8e03f-0363-4e13-b7ca-4fa871d77c62-serving-cert" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.602728 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b0f5939c-48b1-4d6c-9712-9128a78d603b" volumeName="kubernetes.io/configmap/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-trusted-ca" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.602744 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3017b5e-178e-49de-89d2-817a18398203" volumeName="kubernetes.io/configmap/d3017b5e-178e-49de-89d2-817a18398203-trusted-ca-bundle" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.602760 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe245927-c937-4ec7-ab83-4900bade72cf" volumeName="kubernetes.io/projected/fe245927-c937-4ec7-ab83-4900bade72cf-kube-api-access-s4hsp" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.602818 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="661b8957-a890-4032-9e57-45e2e0b35249" volumeName="kubernetes.io/secret/661b8957-a890-4032-9e57-45e2e0b35249-serving-cert" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.602834 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9702fc8c-4fe0-413b-b2d4-db23021d42b8" volumeName="kubernetes.io/configmap/9702fc8c-4fe0-413b-b2d4-db23021d42b8-etcd-ca" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.602970 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9d2db220-4d5b-4819-a910-b186e1e9fb3e" volumeName="kubernetes.io/configmap/9d2db220-4d5b-4819-a910-b186e1e9fb3e-ovnkube-config" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.602991 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bdcdb23d-ef1f-45e2-b9ac-7abf707637b6" volumeName="kubernetes.io/projected/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-kube-api-access-jnd9c" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.603004 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf226d89-450d-4876-a113-345632b94ee9" volumeName="kubernetes.io/secret/bf226d89-450d-4876-a113-345632b94ee9-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.603050 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3017b5e-178e-49de-89d2-817a18398203" volumeName="kubernetes.io/projected/d3017b5e-178e-49de-89d2-817a18398203-kube-api-access-b6wm6" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.603065 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3017b5e-178e-49de-89d2-817a18398203" volumeName="kubernetes.io/secret/d3017b5e-178e-49de-89d2-817a18398203-serving-cert" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.603081 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3541cbe-3be0-40d3-89d2-b5937b6a8f47" volumeName="kubernetes.io/configmap/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-images" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.603092 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7044a7b3-4fac-40af-a31c-054a1a1db26b" volumeName="kubernetes.io/configmap/7044a7b3-4fac-40af-a31c-054a1a1db26b-cni-sysctl-allowlist" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.603141 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7241bf11-192e-47db-9d80-2324938ed34c" volumeName="kubernetes.io/projected/7241bf11-192e-47db-9d80-2324938ed34c-kube-api-access-s5mkm" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.603161 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d2db220-4d5b-4819-a910-b186e1e9fb3e" volumeName="kubernetes.io/configmap/9d2db220-4d5b-4819-a910-b186e1e9fb3e-ovnkube-script-lib" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.603174 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf226d89-450d-4876-a113-345632b94ee9" volumeName="kubernetes.io/configmap/bf226d89-450d-4876-a113-345632b94ee9-env-overrides" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.603211 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2151eb84-177e-459c-be71-f48465323ac2" volumeName="kubernetes.io/configmap/2151eb84-177e-459c-be71-f48465323ac2-config" seLinuxMountContext="" Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 
11:53:50.603224 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="398bcaca-1bea-4633-a78f-717e3d015ddd" volumeName="kubernetes.io/projected/398bcaca-1bea-4633-a78f-717e3d015ddd-kube-api-access-fhqhb" seLinuxMountContext=""
Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.603237 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="beb562de-402b-4d9f-b5ed-090b60847a95" volumeName="kubernetes.io/projected/beb562de-402b-4d9f-b5ed-090b60847a95-kube-api-access-9mr6d" seLinuxMountContext=""
Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.603253 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f08c5930-44f0-48e4-80dd-2563f2733b2f" volumeName="kubernetes.io/configmap/f08c5930-44f0-48e4-80dd-2563f2733b2f-config" seLinuxMountContext=""
Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.603263 7454 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe245927-c937-4ec7-ab83-4900bade72cf" volumeName="kubernetes.io/configmap/fe245927-c937-4ec7-ab83-4900bade72cf-cni-binary-copy" seLinuxMountContext=""
Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.603329 7454 reconstruct.go:97] "Volume reconstruction finished"
Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.603389 7454 reconciler.go:26] "Reconciler: start to sync state"
Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.604348 7454 factory.go:153] Registering CRI-O factory
Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.604371 7454 factory.go:221] Registration of the crio container factory successfully
Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.604537 7454 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.604605 7454 factory.go:103] Registering Raw factory
Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.604617 7454 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.604638 7454 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.604624 7454 manager.go:1196] Started watching for new ooms in manager
Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.606643 7454 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 19 11:53:50.615867 master-0 kubenswrapper[7454]: I0319 11:53:50.606889 7454 manager.go:319] Starting recovery of all containers
Mar 19 11:53:50.627993 master-0 kubenswrapper[7454]: I0319 11:53:50.621274 7454 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Mar 19 11:53:50.627993 master-0 kubenswrapper[7454]: E0319 11:53:50.625790 7454 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Mar 19 11:53:50.637161 master-0 kubenswrapper[7454]: I0319 11:53:50.628908 7454 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 19 11:53:50.637161 master-0 kubenswrapper[7454]: I0319 11:53:50.632355 7454 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 19 11:53:50.637161 master-0 kubenswrapper[7454]: I0319 11:53:50.632410 7454 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 19 11:53:50.637161 master-0 kubenswrapper[7454]: I0319 11:53:50.632445 7454 kubelet.go:2335] "Starting kubelet main sync loop"
Mar 19 11:53:50.637161 master-0 kubenswrapper[7454]: E0319 11:53:50.632502 7454 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 19 11:53:50.637585 master-0 kubenswrapper[7454]: I0319 11:53:50.637353 7454 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 19 11:53:50.695677 master-0 kubenswrapper[7454]: I0319 11:53:50.693573 7454 generic.go:334] "Generic (PLEG): container finished" podID="9d2db220-4d5b-4819-a910-b186e1e9fb3e" containerID="d91c3177fcc79be021d9124f0b7323db9969b5d246ad69be6568e14b2bb1c146" exitCode=0
Mar 19 11:53:50.708892 master-0 kubenswrapper[7454]: I0319 11:53:50.708832 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log"
Mar 19 11:53:50.709776 master-0 kubenswrapper[7454]: I0319 11:53:50.709701 7454 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="6b554ade444a2218312faf004411e7ca5ff136f234fd5270edc3b29df56f6e17" exitCode=1
Mar 19 11:53:50.709776 master-0 kubenswrapper[7454]: I0319 11:53:50.709748 7454 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="b1a54e1d5a4e1d27db12da7c6949a0237da9f713c6a17f5af4237b1c8b03cbfa" exitCode=0
Mar 19 11:53:50.733700 master-0 kubenswrapper[7454]: E0319 11:53:50.732861 7454 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 19 11:53:50.734245 master-0 kubenswrapper[7454]: I0319 11:53:50.734209 7454 generic.go:334] "Generic (PLEG): container finished" podID="a9819a56-abb1-485c-b424-5c62e30d5afc" containerID="61889dd9a935bc86ee38882d43925886388331ab38ba3004e85cc49cd1f39072" exitCode=0
Mar 19 11:53:50.750399 master-0 kubenswrapper[7454]: I0319 11:53:50.750323 7454 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="95a5e59caf12dcb834fa10b5b5af9755159f99a81152a1ebbfb9f9785ea5edff" exitCode=0
Mar 19 11:53:50.755029 master-0 kubenswrapper[7454]: I0319 11:53:50.754987 7454 generic.go:334] "Generic (PLEG): container finished" podID="c2dbd8b3-0e02-4747-a166-80aa6a94b060" containerID="58b2ce2cf7ade5f0117d8bf2599516b6d2046b5a2b2cff339f1186030594c1b8" exitCode=0
Mar 19 11:53:50.763900 master-0 kubenswrapper[7454]: I0319 11:53:50.763696 7454 generic.go:334] "Generic (PLEG): container finished" podID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerID="583df0d35b75cdd42a8c5d73920d4fc8b3684739b4fbdc9aa3860b1cc1087eeb" exitCode=0
Mar 19 11:53:50.772051 master-0 kubenswrapper[7454]: I0319 11:53:50.771996 7454 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="f7123f20a535bea151420277445f140ddc0e3200c0d15a65bcdb6b9d86c90ca9" exitCode=1
Mar 19 11:53:50.787541 master-0 kubenswrapper[7454]: I0319 11:53:50.785838 7454 generic.go:334] "Generic (PLEG): container finished" podID="7044a7b3-4fac-40af-a31c-054a1a1db26b" containerID="6bec5ff668b2f0913a9713d16292d3781feb7dfeeb82d87acec30ea3bfcbeb08" exitCode=0
Mar 19 11:53:50.787541 master-0 kubenswrapper[7454]: I0319 11:53:50.786256 7454 generic.go:334] "Generic (PLEG): container finished" podID="7044a7b3-4fac-40af-a31c-054a1a1db26b" containerID="09e947b1211885dac847d7f6f4b5d685a97ae8ac56061459ae15b5ca2dde25cb" exitCode=0
Mar 19 11:53:50.787541 master-0 kubenswrapper[7454]: I0319 11:53:50.786266 7454 generic.go:334] "Generic (PLEG): container finished" podID="7044a7b3-4fac-40af-a31c-054a1a1db26b" containerID="d621a54b4c12065eb160ef19e85adc68090a98c2fb8fea5b5228543edbaf07e1" exitCode=0
Mar 19 11:53:50.787541 master-0 kubenswrapper[7454]: I0319 11:53:50.786273 7454 generic.go:334] "Generic (PLEG): container finished" podID="7044a7b3-4fac-40af-a31c-054a1a1db26b" containerID="056242a76e14af2b45592d6a5dba2e28b2cd2e138b0b1a0f773a8e9eef170947" exitCode=0
Mar 19 11:53:50.787541 master-0 kubenswrapper[7454]: I0319 11:53:50.786283 7454 generic.go:334] "Generic (PLEG): container finished" podID="7044a7b3-4fac-40af-a31c-054a1a1db26b" containerID="a7b363361678d9e81d9d8ef32a8db06e2b9f3625d0d6871f670414917c137669" exitCode=0
Mar 19 11:53:50.787541 master-0 kubenswrapper[7454]: I0319 11:53:50.786290 7454 generic.go:334] "Generic (PLEG): container finished" podID="7044a7b3-4fac-40af-a31c-054a1a1db26b" containerID="2993484a619b94d2ea27105e0262a5ba0f7bb5c64e52ff512e989510a1380a8f" exitCode=0
Mar 19 11:53:50.793099 master-0 kubenswrapper[7454]: I0319 11:53:50.793044 7454 generic.go:334] "Generic (PLEG): container finished" podID="118dd8fa-f11f-4dda-96d7-f207e175b4da" containerID="5130296ba65834ed8eebf5136547f5b58340e0b2714dd3dba811f10381f648f5" exitCode=0
Mar 19 11:53:50.839514 master-0 kubenswrapper[7454]: I0319 11:53:50.839451 7454 manager.go:324] Recovery completed
Mar 19 11:53:50.889020 master-0 kubenswrapper[7454]: I0319 11:53:50.888933 7454 cpu_manager.go:225] "Starting CPU manager" policy="none"
Mar 19 11:53:50.889020 master-0 kubenswrapper[7454]: I0319 11:53:50.888980 7454 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 19 11:53:50.889020 master-0 kubenswrapper[7454]: I0319 11:53:50.889010 7454 state_mem.go:36] "Initialized new in-memory state store"
Mar 19 11:53:50.889386 master-0 kubenswrapper[7454]: I0319 11:53:50.889202 7454 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 19 11:53:50.889386 master-0 kubenswrapper[7454]: I0319 11:53:50.889213 7454 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 19 11:53:50.889386 master-0 kubenswrapper[7454]: I0319 11:53:50.889236 7454 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint"
Mar 19 11:53:50.889386 master-0 kubenswrapper[7454]: I0319 11:53:50.889243 7454 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet=""
Mar 19 11:53:50.889386 master-0 kubenswrapper[7454]: I0319 11:53:50.889249 7454 policy_none.go:49] "None policy: Start"
Mar 19 11:53:50.891235 master-0 kubenswrapper[7454]: I0319 11:53:50.891199 7454 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 19 11:53:50.891235 master-0 kubenswrapper[7454]: I0319 11:53:50.891236 7454 state_mem.go:35] "Initializing new in-memory state store"
Mar 19 11:53:50.891484 master-0 kubenswrapper[7454]: I0319 11:53:50.891458 7454 state_mem.go:75] "Updated machine memory state"
Mar 19 11:53:50.891484 master-0 kubenswrapper[7454]: I0319 11:53:50.891471 7454 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint"
Mar 19 11:53:50.901863 master-0 kubenswrapper[7454]: I0319 11:53:50.901819 7454 manager.go:334] "Starting Device Plugin manager"
Mar 19 11:53:50.901863 master-0 kubenswrapper[7454]: I0319 11:53:50.901867 7454 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 19 11:53:50.902070 master-0 kubenswrapper[7454]: I0319 11:53:50.901896 7454 server.go:79] "Starting device plugin registration server"
Mar 19 11:53:50.902423 master-0 kubenswrapper[7454]: I0319 11:53:50.902393 7454 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 19 11:53:50.902479 master-0 kubenswrapper[7454]: I0319 11:53:50.902410 7454 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 19 11:53:50.902621 master-0 kubenswrapper[7454]: I0319 11:53:50.902573 7454 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Mar 19 11:53:50.902772 master-0 kubenswrapper[7454]: I0319 11:53:50.902746 7454 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Mar 19 11:53:50.902772 master-0 kubenswrapper[7454]: I0319 11:53:50.902763 7454 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 19 11:53:50.934311 master-0 kubenswrapper[7454]: I0319 11:53:50.934088 7454 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Mar 19 11:53:50.936237 master-0 kubenswrapper[7454]: I0319 11:53:50.936131 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"6606dc49963e1cc0f10c3000efffd7cbb91c76beb712be6d1c6cb91c1b4a7c79"}
Mar 19 11:53:50.936237 master-0 kubenswrapper[7454]: I0319 11:53:50.936223 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"48efbe72c10829dd5908b740a4651763088ff7358d327f0b015844979a99b5dd"}
Mar 19 11:53:50.936237 master-0 kubenswrapper[7454]: I0319 11:53:50.936237 7454 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c27e98a561ffe786fc1b95b71c3a149aa1f22e3037947fc028437c10cba9712b"
Mar 19 11:53:50.936385 master-0 kubenswrapper[7454]: I0319 11:53:50.936257 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"20d447d60e6c323ac2a99fb9005538b9f698220ad800f2a9d7a82ebdd391df17"}
Mar 19 11:53:50.936385 master-0 kubenswrapper[7454]: I0319 11:53:50.936268 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"6b554ade444a2218312faf004411e7ca5ff136f234fd5270edc3b29df56f6e17"}
Mar 19 11:53:50.936385 master-0 kubenswrapper[7454]: I0319 11:53:50.936278 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"b1a54e1d5a4e1d27db12da7c6949a0237da9f713c6a17f5af4237b1c8b03cbfa"}
Mar 19 11:53:50.936385 master-0 kubenswrapper[7454]: I0319 11:53:50.936287 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"e95d63ff24c648e1680e38f824e92cec0fcea7bc20cdade312b98b0468aad916"}
Mar 19 11:53:50.936385 master-0 kubenswrapper[7454]: I0319 11:53:50.936304 7454 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d2d73d5870e62554bb684d309080c493974123e3d07fe8faf016c90bfd3fdd4"
Mar 19 11:53:50.936385 master-0 kubenswrapper[7454]: I0319 11:53:50.936318 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"4eb7482c86a1b5f9e745f031e830bded6c37fd855abcbff4d6d73294bfadb247"}
Mar 19 11:53:50.936385 master-0 kubenswrapper[7454]: I0319 11:53:50.936327 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"f347ebf4af2e430c7010deb32f74eaaa375be42bd1cb0fd78e647b0e4fd96480"}
Mar 19 11:53:50.936385 master-0 kubenswrapper[7454]: I0319 11:53:50.936336 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerDied","Data":"95a5e59caf12dcb834fa10b5b5af9755159f99a81152a1ebbfb9f9785ea5edff"}
Mar 19 11:53:50.936385 master-0 kubenswrapper[7454]: I0319 11:53:50.936347 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"a1783b50c5a08e2a42241bed3f2df9ef9e7315549e4393a5e98fdcdce6ecef6e"}
Mar 19 11:53:50.936385 master-0 kubenswrapper[7454]: I0319 11:53:50.936365 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"570446cbe4fe51c612e56ccc1c781b010d9f51a4701a23ab3e0e9c3afd18acfd"}
Mar 19 11:53:50.936385 master-0 kubenswrapper[7454]: I0319 11:53:50.936374 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"37d48d754396bd1bc2527510939d9c6fc67f46e6581c207179d0cd71cd638e7d"}
Mar 19 11:53:50.936385 master-0 kubenswrapper[7454]: I0319 11:53:50.936382 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"f7123f20a535bea151420277445f140ddc0e3200c0d15a65bcdb6b9d86c90ca9"}
Mar 19 11:53:50.936385 master-0 kubenswrapper[7454]: I0319 11:53:50.936398 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"b08462654300221b81e734b82711f8871d4674a9fca01ad1cc20011ae2d1abfa"}
Mar 19 11:53:50.936385 master-0 kubenswrapper[7454]: I0319 11:53:50.936411 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"5966cfc73c8dc098af4cd51014c727f29c95ad4372f9a6d75305e51f28be76da"}
Mar 19 11:53:50.936385 master-0 kubenswrapper[7454]: I0319 11:53:50.936421 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"1f3d2affaef2ec02f4c8910d24c25e42512158f294dc5c7de4fd47923e7552fe"}
Mar 19 11:53:50.936853 master-0 kubenswrapper[7454]: I0319 11:53:50.936434 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"842d46230cd4097ecd49786313f777a88243300f4db6d95963150d13dc2d40af"}
Mar 19 11:53:50.936853 master-0 kubenswrapper[7454]: I0319 11:53:50.936465 7454 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abb59c84a4c72145d2743db8f3e69c4a48795ef4c7b107cbbfb92f3b5047887c"
Mar 19 11:53:50.948233 master-0 kubenswrapper[7454]: E0319 11:53:50.948140 7454 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 19 11:53:50.948233 master-0 kubenswrapper[7454]: E0319 11:53:50.948170 7454 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 19 11:53:50.948402 master-0 kubenswrapper[7454]: E0319 11:53:50.948287 7454 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 11:53:50.957587 master-0 kubenswrapper[7454]: E0319 11:53:50.956315 7454 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 11:53:50.957587 master-0 kubenswrapper[7454]: W0319 11:53:50.956519 7454 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Mar 19 11:53:50.957587 master-0 kubenswrapper[7454]: E0319 11:53:50.956598 7454 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0"
Mar 19 11:53:51.002962 master-0 kubenswrapper[7454]: I0319 11:53:51.002909 7454 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 19 11:53:51.004562 master-0 kubenswrapper[7454]: I0319 11:53:51.004526 7454 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 19 11:53:51.004609 master-0 kubenswrapper[7454]: I0319 11:53:51.004565 7454 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 19 11:53:51.004609 master-0 kubenswrapper[7454]: I0319 11:53:51.004577 7454 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 19 11:53:51.004694 master-0 kubenswrapper[7454]: I0319 11:53:51.004652 7454 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 19 11:53:51.022824 master-0 kubenswrapper[7454]: I0319 11:53:51.022680 7454 kubelet_node_status.go:115] "Node was previously registered" node="master-0"
Mar 19 11:53:51.022998 master-0 kubenswrapper[7454]: I0319 11:53:51.022917 7454 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Mar 19 11:53:51.024493 master-0 kubenswrapper[7454]: I0319 11:53:51.024454 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 11:53:51.024552 master-0 kubenswrapper[7454]: I0319 11:53:51.024514 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 19 11:53:51.024552 master-0 kubenswrapper[7454]: I0319 11:53:51.024541 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 19 11:53:51.024641 master-0 kubenswrapper[7454]: I0319 11:53:51.024582 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 11:53:51.024641 master-0 kubenswrapper[7454]: I0319 11:53:51.024607 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 11:53:51.024699 master-0 kubenswrapper[7454]: I0319 11:53:51.024683 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 19 11:53:51.024752 master-0 kubenswrapper[7454]: I0319 11:53:51.024716 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 11:53:51.024810 master-0 kubenswrapper[7454]: I0319 11:53:51.024779 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 19 11:53:51.024866 master-0 kubenswrapper[7454]: I0319 11:53:51.024846 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 11:53:51.024923 master-0 kubenswrapper[7454]: I0319 11:53:51.024880 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 19 11:53:51.024965 master-0 kubenswrapper[7454]: I0319 11:53:51.024944 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 19 11:53:51.025029 master-0 kubenswrapper[7454]: I0319 11:53:51.025002 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 11:53:51.025117 master-0 kubenswrapper[7454]: I0319 11:53:51.025043 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 11:53:51.025157 master-0 kubenswrapper[7454]: I0319 11:53:51.025131 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 11:53:51.025213 master-0 kubenswrapper[7454]: I0319 11:53:51.025194 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 11:53:51.025254 master-0 kubenswrapper[7454]: I0319 11:53:51.025231 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 11:53:51.025321 master-0 kubenswrapper[7454]: I0319 11:53:51.025301 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 11:53:51.126067 master-0 kubenswrapper[7454]: I0319 11:53:51.126007 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 11:53:51.126067 master-0 kubenswrapper[7454]: I0319 11:53:51.126061 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 11:53:51.126353 master-0 kubenswrapper[7454]: I0319 11:53:51.126176 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 11:53:51.126353 master-0 kubenswrapper[7454]: I0319 11:53:51.126268 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 11:53:51.126353 master-0 kubenswrapper[7454]: I0319 11:53:51.126299 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 11:53:51.126353 master-0 kubenswrapper[7454]: I0319 11:53:51.126339 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 19 11:53:51.126353 master-0 kubenswrapper[7454]: I0319 11:53:51.126345 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 11:53:51.126624 master-0 kubenswrapper[7454]: I0319 11:53:51.126401 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 19 11:53:51.126624 master-0 kubenswrapper[7454]: I0319 11:53:51.126413 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 19 11:53:51.126624 master-0 kubenswrapper[7454]: I0319 11:53:51.126452 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 19 11:53:51.126624 master-0 kubenswrapper[7454]: I0319 11:53:51.126451 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 11:53:51.126624 master-0 kubenswrapper[7454]: I0319 11:53:51.126526 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 11:53:51.126624 master-0 kubenswrapper[7454]: I0319 11:53:51.126576 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 19 11:53:51.126624 master-0 kubenswrapper[7454]: I0319 11:53:51.126619 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 19 11:53:51.126932 master-0 kubenswrapper[7454]: I0319 11:53:51.126641 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 11:53:51.126932 master-0 kubenswrapper[7454]: I0319 11:53:51.126681 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 11:53:51.126932 master-0 kubenswrapper[7454]: I0319 11:53:51.126694 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 19 11:53:51.126932 master-0 kubenswrapper[7454]: I0319 11:53:51.126768 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 19 11:53:51.126932 master-0 kubenswrapper[7454]: I0319 11:53:51.126780 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 11:53:51.126932 master-0 kubenswrapper[7454]: I0319 11:53:51.126846 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 11:53:51.126932 master-0 kubenswrapper[7454]: I0319 11:53:51.126869 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 19 11:53:51.126932 master-0 kubenswrapper[7454]: I0319 11:53:51.126897 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 11:53:51.126932 master-0 kubenswrapper[7454]: I0319 11:53:51.126891 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 19 11:53:51.126932 master-0 kubenswrapper[7454]: I0319 11:53:51.126934 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 19 11:53:51.126932 master-0 kubenswrapper[7454]: I0319 11:53:51.126935 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 11:53:51.127326 master-0 kubenswrapper[7454]: I0319 11:53:51.126982 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 19 11:53:51.127326 master-0 kubenswrapper[7454]: I0319 11:53:51.127011 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 11:53:51.127326 master-0 kubenswrapper[7454]: I0319 11:53:51.127041 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 11:53:51.127326 master-0 kubenswrapper[7454]: I0319 11:53:51.127045 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 11:53:51.127326 master-0 kubenswrapper[7454]: I0319 11:53:51.127094 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 11:53:51.127326 master-0 kubenswrapper[7454]: I0319 11:53:51.127120 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 11:53:51.127326 master-0 kubenswrapper[7454]: I0319 11:53:51.127154 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 11:53:51.127326 master-0 kubenswrapper[7454]: I0319 11:53:51.127168 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 11:53:51.127326 master-0 kubenswrapper[7454]: I0319 11:53:51.127203 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 11:53:51.565594 master-0 kubenswrapper[7454]: I0319 11:53:51.565526 7454 apiserver.go:52] "Watching apiserver"
Mar 19 11:53:51.585059 master-0 kubenswrapper[7454]: I0319 11:53:51.584992 7454 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 19 11:53:51.587195 master-0 kubenswrapper[7454]: I0319 11:53:51.587121 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lk9x9","openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw","openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4","openshift-controller-manager/controller-manager-f5df8899c-dc825","openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fjn4b","openshift-network-operator/iptables-alerter-276t5","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-2chdm","kube-system/bootstrap-kube-scheduler-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk","openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d","openshift-multus/network-metrics-daemon-6t6sn","openshift-network-node-identity/network-node-identity-wd4nx","kube-system/bootstrap-kube-controller-manager-master-0","openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8","openshift-kube-storage-version-migrator/migrator-8487694857-99fgs","assisted-installer/assisted-installer-controller-b6qm2","openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654","openshift-ingress-operator/ingress-operator-66b84d69b-btppx","openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6","openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt","openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v","openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-ptqdh","openshift-marketplace/marketplace-operator-89ccd998f-pr7gk","openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl","openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq","openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-6wzws","openshift-dns-operator/dns-operator-9c5679d8f-z6kvm","openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz","openshift-multus/multus-additional-cni-plugins-2z4h8","openshift-network-diagnostics/network-check-target-v66z4","openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t","openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cg9pq","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg","openshift-multus/multus-w82cg","openshift-network-operator/network-operator-7bd846bfc4-nb8bk","openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4","openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj","openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt","openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6"]
Mar 19 11:53:51.587562 master-0 kubenswrapper[7454]: I0319 11:53:51.587527 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-b6qm2"
Mar 19 11:53:51.588050 master-0 kubenswrapper[7454]: I0319 11:53:51.588014 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6"
Mar 19 11:53:51.588162 master-0 kubenswrapper[7454]: I0319 11:53:51.588132 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-9c5679d8f-z6kvm"
Mar 19 11:53:51.590139 master-0 kubenswrapper[7454]: I0319 11:53:51.590086 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl"
Mar 19 11:53:51.592086 master-0 kubenswrapper[7454]: I0319 11:53:51.592000 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt"
Mar 19 11:53:51.593392 master-0 kubenswrapper[7454]: I0319 11:53:51.593359 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 19 11:53:51.595919 master-0 kubenswrapper[7454]: I0319 11:53:51.595879 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj"
Mar 19 11:53:51.602332 master-0 kubenswrapper[7454]: I0319 11:53:51.602213 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6"
Mar 19 11:53:51.603494 master-0 kubenswrapper[7454]: I0319 11:53:51.603455 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d"
Mar 19 11:53:51.604113 master-0 kubenswrapper[7454]: I0319 11:53:51.604073 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 19 11:53:51.604297 master-0 kubenswrapper[7454]: I0319 11:53:51.604273 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Mar 19 11:53:51.604677 master-0 kubenswrapper[7454]: I0319 11:53:51.604646 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Mar 19 11:53:51.604743 master-0 kubenswrapper[7454]: I0319 11:53:51.604678 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 19 11:53:51.607818 master-0 kubenswrapper[7454]: I0319 11:53:51.604830 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Mar 19 11:53:51.607818 master-0 kubenswrapper[7454]: I0319 11:53:51.604924 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Mar 19 11:53:51.607818 master-0 kubenswrapper[7454]: I0319 11:53:51.604843 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 19 11:53:51.607818 master-0 kubenswrapper[7454]: I0319 11:53:51.605323 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 19 11:53:51.607818 master-0 kubenswrapper[7454]: I0319 11:53:51.605347 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 19 11:53:51.607818 master-0 kubenswrapper[7454]: I0319 11:53:51.605379 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v"
Mar 19 11:53:51.607818 master-0 kubenswrapper[7454]: I0319 11:53:51.605407 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Mar 19 11:53:51.607818 master-0 kubenswrapper[7454]: I0319 11:53:51.605510 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 19 11:53:51.607818 master-0 kubenswrapper[7454]: I0319 11:53:51.605553 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 19 11:53:51.607818 master-0 kubenswrapper[7454]: I0319 11:53:51.605562 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4"
Mar 19 11:53:51.607818 master-0 kubenswrapper[7454]: I0319 11:53:51.605865 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 19 11:53:51.607818 master-0 kubenswrapper[7454]: I0319 11:53:51.605951 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 19 11:53:51.607818 master-0 kubenswrapper[7454]: I0319 11:53:51.605982 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 19 11:53:51.607818 master-0 kubenswrapper[7454]: I0319 11:53:51.605671 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw"
Mar 19 11:53:51.607818 master-0 kubenswrapper[7454]: I0319 11:53:51.606079 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 19 11:53:51.607818 master-0 kubenswrapper[7454]: I0319 11:53:51.606118 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 19 11:53:51.607818 master-0 kubenswrapper[7454]: I0319 11:53:51.606234 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 19 11:53:51.607818 master-0 kubenswrapper[7454]: I0319 11:53:51.606262 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx"
Mar 19 11:53:51.607818 master-0 kubenswrapper[7454]: I0319 11:53:51.605864 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Mar 19 11:53:51.607818 master-0 kubenswrapper[7454]: I0319 11:53:51.606383 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 19 11:53:51.607818 master-0 kubenswrapper[7454]: I0319 11:53:51.606388 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 19 11:53:51.607818 master-0 kubenswrapper[7454]: I0319 11:53:51.606605 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk"
Mar 19 11:53:51.607818 master-0 kubenswrapper[7454]: I0319 11:53:51.606816 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 19 11:53:51.607818 master-0 kubenswrapper[7454]: I0319 11:53:51.606936 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 19 11:53:51.607818 master-0 kubenswrapper[7454]: I0319 11:53:51.607079 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 19 11:53:51.607818 master-0 kubenswrapper[7454]: I0319 11:53:51.607192 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 19 11:53:51.607818 master-0 kubenswrapper[7454]: I0319 11:53:51.607207 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 19 11:53:51.607818 master-0 kubenswrapper[7454]: I0319 11:53:51.607383 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 19 11:53:51.608851 master-0 kubenswrapper[7454]: I0319 11:53:51.608080 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6t6sn"
Mar 19 11:53:51.608851 master-0 kubenswrapper[7454]: I0319 11:53:51.608466 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg"
Mar 19 11:53:51.608944 master-0 kubenswrapper[7454]: I0319 11:53:51.608915 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v66z4"
Mar 19 11:53:51.611367 master-0 kubenswrapper[7454]: I0319 11:53:51.609414 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654"
Mar 19 11:53:51.611367 master-0 kubenswrapper[7454]: I0319 11:53:51.609531 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 19 11:53:51.611367 master-0 kubenswrapper[7454]: I0319 11:53:51.609560 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-8487694857-99fgs"
Mar 19 11:53:51.611367 master-0 kubenswrapper[7454]: I0319 11:53:51.609992 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-dc825"
Mar 19 11:53:51.613246 master-0 kubenswrapper[7454]: I0319 11:53:51.613194 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Mar 19 11:53:51.613868 master-0 kubenswrapper[7454]: I0319 11:53:51.613340 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 19 11:53:51.615995 master-0 kubenswrapper[7454]: I0319 11:53:51.614405 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 19 11:53:51.617406 master-0 kubenswrapper[7454]: I0319 11:53:51.617016 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 19 11:53:51.617406 master-0 kubenswrapper[7454]: I0319 11:53:51.617269 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 19 11:53:51.617406 master-0 kubenswrapper[7454]: I0319 11:53:51.617392 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Mar 19 11:53:51.617642 master-0 kubenswrapper[7454]: I0319 11:53:51.617427 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 19 11:53:51.617642 master-0 kubenswrapper[7454]: I0319 11:53:51.617552 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 19 11:53:51.617819 master-0 kubenswrapper[7454]: I0319 11:53:51.617775 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Mar 19 11:53:51.619342 master-0 kubenswrapper[7454]: I0319 11:53:51.618268 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 19 11:53:51.619342 master-0 kubenswrapper[7454]: I0319 11:53:51.618688 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 19 11:53:51.621518 master-0 kubenswrapper[7454]: I0319 11:53:51.621013 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Mar 19 11:53:51.621518 master-0 kubenswrapper[7454]: I0319 11:53:51.621103 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Mar 19 11:53:51.621518 master-0 kubenswrapper[7454]: I0319 11:53:51.621138 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 19 11:53:51.621518 master-0 kubenswrapper[7454]: I0319 11:53:51.621143 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 19 11:53:51.621518 master-0 kubenswrapper[7454]: I0319 11:53:51.621193 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 19 11:53:51.621518 master-0 kubenswrapper[7454]: I0319 11:53:51.621240 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 19 11:53:51.621518 master-0 kubenswrapper[7454]: I0319 11:53:51.621266 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 19 11:53:51.621518 master-0 kubenswrapper[7454]: I0319 11:53:51.621475 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 19 11:53:51.621790 master-0 kubenswrapper[7454]: I0319 11:53:51.621587 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 19 11:53:51.621790 master-0 kubenswrapper[7454]: I0319 11:53:51.621702 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 19 11:53:51.625393 master-0 kubenswrapper[7454]: I0319 11:53:51.623850 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 19 11:53:51.625393 master-0 kubenswrapper[7454]: I0319 11:53:51.623977 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 19 11:53:51.625393 master-0 kubenswrapper[7454]: I0319 11:53:51.624049 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 19 11:53:51.625393 master-0 kubenswrapper[7454]: I0319 11:53:51.624156 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 19 11:53:51.625393 master-0 kubenswrapper[7454]: I0319 11:53:51.624775 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 19 11:53:51.625393 master-0 kubenswrapper[7454]: I0319 11:53:51.624939 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Mar 19 11:53:51.625771 master-0 kubenswrapper[7454]: I0319 11:53:51.625577 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 19 11:53:51.625771 master-0 kubenswrapper[7454]: I0319 11:53:51.625580 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 19 11:53:51.625916 master-0 kubenswrapper[7454]: I0319 11:53:51.625870 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Mar 19 11:53:51.626030 master-0 kubenswrapper[7454]: I0319 11:53:51.625998 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 19 11:53:51.626156 master-0 kubenswrapper[7454]: I0319 11:53:51.626129 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 19 11:53:51.626247 master-0 kubenswrapper[7454]: I0319 11:53:51.626225 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 19 11:53:51.626339 master-0 kubenswrapper[7454]: I0319 11:53:51.626311 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Mar 19 11:53:51.627292 master-0 kubenswrapper[7454]: I0319 11:53:51.626461 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 19 11:53:51.627292 master-0 kubenswrapper[7454]: I0319 11:53:51.626657 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 19 11:53:51.627292 master-0 kubenswrapper[7454]: I0319 11:53:51.626972 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 19 11:53:51.627292 master-0 kubenswrapper[7454]: I0319 11:53:51.627005 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 19 11:53:51.627292 master-0 kubenswrapper[7454]: I0319 11:53:51.627075 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Mar 19 11:53:51.627292 master-0 kubenswrapper[7454]: I0319 11:53:51.627150 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 19 11:53:51.627292 master-0 kubenswrapper[7454]: I0319 11:53:51.627249 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 19 11:53:51.627696 master-0 kubenswrapper[7454]: I0319 11:53:51.627513 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 19 11:53:51.627815 master-0 kubenswrapper[7454]: I0319 11:53:51.627739 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 19 11:53:51.627888 master-0 kubenswrapper[7454]: I0319 11:53:51.627791 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 19 11:53:51.627997 master-0 kubenswrapper[7454]: I0319 11:53:51.627524 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 19 11:53:51.628248 master-0 kubenswrapper[7454]: I0319 11:53:51.628024 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 19 11:53:51.628466 master-0 kubenswrapper[7454]: I0319 11:53:51.628379 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 19 11:53:51.628466 master-0 kubenswrapper[7454]: I0319 11:53:51.628417 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 19 11:53:51.628612 master-0 kubenswrapper[7454]: I0319 11:53:51.628513 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 19 11:53:51.628612 master-0 kubenswrapper[7454]: I0319 11:53:51.628537 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 19 11:53:51.628612 master-0 kubenswrapper[7454]: I0319 11:53:51.628598 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 19 11:53:51.628775 master-0 kubenswrapper[7454]: I0319 11:53:51.628613 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 19 11:53:51.628775 master-0 kubenswrapper[7454]: I0319 11:53:51.628625 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 19 11:53:51.628775 master-0 kubenswrapper[7454]: I0319 11:53:51.628648 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 19 11:53:51.628948 master-0 kubenswrapper[7454]: I0319 11:53:51.628860 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 19 11:53:51.629008 master-0 kubenswrapper[7454]: I0319 11:53:51.628996 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 19 11:53:51.629066 master-0 kubenswrapper[7454]: I0319 11:53:51.629027 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 19 11:53:51.629066 master-0 kubenswrapper[7454]: I0319 11:53:51.629043 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 19 11:53:51.629066 master-0 kubenswrapper[7454]: I0319 11:53:51.629056 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config"
Mar 19 11:53:51.629236 master-0 kubenswrapper[7454]: I0319 11:53:51.629083 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Mar 19 11:53:51.629236 master-0 kubenswrapper[7454]: I0319 11:53:51.629136 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 19 11:53:51.629236 master-0 kubenswrapper[7454]: I0319 11:53:51.629185 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 19 11:53:51.629236 master-0 kubenswrapper[7454]: I0319 11:53:51.629233 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 19 11:53:51.630644 master-0 kubenswrapper[7454]: I0319 11:53:51.630602 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 19 11:53:51.632167 master-0 kubenswrapper[7454]: I0319 11:53:51.632131 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 19 11:53:51.638307 master-0 kubenswrapper[7454]: I0319 11:53:51.638272 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Mar 19 11:53:51.638763 master-0 kubenswrapper[7454]: I0319 11:53:51.638713 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 19 11:53:51.645419 master-0 kubenswrapper[7454]: I0319 11:53:51.645378 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Mar 19 11:53:51.649687 master-0 kubenswrapper[7454]: I0319 11:53:51.649637 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 19 11:53:51.650964 master-0 kubenswrapper[7454]: I0319 11:53:51.650641 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 19 11:53:51.651463 master-0 kubenswrapper[7454]: I0319 11:53:51.651419 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 19 11:53:51.651645 master-0 kubenswrapper[7454]: I0319 11:53:51.651619 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 19 11:53:51.652679 master-0 kubenswrapper[7454]: I0319 11:53:51.651947 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 19 11:53:51.669913 master-0 kubenswrapper[7454]: I0319 11:53:51.669866 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 19 11:53:51.678632 master-0 kubenswrapper[7454]: I0319 11:53:51.678589 7454 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 19 11:53:51.689372 master-0 kubenswrapper[7454]: I0319 11:53:51.689344 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Mar 19 11:53:51.702502 master-0 kubenswrapper[7454]: I0319 11:53:51.702446 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 11:53:51.707636 master-0 kubenswrapper[7454]: I0319 11:53:51.707594 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 11:53:51.710466 master-0 kubenswrapper[7454]: I0319 11:53:51.710429 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 19 11:53:51.730334 master-0 kubenswrapper[7454]: I0319 11:53:51.730272 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 19 11:53:51.731488 master-0 kubenswrapper[7454]: I0319 11:53:51.731442 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6"
Mar 19 11:53:51.731535 master-0 kubenswrapper[7454]: I0319 11:53:51.731505 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-run-ovn-kubernetes\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 11:53:51.731636 master-0 kubenswrapper[7454]: I0319 11:53:51.731606 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1089ea24-add9-482e-9276-e6ded12052d7-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-qv4cg\" (UID: \"1089ea24-add9-482e-9276-e6ded12052d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg"
Mar 19 11:53:51.731786 master-0 kubenswrapper[7454]: I0319 11:53:51.731761 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName:
\"kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:51.731856 master-0 kubenswrapper[7454]: I0319 11:53:51.731838 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/0f97d998-530c-4d9d-a030-ca1d9d2d4490-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-6wzws\" (UID: \"0f97d998-530c-4d9d-a030-ca1d9d2d4490\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-6wzws" Mar 19 11:53:51.732227 master-0 kubenswrapper[7454]: I0319 11:53:51.732199 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/0f97d998-530c-4d9d-a030-ca1d9d2d4490-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-6wzws\" (UID: \"0f97d998-530c-4d9d-a030-ca1d9d2d4490\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-6wzws" Mar 19 11:53:51.732354 master-0 kubenswrapper[7454]: I0319 11:53:51.732193 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert\") pod \"catalog-operator-68f85b4d6c-2trz4\" (UID: \"bdcdb23d-ef1f-45e2-b9ac-7abf707637b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4" Mar 19 11:53:51.732580 master-0 kubenswrapper[7454]: I0319 11:53:51.732377 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fe245927-c937-4ec7-ab83-4900bade72cf-cni-binary-copy\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.732580 master-0 kubenswrapper[7454]: I0319 11:53:51.732425 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06df1b1b-154e-46f9-aee0-79a137c6c928-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-ptqdh\" (UID: \"06df1b1b-154e-46f9-aee0-79a137c6c928\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-ptqdh" Mar 19 11:53:51.732580 master-0 kubenswrapper[7454]: I0319 11:53:51.732449 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-cni-netd\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.732580 master-0 kubenswrapper[7454]: I0319 11:53:51.732492 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:51.732580 master-0 kubenswrapper[7454]: I0319 11:53:51.732519 7454 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7b28a5a-7aec-4894-b8e3-63a4104207f7-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-dc825\" (UID: \"e7b28a5a-7aec-4894-b8e3-63a4104207f7\") " pod="openshift-controller-manager/controller-manager-f5df8899c-dc825" Mar 19 11:53:51.732580 master-0 kubenswrapper[7454]: I0319 11:53:51.732555 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert\") pod \"olm-operator-5c9796789-8cldl\" (UID: \"87a3f546-e1c1-42a1-b80e-d45b6d5c0a04\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl" Mar 19 11:53:51.732580 master-0 kubenswrapper[7454]: I0319 11:53:51.732579 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h84l9\" (UniqueName: \"kubernetes.io/projected/f08c5930-44f0-48e4-80dd-2563f2733b2f-kube-api-access-h84l9\") pod \"openshift-apiserver-operator-d65958b8-mjs7x\" (UID: \"f08c5930-44f0-48e4-80dd-2563f2733b2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x" Mar 19 11:53:51.732758 master-0 kubenswrapper[7454]: I0319 11:53:51.732602 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ed2dbd1-aec4-4009-917a-933533912ab5-config\") pod \"openshift-controller-manager-operator-8c94f4649-gx4w8\" (UID: \"9ed2dbd1-aec4-4009-917a-933533912ab5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8" Mar 19 11:53:51.732758 master-0 kubenswrapper[7454]: I0319 11:53:51.732640 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6" Mar 19 11:53:51.732758 master-0 kubenswrapper[7454]: I0319 11:53:51.732661 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-multus-conf-dir\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.732758 master-0 kubenswrapper[7454]: I0319 11:53:51.732666 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06df1b1b-154e-46f9-aee0-79a137c6c928-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-ptqdh\" (UID: \"06df1b1b-154e-46f9-aee0-79a137c6c928\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-ptqdh" Mar 19 11:53:51.732758 master-0 kubenswrapper[7454]: I0319 11:53:51.732683 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8414b6b0-ee16-47a5-982b-ee58b136cfcf-webhook-cert\") pod \"network-node-identity-wd4nx\" (UID: \"8414b6b0-ee16-47a5-982b-ee58b136cfcf\") " pod="openshift-network-node-identity/network-node-identity-wd4nx" Mar 19 11:53:51.732758 master-0 kubenswrapper[7454]: I0319 11:53:51.732720 7454 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-pr7gk\" (UID: \"b0f5939c-48b1-4d6c-9712-9128a78d603b\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 11:53:51.732758 master-0 kubenswrapper[7454]: I0319 11:53:51.732742 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b80027fd-7b39-477a-a337-ff9bb08e7eeb-bound-sa-token\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" Mar 19 11:53:51.732950 master-0 kubenswrapper[7454]: I0319 11:53:51.732778 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e7b28a5a-7aec-4894-b8e3-63a4104207f7-client-ca\") pod \"controller-manager-f5df8899c-dc825\" (UID: \"e7b28a5a-7aec-4894-b8e3-63a4104207f7\") " pod="openshift-controller-manager/controller-manager-f5df8899c-dc825" Mar 19 11:53:51.732950 master-0 kubenswrapper[7454]: I0319 11:53:51.732831 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3017b5e-178e-49de-89d2-817a18398203-config\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" Mar 19 11:53:51.732950 master-0 kubenswrapper[7454]: I0319 11:53:51.732856 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpdts\" (UniqueName: \"kubernetes.io/projected/9702fc8c-4fe0-413b-b2d4-db23021d42b8-kube-api-access-tpdts\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" Mar 19 11:53:51.732950 master-0 kubenswrapper[7454]: I0319 11:53:51.732879 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/7241bf11-192e-47db-9d80-2324938ed34c-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-92c5d\" (UID: \"7241bf11-192e-47db-9d80-2324938ed34c\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d" Mar 19 11:53:51.732950 master-0 kubenswrapper[7454]: I0319 11:53:51.732860 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fe245927-c937-4ec7-ab83-4900bade72cf-cni-binary-copy\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.732950 master-0 kubenswrapper[7454]: I0319 11:53:51.732909 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnd9c\" (UniqueName: \"kubernetes.io/projected/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-kube-api-access-jnd9c\") pod \"catalog-operator-68f85b4d6c-2trz4\" (UID: \"bdcdb23d-ef1f-45e2-b9ac-7abf707637b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4" Mar 19 11:53:51.732950 master-0 kubenswrapper[7454]: I0319 11:53:51.732937 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/9d2db220-4d5b-4819-a910-b186e1e9fb3e-ovn-node-metrics-cert\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.733116 master-0 kubenswrapper[7454]: I0319 11:53:51.732968 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bf226d89-450d-4876-a113-345632b94ee9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-f6m2t\" (UID: \"bf226d89-450d-4876-a113-345632b94ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t" Mar 19 11:53:51.733116 master-0 kubenswrapper[7454]: I0319 11:53:51.732997 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/661b8957-a890-4032-9e57-45e2e0b35249-config\") pod \"service-ca-operator-b865698dc-sxsxt\" (UID: \"661b8957-a890-4032-9e57-45e2e0b35249\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt" Mar 19 11:53:51.733116 master-0 kubenswrapper[7454]: I0319 11:53:51.733021 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3017b5e-178e-49de-89d2-817a18398203-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" Mar 19 11:53:51.733116 master-0 kubenswrapper[7454]: I0319 11:53:51.733045 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-run-netns\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.733116 master-0 kubenswrapper[7454]: I0319 11:53:51.733071 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:51.733360 master-0 kubenswrapper[7454]: I0319 11:53:51.733334 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bf226d89-450d-4876-a113-345632b94ee9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-f6m2t\" (UID: \"bf226d89-450d-4876-a113-345632b94ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t" Mar 19 11:53:51.733417 master-0 kubenswrapper[7454]: I0319 11:53:51.733393 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ed2dbd1-aec4-4009-917a-933533912ab5-config\") pod \"openshift-controller-manager-operator-8c94f4649-gx4w8\" (UID: \"9ed2dbd1-aec4-4009-917a-933533912ab5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8" Mar 19 11:53:51.733447 master-0 kubenswrapper[7454]: I0319 11:53:51.733430 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d3017b5e-178e-49de-89d2-817a18398203-config\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" Mar 19 11:53:51.733671 master-0 kubenswrapper[7454]: I0319 11:53:51.733649 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/7241bf11-192e-47db-9d80-2324938ed34c-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-92c5d\" (UID: \"7241bf11-192e-47db-9d80-2324938ed34c\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d" Mar 19 11:53:51.733809 master-0 kubenswrapper[7454]: I0319 11:53:51.733757 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6" Mar 19 11:53:51.733966 master-0 kubenswrapper[7454]: I0319 11:53:51.733926 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3017b5e-178e-49de-89d2-817a18398203-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" Mar 19 11:53:51.734001 master-0 kubenswrapper[7454]: I0319 11:53:51.733937 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-pr7gk\" (UID: \"b0f5939c-48b1-4d6c-9712-9128a78d603b\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 11:53:51.734001 master-0 kubenswrapper[7454]: I0319 11:53:51.733976 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/661b8957-a890-4032-9e57-45e2e0b35249-config\") pod \"service-ca-operator-b865698dc-sxsxt\" (UID: \"661b8957-a890-4032-9e57-45e2e0b35249\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt" Mar 19 11:53:51.734065 master-0 kubenswrapper[7454]: I0319 11:53:51.734008 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk\" (UID: \"d9ab6ec4-eec9-4d27-8b43-2aaf954f098f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk" Mar 19 11:53:51.734092 master-0 kubenswrapper[7454]: I0319 11:53:51.734077 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-system-cni-dir\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.734118 master-0 kubenswrapper[7454]: I0319 11:53:51.734102 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs\") pod \"network-metrics-daemon-6t6sn\" (UID: \"398bcaca-1bea-4633-a78f-717e3d015ddd\") " pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:53:51.734182 master-0 kubenswrapper[7454]: I0319 11:53:51.734166 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x252z\" (UniqueName: \"kubernetes.io/projected/aef8e03f-0363-4e13-b7ca-4fa871d77c62-kube-api-access-x252z\") pod \"openshift-config-operator-95bf4f4d-nhvl4\" (UID: \"aef8e03f-0363-4e13-b7ca-4fa871d77c62\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" Mar 19 11:53:51.734256 master-0 kubenswrapper[7454]: I0319 11:53:51.734242 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xpc2\" (UniqueName: \"kubernetes.io/projected/19de6601-10d4-4112-a21f-0398d2b160d1-kube-api-access-6xpc2\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:51.734298 master-0 kubenswrapper[7454]: I0319 11:53:51.734284 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1089ea24-add9-482e-9276-e6ded12052d7-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-qv4cg\" (UID: \"1089ea24-add9-482e-9276-e6ded12052d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg" Mar 19 11:53:51.734349 master-0 kubenswrapper[7454]: I0319 11:53:51.734330 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7b28a5a-7aec-4894-b8e3-63a4104207f7-serving-cert\") pod \"controller-manager-f5df8899c-dc825\" (UID: \"e7b28a5a-7aec-4894-b8e3-63a4104207f7\") " pod="openshift-controller-manager/controller-manager-f5df8899c-dc825" Mar 19 11:53:51.734349 master-0 kubenswrapper[7454]: I0319 11:53:51.734332 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk\" (UID: \"d9ab6ec4-eec9-4d27-8b43-2aaf954f098f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk" Mar 19 11:53:51.734417 master-0 kubenswrapper[7454]: I0319 11:53:51.734398 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwfg5\" (UniqueName: \"kubernetes.io/projected/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-kube-api-access-hwfg5\") pod \"olm-operator-5c9796789-8cldl\" (UID: \"87a3f546-e1c1-42a1-b80e-d45b6d5c0a04\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl" Mar 19 11:53:51.734462 master-0 kubenswrapper[7454]: I0319 11:53:51.734442 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f08c5930-44f0-48e4-80dd-2563f2733b2f-config\") pod \"openshift-apiserver-operator-d65958b8-mjs7x\" (UID: \"f08c5930-44f0-48e4-80dd-2563f2733b2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x" Mar 19 11:53:51.734494 master-0 kubenswrapper[7454]: I0319 11:53:51.734477 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-92c5d\" (UID: \"7241bf11-192e-47db-9d80-2324938ed34c\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d" Mar 19 11:53:51.734566 master-0 kubenswrapper[7454]: I0319 11:53:51.734495 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1089ea24-add9-482e-9276-e6ded12052d7-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-qv4cg\" (UID: \"1089ea24-add9-482e-9276-e6ded12052d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg" Mar 19 11:53:51.734566 master-0 kubenswrapper[7454]: I0319 11:53:51.734531 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhqhb\" (UniqueName: \"kubernetes.io/projected/398bcaca-1bea-4633-a78f-717e3d015ddd-kube-api-access-fhqhb\") pod \"network-metrics-daemon-6t6sn\" (UID: \"398bcaca-1bea-4633-a78f-717e3d015ddd\") " pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:53:51.734618 master-0 kubenswrapper[7454]: I0319 11:53:51.734566 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:51.734618 master-0 kubenswrapper[7454]: I0319 11:53:51.734596 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-run-openvswitch\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.734683 master-0 kubenswrapper[7454]: I0319 11:53:51.734641 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f08c5930-44f0-48e4-80dd-2563f2733b2f-config\") pod \"openshift-apiserver-operator-d65958b8-mjs7x\" (UID: \"f08c5930-44f0-48e4-80dd-2563f2733b2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x" Mar 19 11:53:51.734712 master-0 kubenswrapper[7454]: I0319 11:53:51.734673 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5n89\" (UniqueName: \"kubernetes.io/projected/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f-kube-api-access-h5n89\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk\" (UID: \"d9ab6ec4-eec9-4d27-8b43-2aaf954f098f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk" Mar 19 11:53:51.734741 master-0 kubenswrapper[7454]: I0319 11:53:51.734707 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-var-lib-cni-multus\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.734741 master-0 kubenswrapper[7454]: I0319 11:53:51.734733 7454 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aef8e03f-0363-4e13-b7ca-4fa871d77c62-serving-cert\") pod \"openshift-config-operator-95bf4f4d-nhvl4\" (UID: \"aef8e03f-0363-4e13-b7ca-4fa871d77c62\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" Mar 19 11:53:51.734808 master-0 kubenswrapper[7454]: I0319 11:53:51.734763 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-node-log\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.734847 master-0 kubenswrapper[7454]: I0319 11:53:51.734784 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7b28a5a-7aec-4894-b8e3-63a4104207f7-config\") pod \"controller-manager-f5df8899c-dc825\" (UID: \"e7b28a5a-7aec-4894-b8e3-63a4104207f7\") " pod="openshift-controller-manager/controller-manager-f5df8899c-dc825" Mar 19 11:53:51.734847 master-0 kubenswrapper[7454]: I0319 11:53:51.734840 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/284768b8-9d70-4cf7-bace-8adc6b587186-metrics-tls\") pod \"network-operator-7bd846bfc4-nb8bk\" (UID: \"284768b8-9d70-4cf7-bace-8adc6b587186\") " pod="openshift-network-operator/network-operator-7bd846bfc4-nb8bk" Mar 19 11:53:51.734901 master-0 kubenswrapper[7454]: I0319 11:53:51.734859 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06df1b1b-154e-46f9-aee0-79a137c6c928-config\") pod \"openshift-kube-scheduler-operator-dddff6458-ptqdh\" (UID: \"06df1b1b-154e-46f9-aee0-79a137c6c928\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-ptqdh" Mar 19 11:53:51.734901 master-0 kubenswrapper[7454]: I0319 11:53:51.734876 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-log-socket\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.734901 master-0 kubenswrapper[7454]: I0319 11:53:51.734896 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bf226d89-450d-4876-a113-345632b94ee9-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-f6m2t\" (UID: \"bf226d89-450d-4876-a113-345632b94ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t" Mar 19 11:53:51.734985 master-0 kubenswrapper[7454]: I0319 11:53:51.734924 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g6zz\" (UniqueName: \"kubernetes.io/projected/616dbb32-6b65-4e44-a217-6b1be2844cc9-kube-api-access-7g6zz\") pod \"network-check-target-v66z4\" (UID: \"616dbb32-6b65-4e44-a217-6b1be2844cc9\") " pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 11:53:51.734985 master-0 kubenswrapper[7454]: I0319 11:53:51.734929 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/aef8e03f-0363-4e13-b7ca-4fa871d77c62-serving-cert\") pod \"openshift-config-operator-95bf4f4d-nhvl4\" (UID: \"aef8e03f-0363-4e13-b7ca-4fa871d77c62\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" Mar 19 11:53:51.734985 master-0 kubenswrapper[7454]: I0319 11:53:51.734964 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06df1b1b-154e-46f9-aee0-79a137c6c928-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-ptqdh\" (UID: \"06df1b1b-154e-46f9-aee0-79a137c6c928\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-ptqdh" Mar 19 11:53:51.735064 master-0 kubenswrapper[7454]: I0319 11:53:51.734989 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-systemd-units\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.735064 master-0 kubenswrapper[7454]: I0319 11:53:51.735010 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-864rg\" (UniqueName: \"kubernetes.io/projected/8414b6b0-ee16-47a5-982b-ee58b136cfcf-kube-api-access-864rg\") pod \"network-node-identity-wd4nx\" (UID: \"8414b6b0-ee16-47a5-982b-ee58b136cfcf\") " pod="openshift-network-node-identity/network-node-identity-wd4nx" Mar 19 11:53:51.735064 master-0 kubenswrapper[7454]: I0319 11:53:51.735029 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/aef8e03f-0363-4e13-b7ca-4fa871d77c62-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-nhvl4\" (UID: \"aef8e03f-0363-4e13-b7ca-4fa871d77c62\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" Mar 19 11:53:51.735175 master-0 kubenswrapper[7454]: I0319 11:53:51.735093 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06df1b1b-154e-46f9-aee0-79a137c6c928-config\") pod \"openshift-kube-scheduler-operator-dddff6458-ptqdh\" (UID: \"06df1b1b-154e-46f9-aee0-79a137c6c928\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-ptqdh" Mar 19 11:53:51.735175 master-0 kubenswrapper[7454]: I0319 11:53:51.735142 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-run-ovn\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.735175 master-0 kubenswrapper[7454]: I0319 11:53:51.735146 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/aef8e03f-0363-4e13-b7ca-4fa871d77c62-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-nhvl4\" (UID: \"aef8e03f-0363-4e13-b7ca-4fa871d77c62\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" Mar 19 11:53:51.735175 master-0 kubenswrapper[7454]: I0319 11:53:51.735162 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-os-release\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.735330 master-0 kubenswrapper[7454]: I0319 11:53:51.735189 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9702fc8c-4fe0-413b-b2d4-db23021d42b8-serving-cert\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" Mar 19 11:53:51.735330 master-0 kubenswrapper[7454]: I0319 11:53:51.735214 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bf226d89-450d-4876-a113-345632b94ee9-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-f6m2t\" (UID: \"bf226d89-450d-4876-a113-345632b94ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t" Mar 19 11:53:51.735330 master-0 kubenswrapper[7454]: I0319 11:53:51.735242 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-fz8cg\" (UID: \"806a4c30-7b93-4430-86da-f9e1f4f2d206\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg" Mar 19 11:53:51.735330 master-0 kubenswrapper[7454]: I0319 11:53:51.735270 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6" Mar 19 11:53:51.735330 master-0 kubenswrapper[7454]: I0319 11:53:51.735290 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-cnibin\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.735330 master-0 kubenswrapper[7454]: I0319 11:53:51.735318 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/06f67c28-34fd-4356-92f0-edd0986ad34e-iptables-alerter-script\") pod \"iptables-alerter-276t5\" (UID: \"06f67c28-34fd-4356-92f0-edd0986ad34e\") " pod="openshift-network-operator/iptables-alerter-276t5" Mar 19 11:53:51.735588 master-0 kubenswrapper[7454]: I0319 11:53:51.735347 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-etc-openvswitch\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.735588 master-0 kubenswrapper[7454]: I0319 11:53:51.735396 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: 
\"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:53:51.735588 master-0 kubenswrapper[7454]: I0319 11:53:51.735438 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9702fc8c-4fe0-413b-b2d4-db23021d42b8-serving-cert\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" Mar 19 11:53:51.735588 master-0 kubenswrapper[7454]: I0319 11:53:51.735457 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl6d7\" (UniqueName: \"kubernetes.io/projected/ab54833d-e57b-479d-b171-68155f6566f1-kube-api-access-gl6d7\") pod \"dns-operator-9c5679d8f-z6kvm\" (UID: \"ab54833d-e57b-479d-b171-68155f6566f1\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-z6kvm" Mar 19 11:53:51.735588 master-0 kubenswrapper[7454]: I0319 11:53:51.735506 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khv2z\" (UniqueName: \"kubernetes.io/projected/a7747954-a222-4809-8656-818203b55ee8-kube-api-access-khv2z\") pod \"csi-snapshot-controller-operator-5f5d689c6b-2chdm\" (UID: \"a7747954-a222-4809-8656-818203b55ee8\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-2chdm" Mar 19 11:53:51.735588 master-0 kubenswrapper[7454]: I0319 11:53:51.735540 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9702fc8c-4fe0-413b-b2d4-db23021d42b8-config\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" Mar 19 11:53:51.735829 master-0 kubenswrapper[7454]: I0319 11:53:51.735620 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/284768b8-9d70-4cf7-bace-8adc6b587186-metrics-tls\") pod \"network-operator-7bd846bfc4-nb8bk\" (UID: \"284768b8-9d70-4cf7-bace-8adc6b587186\") " pod="openshift-network-operator/network-operator-7bd846bfc4-nb8bk" Mar 19 11:53:51.735829 master-0 kubenswrapper[7454]: I0319 11:53:51.735657 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9702fc8c-4fe0-413b-b2d4-db23021d42b8-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" Mar 19 11:53:51.735829 master-0 kubenswrapper[7454]: I0319 11:53:51.735701 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5bmd\" (UniqueName: \"kubernetes.io/projected/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-kube-api-access-c5bmd\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6" Mar 19 11:53:51.735829 master-0 kubenswrapper[7454]: I0319 11:53:51.735724 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk\" (UID: \"d9ab6ec4-eec9-4d27-8b43-2aaf954f098f\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk" Mar 19 11:53:51.735829 master-0 kubenswrapper[7454]: I0319 11:53:51.735744 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-var-lib-kubelet\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.735829 master-0 kubenswrapper[7454]: I0319 11:53:51.735778 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/8414b6b0-ee16-47a5-982b-ee58b136cfcf-ovnkube-identity-cm\") pod \"network-node-identity-wd4nx\" (UID: \"8414b6b0-ee16-47a5-982b-ee58b136cfcf\") " pod="openshift-network-node-identity/network-node-identity-wd4nx" Mar 19 11:53:51.735829 master-0 kubenswrapper[7454]: I0319 11:53:51.735777 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9702fc8c-4fe0-413b-b2d4-db23021d42b8-config\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" Mar 19 11:53:51.736213 master-0 kubenswrapper[7454]: I0319 11:53:51.735865 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9d2db220-4d5b-4819-a910-b186e1e9fb3e-ovnkube-script-lib\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.736213 master-0 kubenswrapper[7454]: I0319 11:53:51.735908 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fe245927-c937-4ec7-ab83-4900bade72cf-multus-daemon-config\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.736213 master-0 kubenswrapper[7454]: I0319 11:53:51.735948 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19de6601-10d4-4112-a21f-0398d2b160d1-config\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:51.736213 master-0 kubenswrapper[7454]: I0319 11:53:51.735984 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-multus-socket-dir-parent\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.736213 master-0 kubenswrapper[7454]: I0319 11:53:51.736015 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-run-multus-certs\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.736213 master-0 kubenswrapper[7454]: I0319 11:53:51.736017 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/9702fc8c-4fe0-413b-b2d4-db23021d42b8-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" Mar 19 11:53:51.736213 master-0 kubenswrapper[7454]: I0319 11:53:51.736051 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4hsp\" (UniqueName: \"kubernetes.io/projected/fe245927-c937-4ec7-ab83-4900bade72cf-kube-api-access-s4hsp\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.736213 master-0 kubenswrapper[7454]: I0319 11:53:51.736086 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tqdb\" (UniqueName: \"kubernetes.io/projected/b0f5939c-48b1-4d6c-9712-9128a78d603b-kube-api-access-6tqdb\") pod \"marketplace-operator-89ccd998f-pr7gk\" (UID: \"b0f5939c-48b1-4d6c-9712-9128a78d603b\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 11:53:51.736213 master-0 kubenswrapper[7454]: I0319 11:53:51.736102 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fe245927-c937-4ec7-ab83-4900bade72cf-multus-daemon-config\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.736213 master-0 kubenswrapper[7454]: I0319 11:53:51.736119 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-kubelet\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.736213 master-0 kubenswrapper[7454]: I0319 11:53:51.736161 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcxqj\" (UniqueName: \"kubernetes.io/projected/bf226d89-450d-4876-a113-345632b94ee9-kube-api-access-wcxqj\") pod \"ovnkube-control-plane-57f769d897-f6m2t\" (UID: \"bf226d89-450d-4876-a113-345632b94ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t" Mar 19 11:53:51.736213 master-0 kubenswrapper[7454]: I0319 11:53:51.736171 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk\" (UID: \"d9ab6ec4-eec9-4d27-8b43-2aaf954f098f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk" Mar 19 11:53:51.736213 master-0 kubenswrapper[7454]: I0319 11:53:51.736215 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bf226d89-450d-4876-a113-345632b94ee9-env-overrides\") pod \"ovnkube-control-plane-57f769d897-f6m2t\" (UID: \"bf226d89-450d-4876-a113-345632b94ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t" Mar 19 11:53:51.736882 master-0 kubenswrapper[7454]: I0319 11:53:51.736260 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls\") pod \"dns-operator-9c5679d8f-z6kvm\" (UID: 
\"ab54833d-e57b-479d-b171-68155f6566f1\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-z6kvm" Mar 19 11:53:51.736882 master-0 kubenswrapper[7454]: I0319 11:53:51.736296 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsk9d\" (UniqueName: \"kubernetes.io/projected/9ed2dbd1-aec4-4009-917a-933533912ab5-kube-api-access-gsk9d\") pod \"openshift-controller-manager-operator-8c94f4649-gx4w8\" (UID: \"9ed2dbd1-aec4-4009-917a-933533912ab5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8" Mar 19 11:53:51.736882 master-0 kubenswrapper[7454]: I0319 11:53:51.736321 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/284768b8-9d70-4cf7-bace-8adc6b587186-host-etc-kube\") pod \"network-operator-7bd846bfc4-nb8bk\" (UID: \"284768b8-9d70-4cf7-bace-8adc6b587186\") " pod="openshift-network-operator/network-operator-7bd846bfc4-nb8bk" Mar 19 11:53:51.736882 master-0 kubenswrapper[7454]: I0319 11:53:51.736347 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p6vn\" (UniqueName: \"kubernetes.io/projected/284768b8-9d70-4cf7-bace-8adc6b587186-kube-api-access-8p6vn\") pod \"network-operator-7bd846bfc4-nb8bk\" (UID: \"284768b8-9d70-4cf7-bace-8adc6b587186\") " pod="openshift-network-operator/network-operator-7bd846bfc4-nb8bk" Mar 19 11:53:51.736882 master-0 kubenswrapper[7454]: I0319 11:53:51.736371 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-hostroot\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.736882 master-0 kubenswrapper[7454]: I0319 11:53:51.736389 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-slash\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.736882 master-0 kubenswrapper[7454]: I0319 11:53:51.736390 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bf226d89-450d-4876-a113-345632b94ee9-env-overrides\") pod \"ovnkube-control-plane-57f769d897-f6m2t\" (UID: \"bf226d89-450d-4876-a113-345632b94ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t" Mar 19 11:53:51.736882 master-0 kubenswrapper[7454]: I0319 11:53:51.736414 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs4jf\" (UniqueName: \"kubernetes.io/projected/b80027fd-7b39-477a-a337-ff9bb08e7eeb-kube-api-access-hs4jf\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" Mar 19 11:53:51.736882 master-0 kubenswrapper[7454]: I0319 11:53:51.736416 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19de6601-10d4-4112-a21f-0398d2b160d1-config\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " 
pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:51.736882 master-0 kubenswrapper[7454]: I0319 11:53:51.736573 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9d2db220-4d5b-4819-a910-b186e1e9fb3e-ovnkube-config\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.736882 master-0 kubenswrapper[7454]: I0319 11:53:51.736605 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1089ea24-add9-482e-9276-e6ded12052d7-config\") pod \"kube-apiserver-operator-8b68b9d9b-qv4cg\" (UID: \"1089ea24-add9-482e-9276-e6ded12052d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg" Mar 19 11:53:51.736882 master-0 kubenswrapper[7454]: I0319 11:53:51.736630 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw" Mar 19 11:53:51.736882 master-0 kubenswrapper[7454]: I0319 11:53:51.736654 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv6bc\" (UniqueName: \"kubernetes.io/projected/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-kube-api-access-pv6bc\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw" Mar 19 11:53:51.736882 master-0 kubenswrapper[7454]: I0319 11:53:51.736677 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hq8f\" (UniqueName: \"kubernetes.io/projected/661b8957-a890-4032-9e57-45e2e0b35249-kube-api-access-8hq8f\") pod \"service-ca-operator-b865698dc-sxsxt\" (UID: \"661b8957-a890-4032-9e57-45e2e0b35249\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt" Mar 19 11:53:51.736882 master-0 kubenswrapper[7454]: I0319 11:53:51.736702 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/c2dbd8b3-0e02-4747-a166-80aa6a94b060-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-cg9pq\" (UID: \"c2dbd8b3-0e02-4747-a166-80aa6a94b060\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cg9pq" Mar 19 11:53:51.737282 master-0 kubenswrapper[7454]: I0319 11:53:51.736957 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9d2db220-4d5b-4819-a910-b186e1e9fb3e-ovnkube-config\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.737282 master-0 kubenswrapper[7454]: I0319 11:53:51.737068 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-var-lib-openvswitch\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" 
Mar 19 11:53:51.737282 master-0 kubenswrapper[7454]: I0319 11:53:51.737096 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.737282 master-0 kubenswrapper[7454]: I0319 11:53:51.737115 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1089ea24-add9-482e-9276-e6ded12052d7-config\") pod \"kube-apiserver-operator-8b68b9d9b-qv4cg\" (UID: \"1089ea24-add9-482e-9276-e6ded12052d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg" Mar 19 11:53:51.737282 master-0 kubenswrapper[7454]: I0319 11:53:51.737121 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b80027fd-7b39-477a-a337-ff9bb08e7eeb-trusted-ca\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" Mar 19 11:53:51.737282 master-0 kubenswrapper[7454]: I0319 11:53:51.737171 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wshb2\" (UniqueName: \"kubernetes.io/projected/9d2db220-4d5b-4819-a910-b186e1e9fb3e-kube-api-access-wshb2\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.737282 master-0 kubenswrapper[7454]: I0319 11:53:51.737223 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/c2dbd8b3-0e02-4747-a166-80aa6a94b060-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-cg9pq\" (UID: \"c2dbd8b3-0e02-4747-a166-80aa6a94b060\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cg9pq" Mar 19 11:53:51.737478 master-0 kubenswrapper[7454]: I0319 11:53:51.737288 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhdpf\" (UniqueName: \"kubernetes.io/projected/e7b28a5a-7aec-4894-b8e3-63a4104207f7-kube-api-access-jhdpf\") pod \"controller-manager-f5df8899c-dc825\" (UID: \"e7b28a5a-7aec-4894-b8e3-63a4104207f7\") " pod="openshift-controller-manager/controller-manager-f5df8899c-dc825" Mar 19 11:53:51.737478 master-0 kubenswrapper[7454]: I0319 11:53:51.737316 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-run-k8s-cni-cncf-io\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.737478 master-0 kubenswrapper[7454]: I0319 11:53:51.737324 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw" Mar 19 11:53:51.737478 master-0 
kubenswrapper[7454]: I0319 11:53:51.737370 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-etc-kubernetes\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.737478 master-0 kubenswrapper[7454]: I0319 11:53:51.737398 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b80027fd-7b39-477a-a337-ff9bb08e7eeb-trusted-ca\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" Mar 19 11:53:51.737478 master-0 kubenswrapper[7454]: I0319 11:53:51.737399 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-6j2nj\" (UID: \"beb562de-402b-4d9f-b5ed-090b60847a95\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj" Mar 19 11:53:51.737478 master-0 kubenswrapper[7454]: I0319 11:53:51.737439 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-multus-cni-dir\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.737665 master-0 kubenswrapper[7454]: I0319 11:53:51.737520 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/c2dbd8b3-0e02-4747-a166-80aa6a94b060-operand-assets\") pod \"cluster-olm-operator-67dcd4998-cg9pq\" (UID: \"c2dbd8b3-0e02-4747-a166-80aa6a94b060\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cg9pq" Mar 19 11:53:51.737665 master-0 kubenswrapper[7454]: I0319 11:53:51.737551 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7044a7b3-4fac-40af-a31c-054a1a1db26b-cni-binary-copy\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:53:51.737665 master-0 kubenswrapper[7454]: I0319 11:53:51.737575 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/7044a7b3-4fac-40af-a31c-054a1a1db26b-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:53:51.737665 master-0 kubenswrapper[7454]: I0319 11:53:51.737603 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5mkm\" (UniqueName: \"kubernetes.io/projected/7241bf11-192e-47db-9d80-2324938ed34c-kube-api-access-s5mkm\") pod \"cluster-monitoring-operator-58845fbb57-92c5d\" (UID: \"7241bf11-192e-47db-9d80-2324938ed34c\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d" Mar 19 11:53:51.737665 master-0 kubenswrapper[7454]: I0319 11:53:51.737644 7454 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/63c12a89-1b49-4eba-8f5a-551b10d2246b-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:51.738128 master-0 kubenswrapper[7454]: I0319 11:53:51.737670 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/85912908-c447-4868-871b-82c5eadbfdbe-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v" Mar 19 11:53:51.738128 master-0 kubenswrapper[7454]: I0319 11:53:51.737900 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v" Mar 19 11:53:51.738128 master-0 kubenswrapper[7454]: I0319 11:53:51.737938 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2mdn\" (UniqueName: \"kubernetes.io/projected/944eac68-e72b-4aed-b5dc-d7d9703178a3-kube-api-access-m2mdn\") pod \"csi-snapshot-controller-64854d9cff-6m654\" (UID: \"944eac68-e72b-4aed-b5dc-d7d9703178a3\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" Mar 19 11:53:51.738128 master-0 kubenswrapper[7454]: I0319 11:53:51.737965 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shfs6\" (UniqueName: \"kubernetes.io/projected/7044a7b3-4fac-40af-a31c-054a1a1db26b-kube-api-access-shfs6\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:53:51.738128 master-0 kubenswrapper[7454]: I0319 11:53:51.738025 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-cnibin\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:53:51.738128 master-0 kubenswrapper[7454]: I0319 11:53:51.738036 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/c2dbd8b3-0e02-4747-a166-80aa6a94b060-operand-assets\") pod \"cluster-olm-operator-67dcd4998-cg9pq\" (UID: \"c2dbd8b3-0e02-4747-a166-80aa6a94b060\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cg9pq" Mar 19 11:53:51.738128 master-0 kubenswrapper[7454]: I0319 11:53:51.738052 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdpj4\" (UniqueName: \"kubernetes.io/projected/06f67c28-34fd-4356-92f0-edd0986ad34e-kube-api-access-bdpj4\") pod \"iptables-alerter-276t5\" (UID: \"06f67c28-34fd-4356-92f0-edd0986ad34e\") " pod="openshift-network-operator/iptables-alerter-276t5" Mar 19 11:53:51.738128 master-0 kubenswrapper[7454]: I0319 11:53:51.738109 7454 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ed2dbd1-aec4-4009-917a-933533912ab5-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-gx4w8\" (UID: \"9ed2dbd1-aec4-4009-917a-933533912ab5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8" Mar 19 11:53:51.738128 master-0 kubenswrapper[7454]: I0319 11:53:51.738132 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7044a7b3-4fac-40af-a31c-054a1a1db26b-cni-binary-copy\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:53:51.738128 master-0 kubenswrapper[7454]: I0319 11:53:51.738146 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/19de6601-10d4-4112-a21f-0398d2b160d1-images\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:51.738732 master-0 kubenswrapper[7454]: I0319 11:53:51.738176 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw" Mar 19 11:53:51.738732 master-0 kubenswrapper[7454]: I0319 11:53:51.738205 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/661b8957-a890-4032-9e57-45e2e0b35249-serving-cert\") pod \"service-ca-operator-b865698dc-sxsxt\" (UID: \"661b8957-a890-4032-9e57-45e2e0b35249\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt" Mar 19 11:53:51.738732 master-0 kubenswrapper[7454]: I0319 11:53:51.738231 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-os-release\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:53:51.738732 master-0 kubenswrapper[7454]: I0319 11:53:51.738253 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7044a7b3-4fac-40af-a31c-054a1a1db26b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:53:51.738732 master-0 kubenswrapper[7454]: I0319 11:53:51.738274 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2151eb84-177e-459c-be71-f48465323ac2-config\") pod \"kube-controller-manager-operator-ff989d6cc-fjn4b\" (UID: \"2151eb84-177e-459c-be71-f48465323ac2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fjn4b" Mar 19 11:53:51.738732 master-0 kubenswrapper[7454]: I0319 11:53:51.738295 7454 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-images\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw" Mar 19 11:53:51.738732 master-0 kubenswrapper[7454]: I0319 11:53:51.738353 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/19de6601-10d4-4112-a21f-0398d2b160d1-images\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:51.738732 master-0 kubenswrapper[7454]: I0319 11:53:51.738454 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-images\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw" Mar 19 11:53:51.738732 master-0 kubenswrapper[7454]: I0319 11:53:51.738581 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/7044a7b3-4fac-40af-a31c-054a1a1db26b-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:53:51.738732 master-0 kubenswrapper[7454]: I0319 11:53:51.738672 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ed2dbd1-aec4-4009-917a-933533912ab5-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-gx4w8\" (UID: \"9ed2dbd1-aec4-4009-917a-933533912ab5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8" Mar 19 11:53:51.738732 master-0 kubenswrapper[7454]: I0319 11:53:51.738737 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/661b8957-a890-4032-9e57-45e2e0b35249-serving-cert\") pod \"service-ca-operator-b865698dc-sxsxt\" (UID: \"661b8957-a890-4032-9e57-45e2e0b35249\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt" Mar 19 11:53:51.739046 master-0 kubenswrapper[7454]: I0319 11:53:51.738846 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9702fc8c-4fe0-413b-b2d4-db23021d42b8-etcd-ca\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" Mar 19 11:53:51.739046 master-0 kubenswrapper[7454]: I0319 11:53:51.738914 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-system-cni-dir\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:53:51.739046 master-0 kubenswrapper[7454]: I0319 11:53:51.738962 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2151eb84-177e-459c-be71-f48465323ac2-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-fjn4b\" (UID: \"2151eb84-177e-459c-be71-f48465323ac2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fjn4b" Mar 19 11:53:51.739046 master-0 kubenswrapper[7454]: I0319 11:53:51.738997 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9d2db220-4d5b-4819-a910-b186e1e9fb3e-env-overrides\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.739046 master-0 kubenswrapper[7454]: I0319 11:53:51.738999 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2151eb84-177e-459c-be71-f48465323ac2-config\") pod \"kube-controller-manager-operator-ff989d6cc-fjn4b\" (UID: \"2151eb84-177e-459c-be71-f48465323ac2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fjn4b" Mar 19 11:53:51.739168 master-0 kubenswrapper[7454]: I0319 11:53:51.739103 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9702fc8c-4fe0-413b-b2d4-db23021d42b8-etcd-ca\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" Mar 19 11:53:51.739168 master-0 kubenswrapper[7454]: I0319 11:53:51.739121 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2151eb84-177e-459c-be71-f48465323ac2-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-fjn4b\" (UID: \"2151eb84-177e-459c-be71-f48465323ac2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fjn4b" Mar 19 11:53:51.739168 master-0 kubenswrapper[7454]: I0319 11:53:51.739122 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3017b5e-178e-49de-89d2-817a18398203-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" Mar 19 11:53:51.739168 master-0 kubenswrapper[7454]: I0319 11:53:51.739126 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7044a7b3-4fac-40af-a31c-054a1a1db26b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:53:51.739271 master-0 kubenswrapper[7454]: I0319 11:53:51.739208 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f08c5930-44f0-48e4-80dd-2563f2733b2f-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-mjs7x\" (UID: \"f08c5930-44f0-48e4-80dd-2563f2733b2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x" Mar 19 11:53:51.739271 master-0 kubenswrapper[7454]: I0319 11:53:51.739225 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/9d2db220-4d5b-4819-a910-b186e1e9fb3e-env-overrides\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.739271 master-0 kubenswrapper[7454]: I0319 11:53:51.739239 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-run-systemd\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.739383 master-0 kubenswrapper[7454]: I0319 11:53:51.739340 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/63c12a89-1b49-4eba-8f5a-551b10d2246b-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:51.739383 master-0 kubenswrapper[7454]: I0319 11:53:51.739361 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3017b5e-178e-49de-89d2-817a18398203-serving-cert\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" Mar 19 11:53:51.739439 master-0 kubenswrapper[7454]: I0319 11:53:51.739398 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85912908-c447-4868-871b-82c5eadbfdbe-kube-api-access\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v" Mar 19 11:53:51.739439 master-0 kubenswrapper[7454]: I0319 11:53:51.739408 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f08c5930-44f0-48e4-80dd-2563f2733b2f-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-mjs7x\" (UID: \"f08c5930-44f0-48e4-80dd-2563f2733b2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x" Mar 19 11:53:51.739439 master-0 kubenswrapper[7454]: I0319 11:53:51.739422 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/06f67c28-34fd-4356-92f0-edd0986ad34e-host-slash\") pod \"iptables-alerter-276t5\" (UID: \"06f67c28-34fd-4356-92f0-edd0986ad34e\") " pod="openshift-network-operator/iptables-alerter-276t5" Mar 19 11:53:51.739439 master-0 kubenswrapper[7454]: I0319 11:53:51.739432 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3017b5e-178e-49de-89d2-817a18398203-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" Mar 19 11:53:51.739547 master-0 kubenswrapper[7454]: I0319 11:53:51.739446 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9702fc8c-4fe0-413b-b2d4-db23021d42b8-etcd-client\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: 
\"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" Mar 19 11:53:51.739547 master-0 kubenswrapper[7454]: I0319 11:53:51.739478 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6wm6\" (UniqueName: \"kubernetes.io/projected/d3017b5e-178e-49de-89d2-817a18398203-kube-api-access-b6wm6\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" Mar 19 11:53:51.739547 master-0 kubenswrapper[7454]: I0319 11:53:51.739505 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/85912908-c447-4868-871b-82c5eadbfdbe-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v" Mar 19 11:53:51.739547 master-0 kubenswrapper[7454]: I0319 11:53:51.739528 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" Mar 19 11:53:51.739643 master-0 kubenswrapper[7454]: I0319 11:53:51.739555 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8414b6b0-ee16-47a5-982b-ee58b136cfcf-env-overrides\") pod \"network-node-identity-wd4nx\" (UID: \"8414b6b0-ee16-47a5-982b-ee58b136cfcf\") " pod="openshift-network-node-identity/network-node-identity-wd4nx" Mar 19 11:53:51.739643 master-0 kubenswrapper[7454]: I0319 11:53:51.739559 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3017b5e-178e-49de-89d2-817a18398203-serving-cert\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" Mar 19 11:53:51.739643 master-0 kubenswrapper[7454]: I0319 11:53:51.739576 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85912908-c447-4868-871b-82c5eadbfdbe-service-ca\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v" Mar 19 11:53:51.739718 master-0 kubenswrapper[7454]: I0319 11:53:51.739667 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfl29\" (UniqueName: \"kubernetes.io/projected/806a4c30-7b93-4430-86da-f9e1f4f2d206-kube-api-access-dfl29\") pod \"multus-admission-controller-5dbbb8b86f-fz8cg\" (UID: \"806a4c30-7b93-4430-86da-f9e1f4f2d206\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg" Mar 19 11:53:51.739718 master-0 kubenswrapper[7454]: I0319 11:53:51.739668 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9702fc8c-4fe0-413b-b2d4-db23021d42b8-etcd-client\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: 
\"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" Mar 19 11:53:51.739718 master-0 kubenswrapper[7454]: I0319 11:53:51.739702 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-pr7gk\" (UID: \"b0f5939c-48b1-4d6c-9712-9128a78d603b\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 11:53:51.739824 master-0 kubenswrapper[7454]: I0319 11:53:51.739753 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86r6z\" (UniqueName: \"kubernetes.io/projected/d975e831-7348-41b9-9622-f4a503674c38-kube-api-access-86r6z\") pod \"migrator-8487694857-99fgs\" (UID: \"d975e831-7348-41b9-9622-f4a503674c38\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-99fgs" Mar 19 11:53:51.739824 master-0 kubenswrapper[7454]: I0319 11:53:51.739774 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mr6d\" (UniqueName: \"kubernetes.io/projected/beb562de-402b-4d9f-b5ed-090b60847a95-kube-api-access-9mr6d\") pod \"package-server-manager-7b95f86987-6j2nj\" (UID: \"beb562de-402b-4d9f-b5ed-090b60847a95\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj" Mar 19 11:53:51.739824 master-0 kubenswrapper[7454]: I0319 11:53:51.739776 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85912908-c447-4868-871b-82c5eadbfdbe-service-ca\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v" Mar 19 11:53:51.739908 master-0 kubenswrapper[7454]: I0319 11:53:51.739846 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2151eb84-177e-459c-be71-f48465323ac2-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-fjn4b\" (UID: \"2151eb84-177e-459c-be71-f48465323ac2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fjn4b" Mar 19 11:53:51.739908 master-0 kubenswrapper[7454]: I0319 11:53:51.739868 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-var-lib-cni-bin\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.739967 master-0 kubenswrapper[7454]: I0319 11:53:51.739913 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bst2w\" (UniqueName: \"kubernetes.io/projected/63c12a89-1b49-4eba-8f5a-551b10d2246b-kube-api-access-bst2w\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:51.739967 master-0 kubenswrapper[7454]: I0319 11:53:51.739938 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npc2t\" (UniqueName: \"kubernetes.io/projected/c2dbd8b3-0e02-4747-a166-80aa6a94b060-kube-api-access-npc2t\") 
pod \"cluster-olm-operator-67dcd4998-cg9pq\" (UID: \"c2dbd8b3-0e02-4747-a166-80aa6a94b060\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cg9pq" Mar 19 11:53:51.740018 master-0 kubenswrapper[7454]: I0319 11:53:51.739976 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-run-netns\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.740067 master-0 kubenswrapper[7454]: I0319 11:53:51.740031 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-cni-bin\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.740096 master-0 kubenswrapper[7454]: I0319 11:53:51.740085 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zntzt\" (UniqueName: \"kubernetes.io/projected/0f97d998-530c-4d9d-a030-ca1d9d2d4490-kube-api-access-zntzt\") pod \"cluster-storage-operator-7d87854d6-6wzws\" (UID: \"0f97d998-530c-4d9d-a030-ca1d9d2d4490\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-6wzws" Mar 19 11:53:51.749681 master-0 kubenswrapper[7454]: I0319 11:53:51.749648 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 19 11:53:51.770005 master-0 kubenswrapper[7454]: I0319 11:53:51.769965 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 19 11:53:51.791862 master-0 kubenswrapper[7454]: I0319 11:53:51.791808 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 19 11:53:51.794708 master-0 kubenswrapper[7454]: I0319 11:53:51.794674 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8414b6b0-ee16-47a5-982b-ee58b136cfcf-webhook-cert\") pod \"network-node-identity-wd4nx\" (UID: \"8414b6b0-ee16-47a5-982b-ee58b136cfcf\") " pod="openshift-network-node-identity/network-node-identity-wd4nx" Mar 19 11:53:51.810448 master-0 kubenswrapper[7454]: I0319 11:53:51.810324 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 19 11:53:51.820165 master-0 kubenswrapper[7454]: I0319 11:53:51.820066 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8414b6b0-ee16-47a5-982b-ee58b136cfcf-env-overrides\") pod \"network-node-identity-wd4nx\" (UID: \"8414b6b0-ee16-47a5-982b-ee58b136cfcf\") " pod="openshift-network-node-identity/network-node-identity-wd4nx" Mar 19 11:53:51.830436 master-0 kubenswrapper[7454]: I0319 11:53:51.830391 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 19 11:53:51.841655 master-0 kubenswrapper[7454]: I0319 11:53:51.841578 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-var-lib-kubelet\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.841860 master-0 kubenswrapper[7454]: I0319 11:53:51.841711 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-var-lib-kubelet\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.842027 master-0 kubenswrapper[7454]: I0319 11:53:51.841884 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-multus-socket-dir-parent\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.842027 master-0 kubenswrapper[7454]: I0319 11:53:51.841944 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-run-multus-certs\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.842027 master-0 kubenswrapper[7454]: I0319 11:53:51.842010 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-multus-socket-dir-parent\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.842027 master-0 kubenswrapper[7454]: I0319 11:53:51.842021 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-kubelet\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.842184 master-0 kubenswrapper[7454]: I0319 11:53:51.842042 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-run-multus-certs\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.842184 master-0 kubenswrapper[7454]: I0319 11:53:51.842085 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls\") pod \"dns-operator-9c5679d8f-z6kvm\" (UID: \"ab54833d-e57b-479d-b171-68155f6566f1\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-z6kvm" Mar 19 11:53:51.842184 master-0 kubenswrapper[7454]: I0319 11:53:51.842145 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-kubelet\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.842184 master-0 kubenswrapper[7454]: I0319 11:53:51.842144 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/284768b8-9d70-4cf7-bace-8adc6b587186-host-etc-kube\") pod \"network-operator-7bd846bfc4-nb8bk\" (UID: \"284768b8-9d70-4cf7-bace-8adc6b587186\") " pod="openshift-network-operator/network-operator-7bd846bfc4-nb8bk" Mar 19 11:53:51.842320 master-0 kubenswrapper[7454]: I0319 11:53:51.842191 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-hostroot\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.842320 master-0 kubenswrapper[7454]: I0319 11:53:51.842208 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-slash\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.842320 master-0 kubenswrapper[7454]: I0319 11:53:51.842244 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-var-lib-openvswitch\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.842320 master-0 kubenswrapper[7454]: I0319 11:53:51.842264 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.842320 master-0 kubenswrapper[7454]: I0319 11:53:51.842287 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhdpf\" (UniqueName: \"kubernetes.io/projected/e7b28a5a-7aec-4894-b8e3-63a4104207f7-kube-api-access-jhdpf\") pod \"controller-manager-f5df8899c-dc825\" (UID: \"e7b28a5a-7aec-4894-b8e3-63a4104207f7\") " pod="openshift-controller-manager/controller-manager-f5df8899c-dc825" Mar 19 11:53:51.842320 master-0 kubenswrapper[7454]: I0319 11:53:51.842304 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-run-k8s-cni-cncf-io\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.842320 master-0 kubenswrapper[7454]: I0319 11:53:51.842322 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-etc-kubernetes\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.842605 master-0 kubenswrapper[7454]: I0319 11:53:51.842339 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-multus-cni-dir\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.842605 master-0 kubenswrapper[7454]: I0319 11:53:51.842343 7454 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/284768b8-9d70-4cf7-bace-8adc6b587186-host-etc-kube\") pod \"network-operator-7bd846bfc4-nb8bk\" (UID: \"284768b8-9d70-4cf7-bace-8adc6b587186\") " pod="openshift-network-operator/network-operator-7bd846bfc4-nb8bk" Mar 19 11:53:51.842605 master-0 kubenswrapper[7454]: I0319 11:53:51.842357 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-6j2nj\" (UID: \"beb562de-402b-4d9f-b5ed-090b60847a95\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj" Mar 19 11:53:51.842605 master-0 kubenswrapper[7454]: E0319 11:53:51.842430 7454 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 19 11:53:51.842605 master-0 kubenswrapper[7454]: E0319 11:53:51.842551 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert podName:beb562de-402b-4d9f-b5ed-090b60847a95 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:52.342534382 +0000 UTC m=+1.973000295 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-6j2nj" (UID: "beb562de-402b-4d9f-b5ed-090b60847a95") : secret "package-server-manager-serving-cert" not found Mar 19 11:53:51.842605 master-0 kubenswrapper[7454]: I0319 11:53:51.842549 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/85912908-c447-4868-871b-82c5eadbfdbe-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v" Mar 19 11:53:51.842859 master-0 kubenswrapper[7454]: E0319 11:53:51.842685 7454 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 19 11:53:51.842859 master-0 kubenswrapper[7454]: E0319 11:53:51.842750 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls podName:ab54833d-e57b-479d-b171-68155f6566f1 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:52.342726578 +0000 UTC m=+1.973192531 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls") pod "dns-operator-9c5679d8f-z6kvm" (UID: "ab54833d-e57b-479d-b171-68155f6566f1") : secret "metrics-tls" not found Mar 19 11:53:51.842945 master-0 kubenswrapper[7454]: I0319 11:53:51.842884 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-run-k8s-cni-cncf-io\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.842945 master-0 kubenswrapper[7454]: I0319 11:53:51.842926 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-etc-kubernetes\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.843023 master-0 kubenswrapper[7454]: I0319 11:53:51.842985 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-multus-cni-dir\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.843023 master-0 kubenswrapper[7454]: I0319 11:53:51.842998 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-slash\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.843092 master-0 kubenswrapper[7454]: I0319 11:53:51.843012 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-hostroot\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.843092 master-0 kubenswrapper[7454]: I0319 11:53:51.842442 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/85912908-c447-4868-871b-82c5eadbfdbe-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v" Mar 19 11:53:51.843163 master-0 kubenswrapper[7454]: I0319 11:53:51.843130 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.843201 master-0 kubenswrapper[7454]: I0319 11:53:51.843179 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-var-lib-openvswitch\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.843289 master-0 kubenswrapper[7454]: I0319 11:53:51.843249 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v" Mar 19 11:53:51.843289 master-0 kubenswrapper[7454]: E0319 11:53:51.843275 7454 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 19 11:53:51.843383 master-0 kubenswrapper[7454]: I0319 11:53:51.843296 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-cnibin\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:53:51.843383 master-0 kubenswrapper[7454]: E0319 11:53:51.843309 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert podName:85912908-c447-4868-871b-82c5eadbfdbe nodeName:}" failed. No retries permitted until 2026-03-19 11:53:52.343301186 +0000 UTC m=+1.973767099 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert") pod "cluster-version-operator-56d8475767-gjj5v" (UID: "85912908-c447-4868-871b-82c5eadbfdbe") : secret "cluster-version-operator-serving-cert" not found Mar 19 11:53:51.843383 master-0 kubenswrapper[7454]: I0319 11:53:51.843327 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-cnibin\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:53:51.843524 master-0 kubenswrapper[7454]: I0319 11:53:51.843457 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-os-release\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:53:51.843524 master-0 kubenswrapper[7454]: I0319 11:53:51.843506 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw" Mar 19 11:53:51.843595 master-0 kubenswrapper[7454]: I0319 11:53:51.843541 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-system-cni-dir\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:53:51.843595 master-0 kubenswrapper[7454]: I0319 11:53:51.843575 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-run-systemd\") pod 
\"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.843725 master-0 kubenswrapper[7454]: I0319 11:53:51.843607 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-os-release\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:53:51.843725 master-0 kubenswrapper[7454]: I0319 11:53:51.843606 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/06f67c28-34fd-4356-92f0-edd0986ad34e-host-slash\") pod \"iptables-alerter-276t5\" (UID: \"06f67c28-34fd-4356-92f0-edd0986ad34e\") " pod="openshift-network-operator/iptables-alerter-276t5" Mar 19 11:53:51.843725 master-0 kubenswrapper[7454]: I0319 11:53:51.843648 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/85912908-c447-4868-871b-82c5eadbfdbe-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v" Mar 19 11:53:51.843725 master-0 kubenswrapper[7454]: I0319 11:53:51.843652 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/06f67c28-34fd-4356-92f0-edd0986ad34e-host-slash\") pod \"iptables-alerter-276t5\" (UID: \"06f67c28-34fd-4356-92f0-edd0986ad34e\") " pod="openshift-network-operator/iptables-alerter-276t5" Mar 19 11:53:51.843725 master-0 kubenswrapper[7454]: I0319 11:53:51.843689 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/85912908-c447-4868-871b-82c5eadbfdbe-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v" Mar 19 11:53:51.843725 master-0 kubenswrapper[7454]: I0319 11:53:51.843707 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-pr7gk\" (UID: \"b0f5939c-48b1-4d6c-9712-9128a78d603b\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 11:53:51.843725 master-0 kubenswrapper[7454]: I0319 11:53:51.843725 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" Mar 19 11:53:51.843725 master-0 kubenswrapper[7454]: E0319 11:53:51.843728 7454 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Mar 19 11:53:51.844015 master-0 kubenswrapper[7454]: I0319 11:53:51.843760 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-var-lib-cni-bin\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.844015 master-0 kubenswrapper[7454]: E0319 11:53:51.843776 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls podName:d3541cbe-3be0-40d3-89d2-b5937b6a8f47 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:52.3437585 +0000 UTC m=+1.974224443 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls") pod "machine-config-operator-84d549f6d5-lswqw" (UID: "d3541cbe-3be0-40d3-89d2-b5937b6a8f47") : secret "mco-proxy-tls" not found Mar 19 11:53:51.844015 master-0 kubenswrapper[7454]: I0319 11:53:51.843786 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-var-lib-cni-bin\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.844015 master-0 kubenswrapper[7454]: E0319 11:53:51.843863 7454 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 19 11:53:51.844015 master-0 kubenswrapper[7454]: E0319 11:53:51.843889 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics podName:b0f5939c-48b1-4d6c-9712-9128a78d603b nodeName:}" failed. No retries permitted until 2026-03-19 11:53:52.343881134 +0000 UTC m=+1.974347047 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-pr7gk" (UID: "b0f5939c-48b1-4d6c-9712-9128a78d603b") : secret "marketplace-operator-metrics" not found Mar 19 11:53:51.844015 master-0 kubenswrapper[7454]: I0319 11:53:51.843903 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-run-netns\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.844015 master-0 kubenswrapper[7454]: I0319 11:53:51.843919 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-cni-bin\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.844015 master-0 kubenswrapper[7454]: I0319 11:53:51.843935 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-system-cni-dir\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:53:51.844015 master-0 kubenswrapper[7454]: I0319 11:53:51.843952 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-run-ovn-kubernetes\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.844015 master-0 kubenswrapper[7454]: I0319 11:53:51.843969 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-run-ovn-kubernetes\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.844015 master-0 kubenswrapper[7454]: I0319 11:53:51.843998 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-run-netns\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.844411 master-0 kubenswrapper[7454]: I0319 11:53:51.844021 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:51.844411 master-0 kubenswrapper[7454]: E0319 11:53:51.844041 7454 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 19 11:53:51.844411 master-0 kubenswrapper[7454]: E0319 11:53:51.844066 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls 
podName:b80027fd-7b39-477a-a337-ff9bb08e7eeb nodeName:}" failed. No retries permitted until 2026-03-19 11:53:52.34405794 +0000 UTC m=+1.974523853 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls") pod "ingress-operator-66b84d69b-btppx" (UID: "b80027fd-7b39-477a-a337-ff9bb08e7eeb") : secret "metrics-tls" not found Mar 19 11:53:51.844411 master-0 kubenswrapper[7454]: I0319 11:53:51.844082 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert\") pod \"catalog-operator-68f85b4d6c-2trz4\" (UID: \"bdcdb23d-ef1f-45e2-b9ac-7abf707637b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4" Mar 19 11:53:51.844411 master-0 kubenswrapper[7454]: I0319 11:53:51.844106 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-cni-netd\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.844411 master-0 kubenswrapper[7454]: I0319 11:53:51.844134 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7b28a5a-7aec-4894-b8e3-63a4104207f7-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-dc825\" (UID: \"e7b28a5a-7aec-4894-b8e3-63a4104207f7\") " pod="openshift-controller-manager/controller-manager-f5df8899c-dc825" Mar 19 11:53:51.844411 master-0 kubenswrapper[7454]: E0319 11:53:51.844148 7454 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 19 11:53:51.844411 master-0 kubenswrapper[7454]: E0319 11:53:51.844192 7454 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 19 11:53:51.844411 master-0 kubenswrapper[7454]: E0319 11:53:51.844220 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert podName:19de6601-10d4-4112-a21f-0398d2b160d1 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:52.344198454 +0000 UTC m=+1.974664407 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert") pod "cluster-baremetal-operator-6f69995874-ftml6" (UID: "19de6601-10d4-4112-a21f-0398d2b160d1") : secret "cluster-baremetal-webhook-server-cert" not found Mar 19 11:53:51.844411 master-0 kubenswrapper[7454]: E0319 11:53:51.844251 7454 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 19 11:53:51.844411 master-0 kubenswrapper[7454]: E0319 11:53:51.844252 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert podName:87a3f546-e1c1-42a1-b80e-d45b6d5c0a04 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:52.344236975 +0000 UTC m=+1.974702928 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert") pod "olm-operator-5c9796789-8cldl" (UID: "87a3f546-e1c1-42a1-b80e-d45b6d5c0a04") : secret "olm-operator-serving-cert" not found Mar 19 11:53:51.844411 master-0 kubenswrapper[7454]: E0319 11:53:51.844274 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert podName:bdcdb23d-ef1f-45e2-b9ac-7abf707637b6 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:52.344267646 +0000 UTC m=+1.974733559 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert") pod "catalog-operator-68f85b4d6c-2trz4" (UID: "bdcdb23d-ef1f-45e2-b9ac-7abf707637b6") : secret "catalog-operator-serving-cert" not found Mar 19 11:53:51.844411 master-0 kubenswrapper[7454]: I0319 11:53:51.844291 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-cni-netd\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.844411 master-0 kubenswrapper[7454]: I0319 11:53:51.844295 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-run-systemd\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.844411 master-0 kubenswrapper[7454]: I0319 11:53:51.844158 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert\") pod \"olm-operator-5c9796789-8cldl\" (UID: \"87a3f546-e1c1-42a1-b80e-d45b6d5c0a04\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl" Mar 19 11:53:51.844411 master-0 kubenswrapper[7454]: I0319 11:53:51.844219 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-cni-bin\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.845024 master-0 kubenswrapper[7454]: I0319 11:53:51.844440 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-multus-conf-dir\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.845024 master-0 kubenswrapper[7454]: I0319 11:53:51.844497 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:51.845024 master-0 kubenswrapper[7454]: I0319 11:53:51.844514 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-multus-conf-dir\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.845024 master-0 kubenswrapper[7454]: I0319 11:53:51.844545 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e7b28a5a-7aec-4894-b8e3-63a4104207f7-client-ca\") pod \"controller-manager-f5df8899c-dc825\" (UID: \"e7b28a5a-7aec-4894-b8e3-63a4104207f7\") " pod="openshift-controller-manager/controller-manager-f5df8899c-dc825" Mar 19 11:53:51.845024 master-0 kubenswrapper[7454]: I0319 11:53:51.844674 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-run-netns\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.845024 master-0 kubenswrapper[7454]: E0319 11:53:51.844689 7454 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 19 11:53:51.845024 master-0 kubenswrapper[7454]: I0319 11:53:51.844730 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:51.845024 master-0 kubenswrapper[7454]: I0319 11:53:51.844746 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-run-netns\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.845024 master-0 kubenswrapper[7454]: E0319 11:53:51.844785 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls podName:19de6601-10d4-4112-a21f-0398d2b160d1 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:52.344762792 +0000 UTC m=+1.975228715 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-ftml6" (UID: "19de6601-10d4-4112-a21f-0398d2b160d1") : secret "cluster-baremetal-operator-tls" not found Mar 19 11:53:51.845024 master-0 kubenswrapper[7454]: I0319 11:53:51.844845 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-system-cni-dir\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.845024 master-0 kubenswrapper[7454]: I0319 11:53:51.844880 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs\") pod \"network-metrics-daemon-6t6sn\" (UID: \"398bcaca-1bea-4633-a78f-717e3d015ddd\") " pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:53:51.845024 master-0 kubenswrapper[7454]: E0319 11:53:51.844885 7454 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 19 11:53:51.845024 master-0 kubenswrapper[7454]: I0319 11:53:51.844927 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-system-cni-dir\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.845024 master-0 kubenswrapper[7454]: I0319 11:53:51.844927 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7b28a5a-7aec-4894-b8e3-63a4104207f7-serving-cert\") pod \"controller-manager-f5df8899c-dc825\" (UID: \"e7b28a5a-7aec-4894-b8e3-63a4104207f7\") " pod="openshift-controller-manager/controller-manager-f5df8899c-dc825" Mar 19 11:53:51.845024 master-0 kubenswrapper[7454]: E0319 11:53:51.844956 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls podName:63c12a89-1b49-4eba-8f5a-551b10d2246b nodeName:}" failed. No retries permitted until 2026-03-19 11:53:52.344931227 +0000 UTC m=+1.975397190 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-zfsqt" (UID: "63c12a89-1b49-4eba-8f5a-551b10d2246b") : secret "node-tuning-operator-tls" not found Mar 19 11:53:51.845550 master-0 kubenswrapper[7454]: E0319 11:53:51.845060 7454 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 19 11:53:51.845550 master-0 kubenswrapper[7454]: E0319 11:53:51.845133 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs podName:398bcaca-1bea-4633-a78f-717e3d015ddd nodeName:}" failed. No retries permitted until 2026-03-19 11:53:52.345115323 +0000 UTC m=+1.975581236 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs") pod "network-metrics-daemon-6t6sn" (UID: "398bcaca-1bea-4633-a78f-717e3d015ddd") : secret "metrics-daemon-secret" not found Mar 19 11:53:51.845550 master-0 kubenswrapper[7454]: I0319 11:53:51.845175 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-92c5d\" (UID: \"7241bf11-192e-47db-9d80-2324938ed34c\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d" Mar 19 11:53:51.845550 master-0 kubenswrapper[7454]: I0319 11:53:51.845206 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:51.845550 master-0 kubenswrapper[7454]: I0319 11:53:51.845227 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-run-openvswitch\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.845550 master-0 kubenswrapper[7454]: I0319 11:53:51.845244 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-var-lib-cni-multus\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.845550 master-0 kubenswrapper[7454]: E0319 11:53:51.845371 7454 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 19 11:53:51.845550 master-0 kubenswrapper[7454]: I0319 11:53:51.845397 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-node-log\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.845550 master-0 kubenswrapper[7454]: I0319 11:53:51.845419 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-var-lib-cni-multus\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.845550 master-0 kubenswrapper[7454]: I0319 11:53:51.845442 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-run-openvswitch\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.845550 master-0 kubenswrapper[7454]: I0319 11:53:51.845450 7454 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-node-log\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.845550 master-0 kubenswrapper[7454]: E0319 11:53:51.845461 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert podName:63c12a89-1b49-4eba-8f5a-551b10d2246b nodeName:}" failed. No retries permitted until 2026-03-19 11:53:52.345437263 +0000 UTC m=+1.975903176 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-zfsqt" (UID: "63c12a89-1b49-4eba-8f5a-551b10d2246b") : secret "performance-addon-operator-webhook-cert" not found Mar 19 11:53:51.845550 master-0 kubenswrapper[7454]: E0319 11:53:51.845492 7454 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 19 11:53:51.845550 master-0 kubenswrapper[7454]: E0319 11:53:51.845516 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls podName:7241bf11-192e-47db-9d80-2324938ed34c nodeName:}" failed. No retries permitted until 2026-03-19 11:53:52.345509346 +0000 UTC m=+1.975975249 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-92c5d" (UID: "7241bf11-192e-47db-9d80-2324938ed34c") : secret "cluster-monitoring-operator-tls" not found Mar 19 11:53:51.845550 master-0 kubenswrapper[7454]: I0319 11:53:51.845501 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-log-socket\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.845550 master-0 kubenswrapper[7454]: I0319 11:53:51.845553 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7b28a5a-7aec-4894-b8e3-63a4104207f7-config\") pod \"controller-manager-f5df8899c-dc825\" (UID: \"e7b28a5a-7aec-4894-b8e3-63a4104207f7\") " pod="openshift-controller-manager/controller-manager-f5df8899c-dc825" Mar 19 11:53:51.846168 master-0 kubenswrapper[7454]: I0319 11:53:51.845587 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-systemd-units\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.846168 master-0 kubenswrapper[7454]: I0319 11:53:51.845622 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g6zz\" (UniqueName: \"kubernetes.io/projected/616dbb32-6b65-4e44-a217-6b1be2844cc9-kube-api-access-7g6zz\") pod \"network-check-target-v66z4\" (UID: \"616dbb32-6b65-4e44-a217-6b1be2844cc9\") " pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 11:53:51.846168 
master-0 kubenswrapper[7454]: I0319 11:53:51.845648 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-run-ovn\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.846168 master-0 kubenswrapper[7454]: I0319 11:53:51.845669 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-os-release\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.846168 master-0 kubenswrapper[7454]: I0319 11:53:51.845696 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-fz8cg\" (UID: \"806a4c30-7b93-4430-86da-f9e1f4f2d206\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg" Mar 19 11:53:51.846168 master-0 kubenswrapper[7454]: I0319 11:53:51.845719 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6" Mar 19 11:53:51.846168 master-0 kubenswrapper[7454]: I0319 11:53:51.845725 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-systemd-units\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.846168 master-0 kubenswrapper[7454]: I0319 11:53:51.845741 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-cnibin\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.846168 master-0 kubenswrapper[7454]: I0319 11:53:51.845761 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-log-socket\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.846168 master-0 kubenswrapper[7454]: I0319 11:53:51.845766 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-etc-openvswitch\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.846168 master-0 kubenswrapper[7454]: I0319 11:53:51.845817 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " 
pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:53:51.846168 master-0 kubenswrapper[7454]: E0319 11:53:51.845926 7454 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 19 11:53:51.846168 master-0 kubenswrapper[7454]: E0319 11:53:51.845956 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs podName:806a4c30-7b93-4430-86da-f9e1f4f2d206 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:52.34594855 +0000 UTC m=+1.976414463 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-fz8cg" (UID: "806a4c30-7b93-4430-86da-f9e1f4f2d206") : secret "multus-admission-controller-secret" not found Mar 19 11:53:51.846168 master-0 kubenswrapper[7454]: I0319 11:53:51.845958 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:53:51.846168 master-0 kubenswrapper[7454]: I0319 11:53:51.846005 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-etc-openvswitch\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.846168 master-0 kubenswrapper[7454]: I0319 11:53:51.846161 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-cnibin\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.846708 master-0 kubenswrapper[7454]: E0319 11:53:51.846230 7454 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 19 11:53:51.846708 master-0 kubenswrapper[7454]: E0319 11:53:51.846268 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls podName:82b98dca-59f9-42be-94ca-4a2a2b6fea0f nodeName:}" failed. No retries permitted until 2026-03-19 11:53:52.346256278 +0000 UTC m=+1.976722201 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-g6sn6" (UID: "82b98dca-59f9-42be-94ca-4a2a2b6fea0f") : secret "image-registry-operator-tls" not found Mar 19 11:53:51.846708 master-0 kubenswrapper[7454]: I0319 11:53:51.846308 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-run-ovn\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.846708 master-0 kubenswrapper[7454]: I0319 11:53:51.846359 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-os-release\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:51.850877 master-0 kubenswrapper[7454]: I0319 11:53:51.850055 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 19 11:53:51.856507 master-0 kubenswrapper[7454]: I0319 11:53:51.856467 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/8414b6b0-ee16-47a5-982b-ee58b136cfcf-ovnkube-identity-cm\") pod \"network-node-identity-wd4nx\" (UID: \"8414b6b0-ee16-47a5-982b-ee58b136cfcf\") " pod="openshift-network-node-identity/network-node-identity-wd4nx" Mar 19 11:53:51.870623 master-0 kubenswrapper[7454]: I0319 11:53:51.870578 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 19 11:53:51.889669 master-0 kubenswrapper[7454]: I0319 11:53:51.889445 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 19 11:53:51.913815 master-0 kubenswrapper[7454]: I0319 11:53:51.909531 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 19 11:53:51.917440 master-0 kubenswrapper[7454]: I0319 11:53:51.917380 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9d2db220-4d5b-4819-a910-b186e1e9fb3e-ovnkube-script-lib\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.928849 master-0 kubenswrapper[7454]: I0319 11:53:51.928768 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 19 11:53:51.934532 master-0 kubenswrapper[7454]: I0319 11:53:51.934491 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9d2db220-4d5b-4819-a910-b186e1e9fb3e-ovn-node-metrics-cert\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:51.948813 master-0 kubenswrapper[7454]: I0319 11:53:51.948760 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 19 11:53:51.969923 master-0 kubenswrapper[7454]: I0319 
11:53:51.969881 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 19 11:53:51.990945 master-0 kubenswrapper[7454]: I0319 11:53:51.990899 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 19 11:53:51.994674 master-0 kubenswrapper[7454]: E0319 11:53:51.994648 7454 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Mar 19 11:53:51.994843 master-0 kubenswrapper[7454]: E0319 11:53:51.994829 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e7b28a5a-7aec-4894-b8e3-63a4104207f7-proxy-ca-bundles podName:e7b28a5a-7aec-4894-b8e3-63a4104207f7 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:52.49478551 +0000 UTC m=+2.125251423 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e7b28a5a-7aec-4894-b8e3-63a4104207f7-proxy-ca-bundles") pod "controller-manager-f5df8899c-dc825" (UID: "e7b28a5a-7aec-4894-b8e3-63a4104207f7") : configmap "openshift-global-ca" not found Mar 19 11:53:52.009972 master-0 kubenswrapper[7454]: I0319 11:53:52.009935 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 19 11:53:52.030107 master-0 kubenswrapper[7454]: I0319 11:53:52.030057 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 19 11:53:52.035958 master-0 kubenswrapper[7454]: E0319 11:53:52.035914 7454 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 19 11:53:52.036248 master-0 kubenswrapper[7454]: E0319 11:53:52.036217 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7b28a5a-7aec-4894-b8e3-63a4104207f7-serving-cert podName:e7b28a5a-7aec-4894-b8e3-63a4104207f7 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:52.536179607 +0000 UTC m=+2.166645570 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e7b28a5a-7aec-4894-b8e3-63a4104207f7-serving-cert") pod "controller-manager-f5df8899c-dc825" (UID: "e7b28a5a-7aec-4894-b8e3-63a4104207f7") : secret "serving-cert" not found Mar 19 11:53:52.049350 master-0 kubenswrapper[7454]: I0319 11:53:52.049315 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 19 11:53:52.056196 master-0 kubenswrapper[7454]: E0319 11:53:52.056159 7454 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Mar 19 11:53:52.056432 master-0 kubenswrapper[7454]: E0319 11:53:52.056408 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e7b28a5a-7aec-4894-b8e3-63a4104207f7-config podName:e7b28a5a-7aec-4894-b8e3-63a4104207f7 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:52.55638518 +0000 UTC m=+2.186851133 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e7b28a5a-7aec-4894-b8e3-63a4104207f7-config") pod "controller-manager-f5df8899c-dc825" (UID: "e7b28a5a-7aec-4894-b8e3-63a4104207f7") : configmap "config" not found Mar 19 11:53:52.069594 master-0 kubenswrapper[7454]: I0319 11:53:52.069507 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 19 11:53:52.075566 master-0 kubenswrapper[7454]: E0319 11:53:52.075458 7454 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 19 11:53:52.075566 master-0 kubenswrapper[7454]: E0319 11:53:52.075555 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e7b28a5a-7aec-4894-b8e3-63a4104207f7-client-ca podName:e7b28a5a-7aec-4894-b8e3-63a4104207f7 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:52.57552573 +0000 UTC m=+2.205991673 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e7b28a5a-7aec-4894-b8e3-63a4104207f7-client-ca") pod "controller-manager-f5df8899c-dc825" (UID: "e7b28a5a-7aec-4894-b8e3-63a4104207f7") : configmap "client-ca" not found Mar 19 11:53:52.089933 master-0 kubenswrapper[7454]: I0319 11:53:52.089869 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 19 11:53:52.096336 master-0 kubenswrapper[7454]: I0319 11:53:52.096272 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/06f67c28-34fd-4356-92f0-edd0986ad34e-iptables-alerter-script\") pod \"iptables-alerter-276t5\" (UID: \"06f67c28-34fd-4356-92f0-edd0986ad34e\") " pod="openshift-network-operator/iptables-alerter-276t5" Mar 19 11:53:52.156639 master-0 kubenswrapper[7454]: I0319 11:53:52.156541 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1089ea24-add9-482e-9276-e6ded12052d7-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-qv4cg\" (UID: \"1089ea24-add9-482e-9276-e6ded12052d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg" Mar 19 11:53:52.177839 master-0 kubenswrapper[7454]: I0319 11:53:52.177744 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6" Mar 19 11:53:52.201658 master-0 kubenswrapper[7454]: I0319 11:53:52.201600 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h84l9\" (UniqueName: \"kubernetes.io/projected/f08c5930-44f0-48e4-80dd-2563f2733b2f-kube-api-access-h84l9\") pod \"openshift-apiserver-operator-d65958b8-mjs7x\" (UID: \"f08c5930-44f0-48e4-80dd-2563f2733b2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x" Mar 19 11:53:52.217534 master-0 kubenswrapper[7454]: I0319 11:53:52.217443 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b80027fd-7b39-477a-a337-ff9bb08e7eeb-bound-sa-token\") pod \"ingress-operator-66b84d69b-btppx\" (UID: 
\"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" Mar 19 11:53:52.231933 master-0 kubenswrapper[7454]: I0319 11:53:52.231838 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpdts\" (UniqueName: \"kubernetes.io/projected/9702fc8c-4fe0-413b-b2d4-db23021d42b8-kube-api-access-tpdts\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" Mar 19 11:53:52.252613 master-0 kubenswrapper[7454]: I0319 11:53:52.252536 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnd9c\" (UniqueName: \"kubernetes.io/projected/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-kube-api-access-jnd9c\") pod \"catalog-operator-68f85b4d6c-2trz4\" (UID: \"bdcdb23d-ef1f-45e2-b9ac-7abf707637b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4" Mar 19 11:53:52.262163 master-0 kubenswrapper[7454]: I0319 11:53:52.262092 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-dc825"] Mar 19 11:53:52.262688 master-0 kubenswrapper[7454]: E0319 11:53:52.262613 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-jhdpf proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-f5df8899c-dc825" podUID="e7b28a5a-7aec-4894-b8e3-63a4104207f7" Mar 19 11:53:52.276705 master-0 kubenswrapper[7454]: I0319 11:53:52.276652 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d6d7c9966-r7q9d"] Mar 19 11:53:52.277160 master-0 kubenswrapper[7454]: E0319 11:53:52.277134 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="118dd8fa-f11f-4dda-96d7-f207e175b4da" containerName="prober" Mar 19 11:53:52.277299 master-0 kubenswrapper[7454]: I0319 11:53:52.277281 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="118dd8fa-f11f-4dda-96d7-f207e175b4da" containerName="prober" Mar 19 11:53:52.277406 master-0 kubenswrapper[7454]: E0319 11:53:52.277389 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9819a56-abb1-485c-b424-5c62e30d5afc" containerName="assisted-installer-controller" Mar 19 11:53:52.277501 master-0 kubenswrapper[7454]: I0319 11:53:52.277485 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9819a56-abb1-485c-b424-5c62e30d5afc" containerName="assisted-installer-controller" Mar 19 11:53:52.277725 master-0 kubenswrapper[7454]: I0319 11:53:52.277704 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9819a56-abb1-485c-b424-5c62e30d5afc" containerName="assisted-installer-controller" Mar 19 11:53:52.277878 master-0 kubenswrapper[7454]: I0319 11:53:52.277859 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="118dd8fa-f11f-4dda-96d7-f207e175b4da" containerName="prober" Mar 19 11:53:52.278003 master-0 kubenswrapper[7454]: I0319 11:53:52.277966 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xpc2\" (UniqueName: \"kubernetes.io/projected/19de6601-10d4-4112-a21f-0398d2b160d1-kube-api-access-6xpc2\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 
11:53:52.278654 master-0 kubenswrapper[7454]: I0319 11:53:52.278630 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d6d7c9966-r7q9d" Mar 19 11:53:52.286824 master-0 kubenswrapper[7454]: I0319 11:53:52.286699 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d6d7c9966-r7q9d"] Mar 19 11:53:52.309541 master-0 kubenswrapper[7454]: I0319 11:53:52.309510 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhqhb\" (UniqueName: \"kubernetes.io/projected/398bcaca-1bea-4633-a78f-717e3d015ddd-kube-api-access-fhqhb\") pod \"network-metrics-daemon-6t6sn\" (UID: \"398bcaca-1bea-4633-a78f-717e3d015ddd\") " pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:53:52.311954 master-0 kubenswrapper[7454]: I0319 11:53:52.311918 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwfg5\" (UniqueName: \"kubernetes.io/projected/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-kube-api-access-hwfg5\") pod \"olm-operator-5c9796789-8cldl\" (UID: \"87a3f546-e1c1-42a1-b80e-d45b6d5c0a04\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl" Mar 19 11:53:52.331523 master-0 kubenswrapper[7454]: I0319 11:53:52.331379 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5n89\" (UniqueName: \"kubernetes.io/projected/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f-kube-api-access-h5n89\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk\" (UID: \"d9ab6ec4-eec9-4d27-8b43-2aaf954f098f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk" Mar 19 11:53:52.341832 master-0 kubenswrapper[7454]: I0319 11:53:52.341761 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:53:52.350027 master-0 kubenswrapper[7454]: I0319 11:53:52.349979 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:53:52.352854 master-0 kubenswrapper[7454]: I0319 11:53:52.352786 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v" Mar 19 11:53:52.353061 master-0 kubenswrapper[7454]: E0319 11:53:52.352997 7454 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 19 11:53:52.353139 master-0 kubenswrapper[7454]: E0319 11:53:52.353117 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert podName:85912908-c447-4868-871b-82c5eadbfdbe nodeName:}" failed. No retries permitted until 2026-03-19 11:53:53.353093022 +0000 UTC m=+2.983558955 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert") pod "cluster-version-operator-56d8475767-gjj5v" (UID: "85912908-c447-4868-871b-82c5eadbfdbe") : secret "cluster-version-operator-serving-cert" not found Mar 19 11:53:52.353139 master-0 kubenswrapper[7454]: E0319 11:53:52.353125 7454 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Mar 19 11:53:52.353243 master-0 kubenswrapper[7454]: I0319 11:53:52.353008 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw" Mar 19 11:53:52.353284 master-0 kubenswrapper[7454]: E0319 11:53:52.353197 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls podName:d3541cbe-3be0-40d3-89d2-b5937b6a8f47 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:53.353178715 +0000 UTC m=+2.983644698 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls") pod "machine-config-operator-84d549f6d5-lswqw" (UID: "d3541cbe-3be0-40d3-89d2-b5937b6a8f47") : secret "mco-proxy-tls" not found Mar 19 11:53:52.353419 master-0 kubenswrapper[7454]: I0319 11:53:52.353366 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmq9j\" (UniqueName: \"kubernetes.io/projected/25be5572-c6f3-45df-8a9d-9d6f759200ac-kube-api-access-pmq9j\") pod \"route-controller-manager-5d6d7c9966-r7q9d\" (UID: \"25be5572-c6f3-45df-8a9d-9d6f759200ac\") " pod="openshift-route-controller-manager/route-controller-manager-5d6d7c9966-r7q9d" Mar 19 11:53:52.353478 master-0 kubenswrapper[7454]: I0319 11:53:52.353451 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" Mar 19 11:53:52.353519 master-0 kubenswrapper[7454]: I0319 11:53:52.353499 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-pr7gk\" (UID: \"b0f5939c-48b1-4d6c-9712-9128a78d603b\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 11:53:52.353581 master-0 kubenswrapper[7454]: E0319 11:53:52.353562 7454 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 19 11:53:52.353627 master-0 kubenswrapper[7454]: E0319 11:53:52.353594 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls podName:b80027fd-7b39-477a-a337-ff9bb08e7eeb nodeName:}" failed. No retries permitted until 2026-03-19 11:53:53.353585348 +0000 UTC m=+2.984051261 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls") pod "ingress-operator-66b84d69b-btppx" (UID: "b80027fd-7b39-477a-a337-ff9bb08e7eeb") : secret "metrics-tls" not found Mar 19 11:53:52.353627 master-0 kubenswrapper[7454]: I0319 11:53:52.353617 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:52.353712 master-0 kubenswrapper[7454]: I0319 11:53:52.353640 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert\") pod \"catalog-operator-68f85b4d6c-2trz4\" (UID: \"bdcdb23d-ef1f-45e2-b9ac-7abf707637b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4" Mar 19 11:53:52.353712 master-0 kubenswrapper[7454]: I0319 11:53:52.353665 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:52.353782 master-0 kubenswrapper[7454]: E0319 11:53:52.353722 7454 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 19 11:53:52.353846 master-0 kubenswrapper[7454]: E0319 11:53:52.353813 7454 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 19 11:53:52.353910 master-0 kubenswrapper[7454]: E0319 11:53:52.353884 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics podName:b0f5939c-48b1-4d6c-9712-9128a78d603b nodeName:}" failed. No retries permitted until 2026-03-19 11:53:53.353861417 +0000 UTC m=+2.984327330 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-pr7gk" (UID: "b0f5939c-48b1-4d6c-9712-9128a78d603b") : secret "marketplace-operator-metrics" not found Mar 19 11:53:52.353910 master-0 kubenswrapper[7454]: I0319 11:53:52.353874 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert\") pod \"olm-operator-5c9796789-8cldl\" (UID: \"87a3f546-e1c1-42a1-b80e-d45b6d5c0a04\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl" Mar 19 11:53:52.354010 master-0 kubenswrapper[7454]: E0319 11:53:52.353894 7454 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 19 11:53:52.354010 master-0 kubenswrapper[7454]: E0319 11:53:52.353938 7454 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 19 11:53:52.354010 master-0 kubenswrapper[7454]: E0319 11:53:52.353922 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert podName:19de6601-10d4-4112-a21f-0398d2b160d1 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:53.353912698 +0000 UTC m=+2.984378851 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert") pod "cluster-baremetal-operator-6f69995874-ftml6" (UID: "19de6601-10d4-4112-a21f-0398d2b160d1") : secret "cluster-baremetal-webhook-server-cert" not found Mar 19 11:53:52.354010 master-0 kubenswrapper[7454]: I0319 11:53:52.354009 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:52.354150 master-0 kubenswrapper[7454]: I0319 11:53:52.354034 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs\") pod \"network-metrics-daemon-6t6sn\" (UID: \"398bcaca-1bea-4633-a78f-717e3d015ddd\") " pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:53:52.354150 master-0 kubenswrapper[7454]: E0319 11:53:52.354048 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert podName:bdcdb23d-ef1f-45e2-b9ac-7abf707637b6 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:53.354034602 +0000 UTC m=+2.984500525 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert") pod "catalog-operator-68f85b4d6c-2trz4" (UID: "bdcdb23d-ef1f-45e2-b9ac-7abf707637b6") : secret "catalog-operator-serving-cert" not found Mar 19 11:53:52.354150 master-0 kubenswrapper[7454]: E0319 11:53:52.354055 7454 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 19 11:53:52.354150 master-0 kubenswrapper[7454]: E0319 11:53:52.354070 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls podName:19de6601-10d4-4112-a21f-0398d2b160d1 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:53.354060053 +0000 UTC m=+2.984526186 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-ftml6" (UID: "19de6601-10d4-4112-a21f-0398d2b160d1") : secret "cluster-baremetal-operator-tls" not found Mar 19 11:53:52.354150 master-0 kubenswrapper[7454]: E0319 11:53:52.354118 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert podName:87a3f546-e1c1-42a1-b80e-d45b6d5c0a04 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:53.354094114 +0000 UTC m=+2.984560027 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert") pod "olm-operator-5c9796789-8cldl" (UID: "87a3f546-e1c1-42a1-b80e-d45b6d5c0a04") : secret "olm-operator-serving-cert" not found Mar 19 11:53:52.354150 master-0 kubenswrapper[7454]: E0319 11:53:52.354146 7454 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 19 11:53:52.354387 master-0 kubenswrapper[7454]: I0319 11:53:52.354160 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-92c5d\" (UID: \"7241bf11-192e-47db-9d80-2324938ed34c\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d" Mar 19 11:53:52.354387 master-0 kubenswrapper[7454]: E0319 11:53:52.354180 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs podName:398bcaca-1bea-4633-a78f-717e3d015ddd nodeName:}" failed. No retries permitted until 2026-03-19 11:53:53.354171527 +0000 UTC m=+2.984637440 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs") pod "network-metrics-daemon-6t6sn" (UID: "398bcaca-1bea-4633-a78f-717e3d015ddd") : secret "metrics-daemon-secret" not found Mar 19 11:53:52.354387 master-0 kubenswrapper[7454]: E0319 11:53:52.354117 7454 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 19 11:53:52.354387 master-0 kubenswrapper[7454]: I0319 11:53:52.354197 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:52.354387 master-0 kubenswrapper[7454]: E0319 11:53:52.354226 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls podName:63c12a89-1b49-4eba-8f5a-551b10d2246b nodeName:}" failed. No retries permitted until 2026-03-19 11:53:53.354216268 +0000 UTC m=+2.984682191 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-zfsqt" (UID: "63c12a89-1b49-4eba-8f5a-551b10d2246b") : secret "node-tuning-operator-tls" not found Mar 19 11:53:52.354387 master-0 kubenswrapper[7454]: E0319 11:53:52.354230 7454 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 19 11:53:52.354387 master-0 kubenswrapper[7454]: E0319 11:53:52.354256 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls podName:7241bf11-192e-47db-9d80-2324938ed34c nodeName:}" failed. No retries permitted until 2026-03-19 11:53:53.354248489 +0000 UTC m=+2.984714392 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-92c5d" (UID: "7241bf11-192e-47db-9d80-2324938ed34c") : secret "cluster-monitoring-operator-tls" not found Mar 19 11:53:52.354387 master-0 kubenswrapper[7454]: I0319 11:53:52.354255 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25be5572-c6f3-45df-8a9d-9d6f759200ac-serving-cert\") pod \"route-controller-manager-5d6d7c9966-r7q9d\" (UID: \"25be5572-c6f3-45df-8a9d-9d6f759200ac\") " pod="openshift-route-controller-manager/route-controller-manager-5d6d7c9966-r7q9d" Mar 19 11:53:52.354387 master-0 kubenswrapper[7454]: E0319 11:53:52.354259 7454 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 19 11:53:52.354387 master-0 kubenswrapper[7454]: E0319 11:53:52.354296 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert podName:63c12a89-1b49-4eba-8f5a-551b10d2246b nodeName:}" failed. No retries permitted until 2026-03-19 11:53:53.35428865 +0000 UTC m=+2.984754573 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-zfsqt" (UID: "63c12a89-1b49-4eba-8f5a-551b10d2246b") : secret "performance-addon-operator-webhook-cert" not found Mar 19 11:53:52.354815 master-0 kubenswrapper[7454]: I0319 11:53:52.354404 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-fz8cg\" (UID: \"806a4c30-7b93-4430-86da-f9e1f4f2d206\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg" Mar 19 11:53:52.354815 master-0 kubenswrapper[7454]: I0319 11:53:52.354434 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6" Mar 19 11:53:52.354815 master-0 kubenswrapper[7454]: I0319 11:53:52.354527 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls\") pod \"dns-operator-9c5679d8f-z6kvm\" (UID: \"ab54833d-e57b-479d-b171-68155f6566f1\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-z6kvm" Mar 19 11:53:52.354815 master-0 kubenswrapper[7454]: I0319 11:53:52.354580 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25be5572-c6f3-45df-8a9d-9d6f759200ac-config\") pod \"route-controller-manager-5d6d7c9966-r7q9d\" (UID: \"25be5572-c6f3-45df-8a9d-9d6f759200ac\") " pod="openshift-route-controller-manager/route-controller-manager-5d6d7c9966-r7q9d" Mar 19 11:53:52.354815 master-0 kubenswrapper[7454]: E0319 11:53:52.354709 7454 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 19 11:53:52.354815 master-0 kubenswrapper[7454]: E0319 11:53:52.354753 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs podName:806a4c30-7b93-4430-86da-f9e1f4f2d206 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:53.354743745 +0000 UTC m=+2.985209648 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-fz8cg" (UID: "806a4c30-7b93-4430-86da-f9e1f4f2d206") : secret "multus-admission-controller-secret" not found Mar 19 11:53:52.354815 master-0 kubenswrapper[7454]: E0319 11:53:52.354773 7454 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 19 11:53:52.354815 master-0 kubenswrapper[7454]: E0319 11:53:52.354824 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls podName:ab54833d-e57b-479d-b171-68155f6566f1 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:53.354814077 +0000 UTC m=+2.985280010 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls") pod "dns-operator-9c5679d8f-z6kvm" (UID: "ab54833d-e57b-479d-b171-68155f6566f1") : secret "metrics-tls" not found Mar 19 11:53:52.355103 master-0 kubenswrapper[7454]: E0319 11:53:52.354888 7454 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 19 11:53:52.355103 master-0 kubenswrapper[7454]: E0319 11:53:52.354917 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls podName:82b98dca-59f9-42be-94ca-4a2a2b6fea0f nodeName:}" failed. No retries permitted until 2026-03-19 11:53:53.35490938 +0000 UTC m=+2.985375293 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-g6sn6" (UID: "82b98dca-59f9-42be-94ca-4a2a2b6fea0f") : secret "image-registry-operator-tls" not found Mar 19 11:53:52.355103 master-0 kubenswrapper[7454]: I0319 11:53:52.354945 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-6j2nj\" (UID: \"beb562de-402b-4d9f-b5ed-090b60847a95\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj" Mar 19 11:53:52.355103 master-0 kubenswrapper[7454]: I0319 11:53:52.354971 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/25be5572-c6f3-45df-8a9d-9d6f759200ac-client-ca\") pod \"route-controller-manager-5d6d7c9966-r7q9d\" (UID: \"25be5572-c6f3-45df-8a9d-9d6f759200ac\") " pod="openshift-route-controller-manager/route-controller-manager-5d6d7c9966-r7q9d" Mar 19 11:53:52.355103 master-0 kubenswrapper[7454]: E0319 11:53:52.355040 7454 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 19 11:53:52.355271 master-0 kubenswrapper[7454]: E0319 11:53:52.355111 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert podName:beb562de-402b-4d9f-b5ed-090b60847a95 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:53.355097976 +0000 UTC m=+2.985563899 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-6j2nj" (UID: "beb562de-402b-4d9f-b5ed-090b60847a95") : secret "package-server-manager-serving-cert" not found Mar 19 11:53:52.361106 master-0 kubenswrapper[7454]: I0319 11:53:52.361059 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06df1b1b-154e-46f9-aee0-79a137c6c928-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-ptqdh\" (UID: \"06df1b1b-154e-46f9-aee0-79a137c6c928\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-ptqdh" Mar 19 11:53:52.374705 master-0 kubenswrapper[7454]: I0319 11:53:52.374661 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-864rg\" (UniqueName: \"kubernetes.io/projected/8414b6b0-ee16-47a5-982b-ee58b136cfcf-kube-api-access-864rg\") pod \"network-node-identity-wd4nx\" (UID: \"8414b6b0-ee16-47a5-982b-ee58b136cfcf\") " pod="openshift-network-node-identity/network-node-identity-wd4nx" Mar 19 11:53:52.398830 master-0 kubenswrapper[7454]: I0319 11:53:52.398734 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x252z\" (UniqueName: \"kubernetes.io/projected/aef8e03f-0363-4e13-b7ca-4fa871d77c62-kube-api-access-x252z\") pod \"openshift-config-operator-95bf4f4d-nhvl4\" (UID: \"aef8e03f-0363-4e13-b7ca-4fa871d77c62\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" Mar 19 11:53:52.418244 master-0 kubenswrapper[7454]: I0319 11:53:52.418192 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khv2z\" (UniqueName: \"kubernetes.io/projected/a7747954-a222-4809-8656-818203b55ee8-kube-api-access-khv2z\") pod \"csi-snapshot-controller-operator-5f5d689c6b-2chdm\" (UID: \"a7747954-a222-4809-8656-818203b55ee8\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-2chdm" Mar 19 11:53:52.421721 master-0 kubenswrapper[7454]: I0319 11:53:52.421685 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl6d7\" (UniqueName: \"kubernetes.io/projected/ab54833d-e57b-479d-b171-68155f6566f1-kube-api-access-gl6d7\") pod \"dns-operator-9c5679d8f-z6kvm\" (UID: \"ab54833d-e57b-479d-b171-68155f6566f1\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-z6kvm" Mar 19 11:53:52.446419 master-0 kubenswrapper[7454]: I0319 11:53:52.446369 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5bmd\" (UniqueName: \"kubernetes.io/projected/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-kube-api-access-c5bmd\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6" Mar 19 11:53:52.456087 master-0 kubenswrapper[7454]: I0319 11:53:52.456053 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/25be5572-c6f3-45df-8a9d-9d6f759200ac-client-ca\") pod \"route-controller-manager-5d6d7c9966-r7q9d\" (UID: \"25be5572-c6f3-45df-8a9d-9d6f759200ac\") " pod="openshift-route-controller-manager/route-controller-manager-5d6d7c9966-r7q9d" Mar 19 11:53:52.456301 master-0 kubenswrapper[7454]: I0319 11:53:52.456284 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmq9j\" (UniqueName: \"kubernetes.io/projected/25be5572-c6f3-45df-8a9d-9d6f759200ac-kube-api-access-pmq9j\") pod \"route-controller-manager-5d6d7c9966-r7q9d\" (UID: \"25be5572-c6f3-45df-8a9d-9d6f759200ac\") " pod="openshift-route-controller-manager/route-controller-manager-5d6d7c9966-r7q9d" Mar 19 11:53:52.456685 master-0 kubenswrapper[7454]: I0319 11:53:52.456668 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25be5572-c6f3-45df-8a9d-9d6f759200ac-serving-cert\") pod \"route-controller-manager-5d6d7c9966-r7q9d\" (UID: \"25be5572-c6f3-45df-8a9d-9d6f759200ac\") " pod="openshift-route-controller-manager/route-controller-manager-5d6d7c9966-r7q9d" Mar 19 11:53:52.456991 master-0 kubenswrapper[7454]: I0319 11:53:52.456945 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25be5572-c6f3-45df-8a9d-9d6f759200ac-config\") pod \"route-controller-manager-5d6d7c9966-r7q9d\" (UID: \"25be5572-c6f3-45df-8a9d-9d6f759200ac\") " pod="openshift-route-controller-manager/route-controller-manager-5d6d7c9966-r7q9d" Mar 19 11:53:52.463441 master-0 kubenswrapper[7454]: I0319 11:53:52.463392 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsk9d\" (UniqueName: \"kubernetes.io/projected/9ed2dbd1-aec4-4009-917a-933533912ab5-kube-api-access-gsk9d\") pod \"openshift-controller-manager-operator-8c94f4649-gx4w8\" (UID: \"9ed2dbd1-aec4-4009-917a-933533912ab5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8" Mar 19 11:53:52.480848 master-0 kubenswrapper[7454]: I0319 11:53:52.480812 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs4jf\" (UniqueName: \"kubernetes.io/projected/b80027fd-7b39-477a-a337-ff9bb08e7eeb-kube-api-access-hs4jf\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" Mar 19 11:53:52.494118 master-0 kubenswrapper[7454]: I0319 11:53:52.493762 7454 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 19 11:53:52.504278 master-0 kubenswrapper[7454]: I0319 11:53:52.504239 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tqdb\" (UniqueName: \"kubernetes.io/projected/b0f5939c-48b1-4d6c-9712-9128a78d603b-kube-api-access-6tqdb\") pod \"marketplace-operator-89ccd998f-pr7gk\" (UID: \"b0f5939c-48b1-4d6c-9712-9128a78d603b\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 11:53:52.529566 master-0 kubenswrapper[7454]: I0319 11:53:52.529519 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcxqj\" (UniqueName: \"kubernetes.io/projected/bf226d89-450d-4876-a113-345632b94ee9-kube-api-access-wcxqj\") pod \"ovnkube-control-plane-57f769d897-f6m2t\" (UID: \"bf226d89-450d-4876-a113-345632b94ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t" Mar 19 11:53:52.547105 master-0 kubenswrapper[7454]: I0319 11:53:52.547047 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv6bc\" (UniqueName: \"kubernetes.io/projected/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-kube-api-access-pv6bc\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw" Mar 19 11:53:52.559550 master-0 kubenswrapper[7454]: I0319 11:53:52.559491 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7b28a5a-7aec-4894-b8e3-63a4104207f7-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-dc825\" (UID: \"e7b28a5a-7aec-4894-b8e3-63a4104207f7\") " pod="openshift-controller-manager/controller-manager-f5df8899c-dc825" Mar 19 11:53:52.559725 master-0 kubenswrapper[7454]: I0319 11:53:52.559693 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7b28a5a-7aec-4894-b8e3-63a4104207f7-serving-cert\") pod \"controller-manager-f5df8899c-dc825\" (UID: \"e7b28a5a-7aec-4894-b8e3-63a4104207f7\") " pod="openshift-controller-manager/controller-manager-f5df8899c-dc825" Mar 19 11:53:52.559772 master-0 kubenswrapper[7454]: I0319 11:53:52.559747 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7b28a5a-7aec-4894-b8e3-63a4104207f7-config\") pod \"controller-manager-f5df8899c-dc825\" (UID: \"e7b28a5a-7aec-4894-b8e3-63a4104207f7\") " pod="openshift-controller-manager/controller-manager-f5df8899c-dc825" Mar 19 11:53:52.560270 master-0 kubenswrapper[7454]: E0319 11:53:52.560246 7454 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 19 11:53:52.560414 master-0 kubenswrapper[7454]: E0319 11:53:52.560393 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7b28a5a-7aec-4894-b8e3-63a4104207f7-serving-cert podName:e7b28a5a-7aec-4894-b8e3-63a4104207f7 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:53.560370564 +0000 UTC m=+3.190836487 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e7b28a5a-7aec-4894-b8e3-63a4104207f7-serving-cert") pod "controller-manager-f5df8899c-dc825" (UID: "e7b28a5a-7aec-4894-b8e3-63a4104207f7") : secret "serving-cert" not found Mar 19 11:53:52.561078 master-0 kubenswrapper[7454]: I0319 11:53:52.561037 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7b28a5a-7aec-4894-b8e3-63a4104207f7-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-dc825\" (UID: \"e7b28a5a-7aec-4894-b8e3-63a4104207f7\") " pod="openshift-controller-manager/controller-manager-f5df8899c-dc825" Mar 19 11:53:52.561370 master-0 kubenswrapper[7454]: I0319 11:53:52.561333 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7b28a5a-7aec-4894-b8e3-63a4104207f7-config\") pod \"controller-manager-f5df8899c-dc825\" (UID: \"e7b28a5a-7aec-4894-b8e3-63a4104207f7\") " pod="openshift-controller-manager/controller-manager-f5df8899c-dc825" Mar 19 11:53:52.566059 master-0 kubenswrapper[7454]: I0319 11:53:52.566018 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4hsp\" (UniqueName: \"kubernetes.io/projected/fe245927-c937-4ec7-ab83-4900bade72cf-kube-api-access-s4hsp\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 11:53:52.587681 master-0 kubenswrapper[7454]: I0319 11:53:52.587554 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p6vn\" (UniqueName: \"kubernetes.io/projected/284768b8-9d70-4cf7-bace-8adc6b587186-kube-api-access-8p6vn\") pod \"network-operator-7bd846bfc4-nb8bk\" (UID: \"284768b8-9d70-4cf7-bace-8adc6b587186\") " pod="openshift-network-operator/network-operator-7bd846bfc4-nb8bk" Mar 19 11:53:52.600509 master-0 kubenswrapper[7454]: I0319 11:53:52.600466 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wshb2\" (UniqueName: \"kubernetes.io/projected/9d2db220-4d5b-4819-a910-b186e1e9fb3e-kube-api-access-wshb2\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:52.629298 master-0 kubenswrapper[7454]: I0319 11:53:52.629239 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hq8f\" (UniqueName: \"kubernetes.io/projected/661b8957-a890-4032-9e57-45e2e0b35249-kube-api-access-8hq8f\") pod \"service-ca-operator-b865698dc-sxsxt\" (UID: \"661b8957-a890-4032-9e57-45e2e0b35249\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt" Mar 19 11:53:52.643706 master-0 kubenswrapper[7454]: I0319 11:53:52.643656 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5mkm\" (UniqueName: \"kubernetes.io/projected/7241bf11-192e-47db-9d80-2324938ed34c-kube-api-access-s5mkm\") pod \"cluster-monitoring-operator-58845fbb57-92c5d\" (UID: \"7241bf11-192e-47db-9d80-2324938ed34c\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d" Mar 19 11:53:52.661207 master-0 kubenswrapper[7454]: I0319 11:53:52.661172 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e7b28a5a-7aec-4894-b8e3-63a4104207f7-client-ca\") pod \"controller-manager-f5df8899c-dc825\" (UID: \"e7b28a5a-7aec-4894-b8e3-63a4104207f7\") " pod="openshift-controller-manager/controller-manager-f5df8899c-dc825" Mar 19 11:53:52.661374 master-0 kubenswrapper[7454]: E0319 11:53:52.661289 7454 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 19 11:53:52.661421 master-0 kubenswrapper[7454]: E0319 11:53:52.661375 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e7b28a5a-7aec-4894-b8e3-63a4104207f7-client-ca podName:e7b28a5a-7aec-4894-b8e3-63a4104207f7 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:53.661350437 +0000 UTC m=+3.291816370 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e7b28a5a-7aec-4894-b8e3-63a4104207f7-client-ca") pod "controller-manager-f5df8899c-dc825" (UID: "e7b28a5a-7aec-4894-b8e3-63a4104207f7") : configmap "client-ca" not found Mar 19 11:53:52.662847 master-0 kubenswrapper[7454]: I0319 11:53:52.662811 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdpj4\" (UniqueName: \"kubernetes.io/projected/06f67c28-34fd-4356-92f0-edd0986ad34e-kube-api-access-bdpj4\") pod \"iptables-alerter-276t5\" (UID: \"06f67c28-34fd-4356-92f0-edd0986ad34e\") " pod="openshift-network-operator/iptables-alerter-276t5" Mar 19 11:53:52.686989 master-0 kubenswrapper[7454]: I0319 11:53:52.686949 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2mdn\" (UniqueName: \"kubernetes.io/projected/944eac68-e72b-4aed-b5dc-d7d9703178a3-kube-api-access-m2mdn\") pod \"csi-snapshot-controller-64854d9cff-6m654\" (UID: \"944eac68-e72b-4aed-b5dc-d7d9703178a3\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" Mar 19 11:53:52.710283 master-0 kubenswrapper[7454]: I0319 11:53:52.709903 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shfs6\" (UniqueName: \"kubernetes.io/projected/7044a7b3-4fac-40af-a31c-054a1a1db26b-kube-api-access-shfs6\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 11:53:52.725368 master-0 kubenswrapper[7454]: I0319 11:53:52.725308 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85912908-c447-4868-871b-82c5eadbfdbe-kube-api-access\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v" Mar 19 11:53:52.746453 master-0 kubenswrapper[7454]: I0319 11:53:52.741574 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6wm6\" (UniqueName: \"kubernetes.io/projected/d3017b5e-178e-49de-89d2-817a18398203-kube-api-access-b6wm6\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" Mar 19 11:53:52.748875 master-0 kubenswrapper[7454]: I0319 11:53:52.748623 7454 request.go:700] Waited for 1.008803576s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ac/token Mar 19 11:53:52.762882 master-0 kubenswrapper[7454]: I0319 11:53:52.762567 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfl29\" (UniqueName: \"kubernetes.io/projected/806a4c30-7b93-4430-86da-f9e1f4f2d206-kube-api-access-dfl29\") pod \"multus-admission-controller-5dbbb8b86f-fz8cg\" (UID: \"806a4c30-7b93-4430-86da-f9e1f4f2d206\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg" Mar 19 11:53:52.781132 master-0 kubenswrapper[7454]: I0319 11:53:52.781064 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mr6d\" (UniqueName: \"kubernetes.io/projected/beb562de-402b-4d9f-b5ed-090b60847a95-kube-api-access-9mr6d\") pod \"package-server-manager-7b95f86987-6j2nj\" (UID: \"beb562de-402b-4d9f-b5ed-090b60847a95\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj" Mar 19 11:53:52.801871 master-0 kubenswrapper[7454]: I0319 11:53:52.801770 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86r6z\" (UniqueName: \"kubernetes.io/projected/d975e831-7348-41b9-9622-f4a503674c38-kube-api-access-86r6z\") pod \"migrator-8487694857-99fgs\" (UID: \"d975e831-7348-41b9-9622-f4a503674c38\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-99fgs" Mar 19 11:53:52.802680 master-0 kubenswrapper[7454]: I0319 11:53:52.802632 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-dc825" Mar 19 11:53:52.810184 master-0 kubenswrapper[7454]: I0319 11:53:52.810133 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-dc825" Mar 19 11:53:52.811601 master-0 kubenswrapper[7454]: I0319 11:53:52.811519 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" Mar 19 11:53:52.821355 master-0 kubenswrapper[7454]: I0319 11:53:52.821304 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2151eb84-177e-459c-be71-f48465323ac2-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-fjn4b\" (UID: \"2151eb84-177e-459c-be71-f48465323ac2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fjn4b" Mar 19 11:53:52.851130 master-0 kubenswrapper[7454]: I0319 11:53:52.851079 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bst2w\" (UniqueName: \"kubernetes.io/projected/63c12a89-1b49-4eba-8f5a-551b10d2246b-kube-api-access-bst2w\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:52.851299 master-0 kubenswrapper[7454]: I0319 11:53:52.851266 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-8487694857-99fgs" Mar 19 11:53:52.881672 master-0 kubenswrapper[7454]: I0319 11:53:52.881391 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npc2t\" (UniqueName: \"kubernetes.io/projected/c2dbd8b3-0e02-4747-a166-80aa6a94b060-kube-api-access-npc2t\") pod \"cluster-olm-operator-67dcd4998-cg9pq\" (UID: \"c2dbd8b3-0e02-4747-a166-80aa6a94b060\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cg9pq" Mar 19 11:53:52.884822 master-0 kubenswrapper[7454]: I0319 11:53:52.884706 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zntzt\" (UniqueName: \"kubernetes.io/projected/0f97d998-530c-4d9d-a030-ca1d9d2d4490-kube-api-access-zntzt\") pod \"cluster-storage-operator-7d87854d6-6wzws\" (UID: \"0f97d998-530c-4d9d-a030-ca1d9d2d4490\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-6wzws" Mar 19 11:53:52.903230 master-0 kubenswrapper[7454]: E0319 11:53:52.903154 7454 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 19 11:53:52.917034 master-0 kubenswrapper[7454]: E0319 11:53:52.916980 7454 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:53:52.938842 master-0 kubenswrapper[7454]: E0319 11:53:52.938790 7454 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 19 11:53:52.970989 master-0 kubenswrapper[7454]: I0319 11:53:52.969182 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7b28a5a-7aec-4894-b8e3-63a4104207f7-proxy-ca-bundles\") pod \"e7b28a5a-7aec-4894-b8e3-63a4104207f7\" (UID: \"e7b28a5a-7aec-4894-b8e3-63a4104207f7\") " Mar 19 11:53:52.970989 master-0 kubenswrapper[7454]: I0319 11:53:52.969295 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7b28a5a-7aec-4894-b8e3-63a4104207f7-config\") pod \"e7b28a5a-7aec-4894-b8e3-63a4104207f7\" (UID: \"e7b28a5a-7aec-4894-b8e3-63a4104207f7\") " Mar 19 11:53:52.970989 master-0 kubenswrapper[7454]: I0319 11:53:52.969947 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7b28a5a-7aec-4894-b8e3-63a4104207f7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e7b28a5a-7aec-4894-b8e3-63a4104207f7" (UID: "e7b28a5a-7aec-4894-b8e3-63a4104207f7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 11:53:52.970989 master-0 kubenswrapper[7454]: I0319 11:53:52.970127 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7b28a5a-7aec-4894-b8e3-63a4104207f7-config" (OuterVolumeSpecName: "config") pod "e7b28a5a-7aec-4894-b8e3-63a4104207f7" (UID: "e7b28a5a-7aec-4894-b8e3-63a4104207f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 11:53:52.970989 master-0 kubenswrapper[7454]: I0319 11:53:52.970423 7454 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7b28a5a-7aec-4894-b8e3-63a4104207f7-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 19 11:53:52.970989 master-0 kubenswrapper[7454]: I0319 11:53:52.970443 7454 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7b28a5a-7aec-4894-b8e3-63a4104207f7-config\") on node \"master-0\" DevicePath \"\"" Mar 19 11:53:52.977553 master-0 kubenswrapper[7454]: I0319 11:53:52.975287 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:52.977553 master-0 kubenswrapper[7454]: I0319 11:53:52.975388 7454 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 19 11:53:52.982942 master-0 kubenswrapper[7454]: I0319 11:53:52.982883 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhdpf\" (UniqueName: \"kubernetes.io/projected/e7b28a5a-7aec-4894-b8e3-63a4104207f7-kube-api-access-jhdpf\") pod \"controller-manager-f5df8899c-dc825\" (UID: \"e7b28a5a-7aec-4894-b8e3-63a4104207f7\") " pod="openshift-controller-manager/controller-manager-f5df8899c-dc825" Mar 19 11:53:53.002733 master-0 kubenswrapper[7454]: I0319 11:53:53.002687 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:53.009585 master-0 kubenswrapper[7454]: I0319 11:53:53.009509 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 19 11:53:53.014106 master-0 kubenswrapper[7454]: I0319 11:53:53.014067 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g6zz\" (UniqueName: \"kubernetes.io/projected/616dbb32-6b65-4e44-a217-6b1be2844cc9-kube-api-access-7g6zz\") pod \"network-check-target-v66z4\" (UID: \"616dbb32-6b65-4e44-a217-6b1be2844cc9\") " pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 11:53:53.020750 master-0 kubenswrapper[7454]: I0319 11:53:53.020690 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25be5572-c6f3-45df-8a9d-9d6f759200ac-config\") pod \"route-controller-manager-5d6d7c9966-r7q9d\" (UID: \"25be5572-c6f3-45df-8a9d-9d6f759200ac\") " pod="openshift-route-controller-manager/route-controller-manager-5d6d7c9966-r7q9d" Mar 19 11:53:53.033363 master-0 kubenswrapper[7454]: I0319 11:53:53.033282 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 19 11:53:53.050233 master-0 kubenswrapper[7454]: I0319 11:53:53.050153 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 19 11:53:53.053264 master-0 kubenswrapper[7454]: I0319 11:53:53.053230 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:53:53.053345 master-0 kubenswrapper[7454]: I0319 11:53:53.053270 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-8487694857-99fgs"] Mar 19 11:53:53.056934 master-0 kubenswrapper[7454]: E0319 11:53:53.056913 7454 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 19 11:53:53.056993 master-0 kubenswrapper[7454]: E0319 11:53:53.056979 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/25be5572-c6f3-45df-8a9d-9d6f759200ac-serving-cert podName:25be5572-c6f3-45df-8a9d-9d6f759200ac nodeName:}" failed. No retries permitted until 2026-03-19 11:53:53.556960358 +0000 UTC m=+3.187426261 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/25be5572-c6f3-45df-8a9d-9d6f759200ac-serving-cert") pod "route-controller-manager-5d6d7c9966-r7q9d" (UID: "25be5572-c6f3-45df-8a9d-9d6f759200ac") : secret "serving-cert" not found Mar 19 11:53:53.057109 master-0 kubenswrapper[7454]: I0319 11:53:53.057068 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:53:53.071801 master-0 kubenswrapper[7454]: W0319 11:53:53.071634 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd975e831_7348_41b9_9622_f4a503674c38.slice/crio-b5d29a971edd0c0a90849227d71d2a1720436090bfc1809b33b6d52cfd6a7ffe WatchSource:0}: Error finding container b5d29a971edd0c0a90849227d71d2a1720436090bfc1809b33b6d52cfd6a7ffe: Status 404 returned error can't find the container with id b5d29a971edd0c0a90849227d71d2a1720436090bfc1809b33b6d52cfd6a7ffe Mar 19 11:53:53.071801 master-0 kubenswrapper[7454]: I0319 11:53:53.071728 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhdpf\" (UniqueName: \"kubernetes.io/projected/e7b28a5a-7aec-4894-b8e3-63a4104207f7-kube-api-access-jhdpf\") pod \"e7b28a5a-7aec-4894-b8e3-63a4104207f7\" (UID: \"e7b28a5a-7aec-4894-b8e3-63a4104207f7\") " Mar 19 11:53:53.073865 master-0 kubenswrapper[7454]: I0319 11:53:53.073778 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 19 11:53:53.075697 master-0 kubenswrapper[7454]: I0319 11:53:53.075664 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7b28a5a-7aec-4894-b8e3-63a4104207f7-kube-api-access-jhdpf" (OuterVolumeSpecName: "kube-api-access-jhdpf") pod "e7b28a5a-7aec-4894-b8e3-63a4104207f7" (UID: "e7b28a5a-7aec-4894-b8e3-63a4104207f7"). InnerVolumeSpecName "kube-api-access-jhdpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 11:53:53.076859 master-0 kubenswrapper[7454]: E0319 11:53:53.076810 7454 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 19 11:53:53.076917 master-0 kubenswrapper[7454]: E0319 11:53:53.076869 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/25be5572-c6f3-45df-8a9d-9d6f759200ac-client-ca podName:25be5572-c6f3-45df-8a9d-9d6f759200ac nodeName:}" failed. No retries permitted until 2026-03-19 11:53:53.576851131 +0000 UTC m=+3.207317044 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/25be5572-c6f3-45df-8a9d-9d6f759200ac-client-ca") pod "route-controller-manager-5d6d7c9966-r7q9d" (UID: "25be5572-c6f3-45df-8a9d-9d6f759200ac") : configmap "client-ca" not found Mar 19 11:53:53.090181 master-0 kubenswrapper[7454]: I0319 11:53:53.090141 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 19 11:53:53.110336 master-0 kubenswrapper[7454]: I0319 11:53:53.110293 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 11:53:53.112278 master-0 kubenswrapper[7454]: I0319 11:53:53.112257 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 11:53:53.117212 master-0 kubenswrapper[7454]: I0319 11:53:53.117189 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 11:53:53.138628 master-0 kubenswrapper[7454]: I0319 11:53:53.138583 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmq9j\" (UniqueName: \"kubernetes.io/projected/25be5572-c6f3-45df-8a9d-9d6f759200ac-kube-api-access-pmq9j\") pod \"route-controller-manager-5d6d7c9966-r7q9d\" (UID: \"25be5572-c6f3-45df-8a9d-9d6f759200ac\") " pod="openshift-route-controller-manager/route-controller-manager-5d6d7c9966-r7q9d" Mar 19 11:53:53.173943 master-0 kubenswrapper[7454]: I0319 11:53:53.173169 7454 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhdpf\" (UniqueName: \"kubernetes.io/projected/e7b28a5a-7aec-4894-b8e3-63a4104207f7-kube-api-access-jhdpf\") on node \"master-0\" DevicePath \"\"" Mar 19 11:53:53.279239 master-0 kubenswrapper[7454]: I0319 11:53:53.279196 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-v66z4"] Mar 19 11:53:53.288052 master-0 kubenswrapper[7454]: W0319 11:53:53.288003 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod616dbb32_6b65_4e44_a217_6b1be2844cc9.slice/crio-629e57f409989b86433406dbc0486de42ee1d2a4a26b2835682900a861605e8f WatchSource:0}: Error finding container 629e57f409989b86433406dbc0486de42ee1d2a4a26b2835682900a861605e8f: Status 404 returned error can't find the container with id 629e57f409989b86433406dbc0486de42ee1d2a4a26b2835682900a861605e8f Mar 19 11:53:53.380266 master-0 kubenswrapper[7454]: I0319 11:53:53.379957 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v" Mar 19 11:53:53.380428 master-0 kubenswrapper[7454]: E0319 11:53:53.380248 7454 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 19 11:53:53.380428 master-0 kubenswrapper[7454]: I0319 11:53:53.380285 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw" Mar 19 11:53:53.380428 master-0 kubenswrapper[7454]: E0319 11:53:53.380401 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert podName:85912908-c447-4868-871b-82c5eadbfdbe nodeName:}" failed. No retries permitted until 2026-03-19 11:53:55.380351396 +0000 UTC m=+5.010817499 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert") pod "cluster-version-operator-56d8475767-gjj5v" (UID: "85912908-c447-4868-871b-82c5eadbfdbe") : secret "cluster-version-operator-serving-cert" not found Mar 19 11:53:53.380428 master-0 kubenswrapper[7454]: E0319 11:53:53.380412 7454 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Mar 19 11:53:53.380541 master-0 kubenswrapper[7454]: I0319 11:53:53.380456 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-pr7gk\" (UID: \"b0f5939c-48b1-4d6c-9712-9128a78d603b\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 11:53:53.380541 master-0 kubenswrapper[7454]: E0319 11:53:53.380480 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls podName:d3541cbe-3be0-40d3-89d2-b5937b6a8f47 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:55.38045936 +0000 UTC m=+5.010925463 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls") pod "machine-config-operator-84d549f6d5-lswqw" (UID: "d3541cbe-3be0-40d3-89d2-b5937b6a8f47") : secret "mco-proxy-tls" not found Mar 19 11:53:53.380541 master-0 kubenswrapper[7454]: I0319 11:53:53.380502 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" Mar 19 11:53:53.380541 master-0 kubenswrapper[7454]: I0319 11:53:53.380536 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert\") pod \"catalog-operator-68f85b4d6c-2trz4\" (UID: \"bdcdb23d-ef1f-45e2-b9ac-7abf707637b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4" Mar 19 11:53:53.380654 master-0 kubenswrapper[7454]: E0319 11:53:53.380559 7454 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 19 11:53:53.380654 master-0 kubenswrapper[7454]: I0319 11:53:53.380564 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:53.380654 master-0 kubenswrapper[7454]: I0319 11:53:53.380590 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert\") pod \"olm-operator-5c9796789-8cldl\" (UID: \"87a3f546-e1c1-42a1-b80e-d45b6d5c0a04\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl" Mar 19 11:53:53.380736 master-0 kubenswrapper[7454]: E0319 11:53:53.380608 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics podName:b0f5939c-48b1-4d6c-9712-9128a78d603b nodeName:}" failed. No retries permitted until 2026-03-19 11:53:55.380591244 +0000 UTC m=+5.011057147 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-pr7gk" (UID: "b0f5939c-48b1-4d6c-9712-9128a78d603b") : secret "marketplace-operator-metrics" not found Mar 19 11:53:53.380736 master-0 kubenswrapper[7454]: I0319 11:53:53.380696 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:53.380736 master-0 kubenswrapper[7454]: E0319 11:53:53.380705 7454 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 19 11:53:53.380736 master-0 kubenswrapper[7454]: E0319 11:53:53.380724 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert podName:bdcdb23d-ef1f-45e2-b9ac-7abf707637b6 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:55.380718348 +0000 UTC m=+5.011184261 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert") pod "catalog-operator-68f85b4d6c-2trz4" (UID: "bdcdb23d-ef1f-45e2-b9ac-7abf707637b6") : secret "catalog-operator-serving-cert" not found Mar 19 11:53:53.380870 master-0 kubenswrapper[7454]: I0319 11:53:53.380740 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:53.380870 master-0 kubenswrapper[7454]: E0319 11:53:53.380756 7454 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 19 11:53:53.380870 master-0 kubenswrapper[7454]: I0319 11:53:53.380767 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs\") pod \"network-metrics-daemon-6t6sn\" (UID: \"398bcaca-1bea-4633-a78f-717e3d015ddd\") " pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:53:53.380870 master-0 kubenswrapper[7454]: E0319 11:53:53.380784 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls podName:19de6601-10d4-4112-a21f-0398d2b160d1 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:55.38077628 +0000 UTC m=+5.011242193 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-ftml6" (UID: "19de6601-10d4-4112-a21f-0398d2b160d1") : secret "cluster-baremetal-operator-tls" not found Mar 19 11:53:53.380870 master-0 kubenswrapper[7454]: E0319 11:53:53.380641 7454 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 19 11:53:53.380870 master-0 kubenswrapper[7454]: E0319 11:53:53.380807 7454 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 19 11:53:53.380870 master-0 kubenswrapper[7454]: I0319 11:53:53.380832 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-92c5d\" (UID: \"7241bf11-192e-47db-9d80-2324938ed34c\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d" Mar 19 11:53:53.380870 master-0 kubenswrapper[7454]: E0319 11:53:53.380845 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls podName:b80027fd-7b39-477a-a337-ff9bb08e7eeb nodeName:}" failed. No retries permitted until 2026-03-19 11:53:55.380817801 +0000 UTC m=+5.011283914 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls") pod "ingress-operator-66b84d69b-btppx" (UID: "b80027fd-7b39-477a-a337-ff9bb08e7eeb") : secret "metrics-tls" not found Mar 19 11:53:53.380870 master-0 kubenswrapper[7454]: E0319 11:53:53.380677 7454 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 19 11:53:53.380870 master-0 kubenswrapper[7454]: E0319 11:53:53.380864 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert podName:87a3f546-e1c1-42a1-b80e-d45b6d5c0a04 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:55.380854702 +0000 UTC m=+5.011320615 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert") pod "olm-operator-5c9796789-8cldl" (UID: "87a3f546-e1c1-42a1-b80e-d45b6d5c0a04") : secret "olm-operator-serving-cert" not found Mar 19 11:53:53.381122 master-0 kubenswrapper[7454]: E0319 11:53:53.380899 7454 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 19 11:53:53.381122 master-0 kubenswrapper[7454]: E0319 11:53:53.380912 7454 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 19 11:53:53.381122 master-0 kubenswrapper[7454]: I0319 11:53:53.380908 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:53.381122 master-0 kubenswrapper[7454]: E0319 11:53:53.380926 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls podName:7241bf11-192e-47db-9d80-2324938ed34c nodeName:}" failed. No retries permitted until 2026-03-19 11:53:55.380918844 +0000 UTC m=+5.011384757 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-92c5d" (UID: "7241bf11-192e-47db-9d80-2324938ed34c") : secret "cluster-monitoring-operator-tls" not found Mar 19 11:53:53.381122 master-0 kubenswrapper[7454]: E0319 11:53:53.380941 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls podName:63c12a89-1b49-4eba-8f5a-551b10d2246b nodeName:}" failed. No retries permitted until 2026-03-19 11:53:55.380932935 +0000 UTC m=+5.011399048 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-zfsqt" (UID: "63c12a89-1b49-4eba-8f5a-551b10d2246b") : secret "node-tuning-operator-tls" not found Mar 19 11:53:53.381122 master-0 kubenswrapper[7454]: E0319 11:53:53.380959 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert podName:19de6601-10d4-4112-a21f-0398d2b160d1 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:55.380951605 +0000 UTC m=+5.011417748 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert") pod "cluster-baremetal-operator-6f69995874-ftml6" (UID: "19de6601-10d4-4112-a21f-0398d2b160d1") : secret "cluster-baremetal-webhook-server-cert" not found Mar 19 11:53:53.381122 master-0 kubenswrapper[7454]: E0319 11:53:53.381105 7454 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 19 11:53:53.381122 master-0 kubenswrapper[7454]: E0319 11:53:53.381099 7454 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 19 11:53:53.381352 master-0 kubenswrapper[7454]: E0319 11:53:53.381131 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert podName:63c12a89-1b49-4eba-8f5a-551b10d2246b nodeName:}" failed. No retries permitted until 2026-03-19 11:53:55.381122601 +0000 UTC m=+5.011588504 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-zfsqt" (UID: "63c12a89-1b49-4eba-8f5a-551b10d2246b") : secret "performance-addon-operator-webhook-cert" not found Mar 19 11:53:53.381352 master-0 kubenswrapper[7454]: I0319 11:53:53.381255 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6" Mar 19 11:53:53.381352 master-0 kubenswrapper[7454]: I0319 11:53:53.381279 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-fz8cg\" (UID: \"806a4c30-7b93-4430-86da-f9e1f4f2d206\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg" Mar 19 11:53:53.381352 master-0 kubenswrapper[7454]: E0319 11:53:53.381306 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs podName:398bcaca-1bea-4633-a78f-717e3d015ddd nodeName:}" failed. No retries permitted until 2026-03-19 11:53:55.381289576 +0000 UTC m=+5.011755649 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs") pod "network-metrics-daemon-6t6sn" (UID: "398bcaca-1bea-4633-a78f-717e3d015ddd") : secret "metrics-daemon-secret" not found Mar 19 11:53:53.381352 master-0 kubenswrapper[7454]: I0319 11:53:53.381332 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls\") pod \"dns-operator-9c5679d8f-z6kvm\" (UID: \"ab54833d-e57b-479d-b171-68155f6566f1\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-z6kvm" Mar 19 11:53:53.381352 master-0 kubenswrapper[7454]: E0319 11:53:53.381344 7454 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 19 11:53:53.381556 master-0 kubenswrapper[7454]: E0319 11:53:53.381374 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs podName:806a4c30-7b93-4430-86da-f9e1f4f2d206 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:55.381366698 +0000 UTC m=+5.011832611 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-fz8cg" (UID: "806a4c30-7b93-4430-86da-f9e1f4f2d206") : secret "multus-admission-controller-secret" not found Mar 19 11:53:53.381556 master-0 kubenswrapper[7454]: E0319 11:53:53.381391 7454 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 19 11:53:53.381556 master-0 kubenswrapper[7454]: E0319 11:53:53.381393 7454 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 19 11:53:53.381556 master-0 kubenswrapper[7454]: E0319 11:53:53.381419 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls podName:ab54833d-e57b-479d-b171-68155f6566f1 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:55.38140385 +0000 UTC m=+5.011869763 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls") pod "dns-operator-9c5679d8f-z6kvm" (UID: "ab54833d-e57b-479d-b171-68155f6566f1") : secret "metrics-tls" not found Mar 19 11:53:53.381556 master-0 kubenswrapper[7454]: E0319 11:53:53.381439 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls podName:82b98dca-59f9-42be-94ca-4a2a2b6fea0f nodeName:}" failed. No retries permitted until 2026-03-19 11:53:55.38142778 +0000 UTC m=+5.011893693 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-g6sn6" (UID: "82b98dca-59f9-42be-94ca-4a2a2b6fea0f") : secret "image-registry-operator-tls" not found Mar 19 11:53:53.381556 master-0 kubenswrapper[7454]: I0319 11:53:53.381472 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-6j2nj\" (UID: \"beb562de-402b-4d9f-b5ed-090b60847a95\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj" Mar 19 11:53:53.381825 master-0 kubenswrapper[7454]: E0319 11:53:53.381585 7454 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 19 11:53:53.381825 master-0 kubenswrapper[7454]: E0319 11:53:53.381625 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert podName:beb562de-402b-4d9f-b5ed-090b60847a95 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:55.381616506 +0000 UTC m=+5.012082419 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-6j2nj" (UID: "beb562de-402b-4d9f-b5ed-090b60847a95") : secret "package-server-manager-serving-cert" not found Mar 19 11:53:53.583351 master-0 kubenswrapper[7454]: I0319 11:53:53.583293 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/25be5572-c6f3-45df-8a9d-9d6f759200ac-client-ca\") pod \"route-controller-manager-5d6d7c9966-r7q9d\" (UID: \"25be5572-c6f3-45df-8a9d-9d6f759200ac\") " pod="openshift-route-controller-manager/route-controller-manager-5d6d7c9966-r7q9d" Mar 19 11:53:53.584190 master-0 kubenswrapper[7454]: E0319 11:53:53.583715 7454 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 19 11:53:53.584190 master-0 kubenswrapper[7454]: E0319 11:53:53.583788 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/25be5572-c6f3-45df-8a9d-9d6f759200ac-client-ca podName:25be5572-c6f3-45df-8a9d-9d6f759200ac nodeName:}" failed. No retries permitted until 2026-03-19 11:53:54.583768468 +0000 UTC m=+4.214234381 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/25be5572-c6f3-45df-8a9d-9d6f759200ac-client-ca") pod "route-controller-manager-5d6d7c9966-r7q9d" (UID: "25be5572-c6f3-45df-8a9d-9d6f759200ac") : configmap "client-ca" not found Mar 19 11:53:53.585166 master-0 kubenswrapper[7454]: I0319 11:53:53.585112 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7b28a5a-7aec-4894-b8e3-63a4104207f7-serving-cert\") pod \"controller-manager-f5df8899c-dc825\" (UID: \"e7b28a5a-7aec-4894-b8e3-63a4104207f7\") " pod="openshift-controller-manager/controller-manager-f5df8899c-dc825" Mar 19 11:53:53.585214 master-0 kubenswrapper[7454]: I0319 11:53:53.585178 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25be5572-c6f3-45df-8a9d-9d6f759200ac-serving-cert\") pod \"route-controller-manager-5d6d7c9966-r7q9d\" (UID: \"25be5572-c6f3-45df-8a9d-9d6f759200ac\") " pod="openshift-route-controller-manager/route-controller-manager-5d6d7c9966-r7q9d" Mar 19 11:53:53.585428 master-0 kubenswrapper[7454]: E0319 11:53:53.585387 7454 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 19 11:53:53.585472 master-0 kubenswrapper[7454]: E0319 11:53:53.585455 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/25be5572-c6f3-45df-8a9d-9d6f759200ac-serving-cert podName:25be5572-c6f3-45df-8a9d-9d6f759200ac nodeName:}" failed. No retries permitted until 2026-03-19 11:53:54.58543563 +0000 UTC m=+4.215901573 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/25be5572-c6f3-45df-8a9d-9d6f759200ac-serving-cert") pod "route-controller-manager-5d6d7c9966-r7q9d" (UID: "25be5572-c6f3-45df-8a9d-9d6f759200ac") : secret "serving-cert" not found Mar 19 11:53:53.585533 master-0 kubenswrapper[7454]: E0319 11:53:53.585515 7454 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 19 11:53:53.585578 master-0 kubenswrapper[7454]: E0319 11:53:53.585551 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7b28a5a-7aec-4894-b8e3-63a4104207f7-serving-cert podName:e7b28a5a-7aec-4894-b8e3-63a4104207f7 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:55.585542363 +0000 UTC m=+5.216008316 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e7b28a5a-7aec-4894-b8e3-63a4104207f7-serving-cert") pod "controller-manager-f5df8899c-dc825" (UID: "e7b28a5a-7aec-4894-b8e3-63a4104207f7") : secret "serving-cert" not found Mar 19 11:53:53.686694 master-0 kubenswrapper[7454]: I0319 11:53:53.686566 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e7b28a5a-7aec-4894-b8e3-63a4104207f7-client-ca\") pod \"controller-manager-f5df8899c-dc825\" (UID: \"e7b28a5a-7aec-4894-b8e3-63a4104207f7\") " pod="openshift-controller-manager/controller-manager-f5df8899c-dc825" Mar 19 11:53:53.686910 master-0 kubenswrapper[7454]: E0319 11:53:53.686710 7454 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 19 11:53:53.686910 master-0 kubenswrapper[7454]: E0319 11:53:53.686794 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e7b28a5a-7aec-4894-b8e3-63a4104207f7-client-ca podName:e7b28a5a-7aec-4894-b8e3-63a4104207f7 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:55.686770504 +0000 UTC m=+5.317236457 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e7b28a5a-7aec-4894-b8e3-63a4104207f7-client-ca") pod "controller-manager-f5df8899c-dc825" (UID: "e7b28a5a-7aec-4894-b8e3-63a4104207f7") : configmap "client-ca" not found Mar 19 11:53:53.815816 master-0 kubenswrapper[7454]: I0319 11:53:53.815742 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" event={"ID":"944eac68-e72b-4aed-b5dc-d7d9703178a3","Type":"ContainerStarted","Data":"fcd57352498da84e6fbc9969ab5176b5b32433301a69ada5c5c0571371a536da"} Mar 19 11:53:53.816973 master-0 kubenswrapper[7454]: I0319 11:53:53.816947 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-8487694857-99fgs" event={"ID":"d975e831-7348-41b9-9622-f4a503674c38","Type":"ContainerStarted","Data":"b5d29a971edd0c0a90849227d71d2a1720436090bfc1809b33b6d52cfd6a7ffe"} Mar 19 11:53:53.819246 master-0 kubenswrapper[7454]: I0319 11:53:53.819225 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-dc825" Mar 19 11:53:53.820242 master-0 kubenswrapper[7454]: I0319 11:53:53.820207 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-v66z4" event={"ID":"616dbb32-6b65-4e44-a217-6b1be2844cc9","Type":"ContainerStarted","Data":"f3bc9b7e698c4f7daf5085f2f417d2e071d8485797648994acba05453ed46446"} Mar 19 11:53:53.820300 master-0 kubenswrapper[7454]: I0319 11:53:53.820246 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-v66z4" event={"ID":"616dbb32-6b65-4e44-a217-6b1be2844cc9","Type":"ContainerStarted","Data":"629e57f409989b86433406dbc0486de42ee1d2a4a26b2835682900a861605e8f"} Mar 19 11:53:53.921926 master-0 kubenswrapper[7454]: I0319 11:53:53.921767 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 11:53:53.927191 master-0 kubenswrapper[7454]: I0319 11:53:53.927163 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 11:53:54.599204 master-0 kubenswrapper[7454]: I0319 11:53:54.599143 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25be5572-c6f3-45df-8a9d-9d6f759200ac-serving-cert\") pod \"route-controller-manager-5d6d7c9966-r7q9d\" (UID: \"25be5572-c6f3-45df-8a9d-9d6f759200ac\") " pod="openshift-route-controller-manager/route-controller-manager-5d6d7c9966-r7q9d" Mar 19 11:53:54.599926 master-0 kubenswrapper[7454]: I0319 11:53:54.599404 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/25be5572-c6f3-45df-8a9d-9d6f759200ac-client-ca\") pod \"route-controller-manager-5d6d7c9966-r7q9d\" (UID: \"25be5572-c6f3-45df-8a9d-9d6f759200ac\") " pod="openshift-route-controller-manager/route-controller-manager-5d6d7c9966-r7q9d" Mar 19 11:53:54.599926 master-0 kubenswrapper[7454]: E0319 11:53:54.599435 7454 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 19 11:53:54.599926 master-0 kubenswrapper[7454]: E0319 11:53:54.599533 7454 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 19 11:53:54.599926 master-0 kubenswrapper[7454]: E0319 11:53:54.599553 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/25be5572-c6f3-45df-8a9d-9d6f759200ac-serving-cert podName:25be5572-c6f3-45df-8a9d-9d6f759200ac nodeName:}" failed. No retries permitted until 2026-03-19 11:53:56.599521621 +0000 UTC m=+6.229987534 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/25be5572-c6f3-45df-8a9d-9d6f759200ac-serving-cert") pod "route-controller-manager-5d6d7c9966-r7q9d" (UID: "25be5572-c6f3-45df-8a9d-9d6f759200ac") : secret "serving-cert" not found Mar 19 11:53:54.599926 master-0 kubenswrapper[7454]: E0319 11:53:54.599601 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/25be5572-c6f3-45df-8a9d-9d6f759200ac-client-ca podName:25be5572-c6f3-45df-8a9d-9d6f759200ac nodeName:}" failed. No retries permitted until 2026-03-19 11:53:56.599578673 +0000 UTC m=+6.230044766 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/25be5572-c6f3-45df-8a9d-9d6f759200ac-client-ca") pod "route-controller-manager-5d6d7c9966-r7q9d" (UID: "25be5572-c6f3-45df-8a9d-9d6f759200ac") : configmap "client-ca" not found Mar 19 11:53:54.658360 master-0 kubenswrapper[7454]: I0319 11:53:54.658311 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-dc825"] Mar 19 11:53:54.663335 master-0 kubenswrapper[7454]: I0319 11:53:54.662300 7454 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-dc825"] Mar 19 11:53:54.801774 master-0 kubenswrapper[7454]: I0319 11:53:54.801710 7454 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7b28a5a-7aec-4894-b8e3-63a4104207f7-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 11:53:54.801774 master-0 kubenswrapper[7454]: I0319 11:53:54.801751 7454 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e7b28a5a-7aec-4894-b8e3-63a4104207f7-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 19 11:53:54.828172 master-0 kubenswrapper[7454]: I0319 11:53:54.826132 7454 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 11:53:54.828172 master-0 kubenswrapper[7454]: I0319 11:53:54.826772 7454 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 11:53:54.828172 master-0 kubenswrapper[7454]: I0319 11:53:54.826782 7454 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 11:53:55.377372 master-0 kubenswrapper[7454]: I0319 11:53:55.377313 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6484d6777d-wmpqv"] Mar 19 11:53:55.377789 master-0 kubenswrapper[7454]: I0319 11:53:55.377758 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6484d6777d-wmpqv" Mar 19 11:53:55.379881 master-0 kubenswrapper[7454]: I0319 11:53:55.379858 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 19 11:53:55.380189 master-0 kubenswrapper[7454]: I0319 11:53:55.380137 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 19 11:53:55.380356 master-0 kubenswrapper[7454]: I0319 11:53:55.380307 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 19 11:53:55.380412 master-0 kubenswrapper[7454]: I0319 11:53:55.380316 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 19 11:53:55.383312 master-0 kubenswrapper[7454]: I0319 11:53:55.383275 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 19 11:53:55.389469 master-0 kubenswrapper[7454]: I0319 11:53:55.389427 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6484d6777d-wmpqv"] Mar 19 11:53:55.391713 master-0 kubenswrapper[7454]: I0319 11:53:55.391666 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 19 11:53:55.410189 master-0 kubenswrapper[7454]: I0319 11:53:55.410065 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-6j2nj\" (UID: \"beb562de-402b-4d9f-b5ed-090b60847a95\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj" Mar 19 11:53:55.410786 master-0 kubenswrapper[7454]: I0319 11:53:55.410201 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v" Mar 19 11:53:55.410786 master-0 kubenswrapper[7454]: I0319 11:53:55.410226 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw" Mar 19 11:53:55.410786 master-0 kubenswrapper[7454]: I0319 11:53:55.410251 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" Mar 19 11:53:55.410786 master-0 kubenswrapper[7454]: I0319 11:53:55.410273 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics\") pod 
\"marketplace-operator-89ccd998f-pr7gk\" (UID: \"b0f5939c-48b1-4d6c-9712-9128a78d603b\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 11:53:55.410786 master-0 kubenswrapper[7454]: I0319 11:53:55.410300 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:55.410786 master-0 kubenswrapper[7454]: I0319 11:53:55.410320 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert\") pod \"catalog-operator-68f85b4d6c-2trz4\" (UID: \"bdcdb23d-ef1f-45e2-b9ac-7abf707637b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4" Mar 19 11:53:55.410786 master-0 kubenswrapper[7454]: I0319 11:53:55.410341 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:55.410786 master-0 kubenswrapper[7454]: I0319 11:53:55.410359 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert\") pod \"olm-operator-5c9796789-8cldl\" (UID: \"87a3f546-e1c1-42a1-b80e-d45b6d5c0a04\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl" Mar 19 11:53:55.410786 master-0 kubenswrapper[7454]: I0319 11:53:55.410385 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:55.410786 master-0 kubenswrapper[7454]: I0319 11:53:55.410405 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs\") pod \"network-metrics-daemon-6t6sn\" (UID: \"398bcaca-1bea-4633-a78f-717e3d015ddd\") " pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:53:55.410786 master-0 kubenswrapper[7454]: I0319 11:53:55.410427 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-92c5d\" (UID: \"7241bf11-192e-47db-9d80-2324938ed34c\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d" Mar 19 11:53:55.410786 master-0 kubenswrapper[7454]: I0319 11:53:55.410447 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert\") pod 
\"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:55.410786 master-0 kubenswrapper[7454]: I0319 11:53:55.410481 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-fz8cg\" (UID: \"806a4c30-7b93-4430-86da-f9e1f4f2d206\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg" Mar 19 11:53:55.410786 master-0 kubenswrapper[7454]: E0319 11:53:55.410787 7454 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 19 11:53:55.411472 master-0 kubenswrapper[7454]: E0319 11:53:55.410889 7454 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 19 11:53:55.411472 master-0 kubenswrapper[7454]: E0319 11:53:55.410937 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert podName:85912908-c447-4868-871b-82c5eadbfdbe nodeName:}" failed. No retries permitted until 2026-03-19 11:53:59.410921284 +0000 UTC m=+9.041387197 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert") pod "cluster-version-operator-56d8475767-gjj5v" (UID: "85912908-c447-4868-871b-82c5eadbfdbe") : secret "cluster-version-operator-serving-cert" not found Mar 19 11:53:55.411472 master-0 kubenswrapper[7454]: I0319 11:53:55.411018 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6" Mar 19 11:53:55.411472 master-0 kubenswrapper[7454]: I0319 11:53:55.411062 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls\") pod \"dns-operator-9c5679d8f-z6kvm\" (UID: \"ab54833d-e57b-479d-b171-68155f6566f1\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-z6kvm" Mar 19 11:53:55.411472 master-0 kubenswrapper[7454]: E0319 11:53:55.411189 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls podName:82b98dca-59f9-42be-94ca-4a2a2b6fea0f nodeName:}" failed. No retries permitted until 2026-03-19 11:53:59.411180282 +0000 UTC m=+9.041646195 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-g6sn6" (UID: "82b98dca-59f9-42be-94ca-4a2a2b6fea0f") : secret "image-registry-operator-tls" not found Mar 19 11:53:55.411472 master-0 kubenswrapper[7454]: E0319 11:53:55.411205 7454 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 19 11:53:55.411472 master-0 kubenswrapper[7454]: E0319 11:53:55.411234 7454 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 19 11:53:55.411472 master-0 kubenswrapper[7454]: E0319 11:53:55.411250 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls podName:ab54833d-e57b-479d-b171-68155f6566f1 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:59.411228094 +0000 UTC m=+9.041694007 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls") pod "dns-operator-9c5679d8f-z6kvm" (UID: "ab54833d-e57b-479d-b171-68155f6566f1") : secret "metrics-tls" not found Mar 19 11:53:55.411472 master-0 kubenswrapper[7454]: E0319 11:53:55.411264 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert podName:87a3f546-e1c1-42a1-b80e-d45b6d5c0a04 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:59.411258815 +0000 UTC m=+9.041724728 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert") pod "olm-operator-5c9796789-8cldl" (UID: "87a3f546-e1c1-42a1-b80e-d45b6d5c0a04") : secret "olm-operator-serving-cert" not found Mar 19 11:53:55.411472 master-0 kubenswrapper[7454]: E0319 11:53:55.411348 7454 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 19 11:53:55.411472 master-0 kubenswrapper[7454]: E0319 11:53:55.411372 7454 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 19 11:53:55.411472 master-0 kubenswrapper[7454]: E0319 11:53:55.411369 7454 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 19 11:53:55.411472 master-0 kubenswrapper[7454]: E0319 11:53:55.411408 7454 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 19 11:53:55.411472 master-0 kubenswrapper[7454]: E0319 11:53:55.411371 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics podName:b0f5939c-48b1-4d6c-9712-9128a78d603b nodeName:}" failed. No retries permitted until 2026-03-19 11:53:59.411364678 +0000 UTC m=+9.041830591 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-pr7gk" (UID: "b0f5939c-48b1-4d6c-9712-9128a78d603b") : secret "marketplace-operator-metrics" not found Mar 19 11:53:55.411472 master-0 kubenswrapper[7454]: E0319 11:53:55.411478 7454 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 19 11:53:55.412033 master-0 kubenswrapper[7454]: E0319 11:53:55.411484 7454 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 19 11:53:55.412033 master-0 kubenswrapper[7454]: E0319 11:53:55.411501 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert podName:bdcdb23d-ef1f-45e2-b9ac-7abf707637b6 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:59.411470981 +0000 UTC m=+9.041936894 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert") pod "catalog-operator-68f85b4d6c-2trz4" (UID: "bdcdb23d-ef1f-45e2-b9ac-7abf707637b6") : secret "catalog-operator-serving-cert" not found Mar 19 11:53:55.412033 master-0 kubenswrapper[7454]: E0319 11:53:55.411523 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs podName:806a4c30-7b93-4430-86da-f9e1f4f2d206 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:59.411512393 +0000 UTC m=+9.041978306 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-fz8cg" (UID: "806a4c30-7b93-4430-86da-f9e1f4f2d206") : secret "multus-admission-controller-secret" not found Mar 19 11:53:55.412033 master-0 kubenswrapper[7454]: E0319 11:53:55.411521 7454 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 19 11:53:55.412033 master-0 kubenswrapper[7454]: E0319 11:53:55.411548 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert podName:beb562de-402b-4d9f-b5ed-090b60847a95 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:59.411532163 +0000 UTC m=+9.041998076 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-6j2nj" (UID: "beb562de-402b-4d9f-b5ed-090b60847a95") : secret "package-server-manager-serving-cert" not found Mar 19 11:53:55.412033 master-0 kubenswrapper[7454]: E0319 11:53:55.411565 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls podName:19de6601-10d4-4112-a21f-0398d2b160d1 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:59.411556694 +0000 UTC m=+9.042022607 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-ftml6" (UID: "19de6601-10d4-4112-a21f-0398d2b160d1") : secret "cluster-baremetal-operator-tls" not found Mar 19 11:53:55.412033 master-0 kubenswrapper[7454]: E0319 11:53:55.411580 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls podName:7241bf11-192e-47db-9d80-2324938ed34c nodeName:}" failed. No retries permitted until 2026-03-19 11:53:59.411572055 +0000 UTC m=+9.042037968 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-92c5d" (UID: "7241bf11-192e-47db-9d80-2324938ed34c") : secret "cluster-monitoring-operator-tls" not found Mar 19 11:53:55.412033 master-0 kubenswrapper[7454]: E0319 11:53:55.411608 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls podName:63c12a89-1b49-4eba-8f5a-551b10d2246b nodeName:}" failed. No retries permitted until 2026-03-19 11:53:59.411586945 +0000 UTC m=+9.042052858 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-zfsqt" (UID: "63c12a89-1b49-4eba-8f5a-551b10d2246b") : secret "node-tuning-operator-tls" not found Mar 19 11:53:55.412033 master-0 kubenswrapper[7454]: E0319 11:53:55.411618 7454 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 19 11:53:55.412033 master-0 kubenswrapper[7454]: E0319 11:53:55.411668 7454 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 19 11:53:55.412033 master-0 kubenswrapper[7454]: E0319 11:53:55.411700 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls podName:b80027fd-7b39-477a-a337-ff9bb08e7eeb nodeName:}" failed. No retries permitted until 2026-03-19 11:53:59.411688568 +0000 UTC m=+9.042154481 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls") pod "ingress-operator-66b84d69b-btppx" (UID: "b80027fd-7b39-477a-a337-ff9bb08e7eeb") : secret "metrics-tls" not found Mar 19 11:53:55.412033 master-0 kubenswrapper[7454]: E0319 11:53:55.411712 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert podName:63c12a89-1b49-4eba-8f5a-551b10d2246b nodeName:}" failed. No retries permitted until 2026-03-19 11:53:59.411705879 +0000 UTC m=+9.042171792 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-zfsqt" (UID: "63c12a89-1b49-4eba-8f5a-551b10d2246b") : secret "performance-addon-operator-webhook-cert" not found Mar 19 11:53:55.412033 master-0 kubenswrapper[7454]: E0319 11:53:55.411791 7454 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 19 11:53:55.412033 master-0 kubenswrapper[7454]: E0319 11:53:55.411841 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert podName:19de6601-10d4-4112-a21f-0398d2b160d1 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:59.411831133 +0000 UTC m=+9.042297046 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert") pod "cluster-baremetal-operator-6f69995874-ftml6" (UID: "19de6601-10d4-4112-a21f-0398d2b160d1") : secret "cluster-baremetal-webhook-server-cert" not found Mar 19 11:53:55.412033 master-0 kubenswrapper[7454]: E0319 11:53:55.411859 7454 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 19 11:53:55.412033 master-0 kubenswrapper[7454]: E0319 11:53:55.411891 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs podName:398bcaca-1bea-4633-a78f-717e3d015ddd nodeName:}" failed. No retries permitted until 2026-03-19 11:53:59.411881644 +0000 UTC m=+9.042347557 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs") pod "network-metrics-daemon-6t6sn" (UID: "398bcaca-1bea-4633-a78f-717e3d015ddd") : secret "metrics-daemon-secret" not found Mar 19 11:53:55.412033 master-0 kubenswrapper[7454]: E0319 11:53:55.411893 7454 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Mar 19 11:53:55.412033 master-0 kubenswrapper[7454]: E0319 11:53:55.411935 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls podName:d3541cbe-3be0-40d3-89d2-b5937b6a8f47 nodeName:}" failed. No retries permitted until 2026-03-19 11:53:59.411924646 +0000 UTC m=+9.042390559 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls") pod "machine-config-operator-84d549f6d5-lswqw" (UID: "d3541cbe-3be0-40d3-89d2-b5937b6a8f47") : secret "mco-proxy-tls" not found Mar 19 11:53:55.514040 master-0 kubenswrapper[7454]: I0319 11:53:55.512458 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9glx9\" (UniqueName: \"kubernetes.io/projected/921791b6-51d2-4d7c-995b-488a37f85b3f-kube-api-access-9glx9\") pod \"controller-manager-6484d6777d-wmpqv\" (UID: \"921791b6-51d2-4d7c-995b-488a37f85b3f\") " pod="openshift-controller-manager/controller-manager-6484d6777d-wmpqv" Mar 19 11:53:55.514040 master-0 kubenswrapper[7454]: I0319 11:53:55.512519 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/921791b6-51d2-4d7c-995b-488a37f85b3f-config\") pod \"controller-manager-6484d6777d-wmpqv\" (UID: \"921791b6-51d2-4d7c-995b-488a37f85b3f\") " pod="openshift-controller-manager/controller-manager-6484d6777d-wmpqv" Mar 19 11:53:55.514040 master-0 kubenswrapper[7454]: I0319 11:53:55.512560 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/921791b6-51d2-4d7c-995b-488a37f85b3f-client-ca\") pod \"controller-manager-6484d6777d-wmpqv\" (UID: \"921791b6-51d2-4d7c-995b-488a37f85b3f\") " pod="openshift-controller-manager/controller-manager-6484d6777d-wmpqv" Mar 19 11:53:55.514040 master-0 kubenswrapper[7454]: I0319 11:53:55.512577 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/921791b6-51d2-4d7c-995b-488a37f85b3f-serving-cert\") pod \"controller-manager-6484d6777d-wmpqv\" (UID: \"921791b6-51d2-4d7c-995b-488a37f85b3f\") " pod="openshift-controller-manager/controller-manager-6484d6777d-wmpqv" Mar 19 11:53:55.514040 master-0 kubenswrapper[7454]: I0319 11:53:55.512628 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/921791b6-51d2-4d7c-995b-488a37f85b3f-proxy-ca-bundles\") pod \"controller-manager-6484d6777d-wmpqv\" (UID: \"921791b6-51d2-4d7c-995b-488a37f85b3f\") " pod="openshift-controller-manager/controller-manager-6484d6777d-wmpqv" Mar 19 11:53:55.613338 master-0 kubenswrapper[7454]: I0319 11:53:55.613247 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/921791b6-51d2-4d7c-995b-488a37f85b3f-proxy-ca-bundles\") pod \"controller-manager-6484d6777d-wmpqv\" (UID: \"921791b6-51d2-4d7c-995b-488a37f85b3f\") " pod="openshift-controller-manager/controller-manager-6484d6777d-wmpqv" Mar 19 11:53:55.614399 master-0 kubenswrapper[7454]: I0319 11:53:55.613530 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9glx9\" (UniqueName: \"kubernetes.io/projected/921791b6-51d2-4d7c-995b-488a37f85b3f-kube-api-access-9glx9\") pod \"controller-manager-6484d6777d-wmpqv\" (UID: \"921791b6-51d2-4d7c-995b-488a37f85b3f\") " pod="openshift-controller-manager/controller-manager-6484d6777d-wmpqv" Mar 19 11:53:55.614399 master-0 kubenswrapper[7454]: I0319 11:53:55.613613 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/921791b6-51d2-4d7c-995b-488a37f85b3f-config\") pod \"controller-manager-6484d6777d-wmpqv\" (UID: \"921791b6-51d2-4d7c-995b-488a37f85b3f\") " pod="openshift-controller-manager/controller-manager-6484d6777d-wmpqv" Mar 19 11:53:55.614399 master-0 kubenswrapper[7454]: I0319 11:53:55.613669 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/921791b6-51d2-4d7c-995b-488a37f85b3f-client-ca\") pod \"controller-manager-6484d6777d-wmpqv\" (UID: \"921791b6-51d2-4d7c-995b-488a37f85b3f\") " pod="openshift-controller-manager/controller-manager-6484d6777d-wmpqv" Mar 19 11:53:55.614399 master-0 kubenswrapper[7454]: I0319 11:53:55.613688 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/921791b6-51d2-4d7c-995b-488a37f85b3f-serving-cert\") pod \"controller-manager-6484d6777d-wmpqv\" (UID: \"921791b6-51d2-4d7c-995b-488a37f85b3f\") " pod="openshift-controller-manager/controller-manager-6484d6777d-wmpqv" Mar 19 11:53:55.614399 master-0 kubenswrapper[7454]: E0319 11:53:55.613890 7454 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 19 11:53:55.614399 master-0 kubenswrapper[7454]: E0319 11:53:55.613947 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/921791b6-51d2-4d7c-995b-488a37f85b3f-serving-cert podName:921791b6-51d2-4d7c-995b-488a37f85b3f nodeName:}" failed. No retries permitted until 2026-03-19 11:53:56.113931913 +0000 UTC m=+5.744397826 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/921791b6-51d2-4d7c-995b-488a37f85b3f-serving-cert") pod "controller-manager-6484d6777d-wmpqv" (UID: "921791b6-51d2-4d7c-995b-488a37f85b3f") : secret "serving-cert" not found Mar 19 11:53:55.615044 master-0 kubenswrapper[7454]: I0319 11:53:55.614998 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/921791b6-51d2-4d7c-995b-488a37f85b3f-proxy-ca-bundles\") pod \"controller-manager-6484d6777d-wmpqv\" (UID: \"921791b6-51d2-4d7c-995b-488a37f85b3f\") " pod="openshift-controller-manager/controller-manager-6484d6777d-wmpqv" Mar 19 11:53:55.615115 master-0 kubenswrapper[7454]: E0319 11:53:55.615094 7454 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 19 11:53:55.615167 master-0 kubenswrapper[7454]: E0319 11:53:55.615152 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921791b6-51d2-4d7c-995b-488a37f85b3f-client-ca podName:921791b6-51d2-4d7c-995b-488a37f85b3f nodeName:}" failed. No retries permitted until 2026-03-19 11:53:56.1151329 +0000 UTC m=+5.745598883 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/921791b6-51d2-4d7c-995b-488a37f85b3f-client-ca") pod "controller-manager-6484d6777d-wmpqv" (UID: "921791b6-51d2-4d7c-995b-488a37f85b3f") : configmap "client-ca" not found Mar 19 11:53:55.615270 master-0 kubenswrapper[7454]: I0319 11:53:55.615239 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/921791b6-51d2-4d7c-995b-488a37f85b3f-config\") pod \"controller-manager-6484d6777d-wmpqv\" (UID: \"921791b6-51d2-4d7c-995b-488a37f85b3f\") " pod="openshift-controller-manager/controller-manager-6484d6777d-wmpqv" Mar 19 11:53:55.633716 master-0 kubenswrapper[7454]: I0319 11:53:55.633611 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9glx9\" (UniqueName: \"kubernetes.io/projected/921791b6-51d2-4d7c-995b-488a37f85b3f-kube-api-access-9glx9\") pod \"controller-manager-6484d6777d-wmpqv\" (UID: \"921791b6-51d2-4d7c-995b-488a37f85b3f\") " pod="openshift-controller-manager/controller-manager-6484d6777d-wmpqv" Mar 19 11:53:56.120290 master-0 kubenswrapper[7454]: I0319 11:53:56.120204 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/921791b6-51d2-4d7c-995b-488a37f85b3f-client-ca\") pod \"controller-manager-6484d6777d-wmpqv\" (UID: \"921791b6-51d2-4d7c-995b-488a37f85b3f\") " pod="openshift-controller-manager/controller-manager-6484d6777d-wmpqv" Mar 19 11:53:56.120290 master-0 kubenswrapper[7454]: I0319 11:53:56.120284 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/921791b6-51d2-4d7c-995b-488a37f85b3f-serving-cert\") pod \"controller-manager-6484d6777d-wmpqv\" (UID: \"921791b6-51d2-4d7c-995b-488a37f85b3f\") " pod="openshift-controller-manager/controller-manager-6484d6777d-wmpqv" Mar 19 11:53:56.120673 master-0 kubenswrapper[7454]: E0319 11:53:56.120444 7454 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 19 11:53:56.120673 master-0 kubenswrapper[7454]: E0319 11:53:56.120574 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921791b6-51d2-4d7c-995b-488a37f85b3f-client-ca podName:921791b6-51d2-4d7c-995b-488a37f85b3f nodeName:}" failed. No retries permitted until 2026-03-19 11:53:57.12053663 +0000 UTC m=+6.751002773 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/921791b6-51d2-4d7c-995b-488a37f85b3f-client-ca") pod "controller-manager-6484d6777d-wmpqv" (UID: "921791b6-51d2-4d7c-995b-488a37f85b3f") : configmap "client-ca" not found Mar 19 11:53:56.120852 master-0 kubenswrapper[7454]: E0319 11:53:56.120772 7454 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 19 11:53:56.120960 master-0 kubenswrapper[7454]: E0319 11:53:56.120936 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/921791b6-51d2-4d7c-995b-488a37f85b3f-serving-cert podName:921791b6-51d2-4d7c-995b-488a37f85b3f nodeName:}" failed. No retries permitted until 2026-03-19 11:53:57.120902061 +0000 UTC m=+6.751368164 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/921791b6-51d2-4d7c-995b-488a37f85b3f-serving-cert") pod "controller-manager-6484d6777d-wmpqv" (UID: "921791b6-51d2-4d7c-995b-488a37f85b3f") : secret "serving-cert" not found Mar 19 11:53:56.627886 master-0 kubenswrapper[7454]: I0319 11:53:56.627833 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25be5572-c6f3-45df-8a9d-9d6f759200ac-serving-cert\") pod \"route-controller-manager-5d6d7c9966-r7q9d\" (UID: \"25be5572-c6f3-45df-8a9d-9d6f759200ac\") " pod="openshift-route-controller-manager/route-controller-manager-5d6d7c9966-r7q9d" Mar 19 11:53:56.628326 master-0 kubenswrapper[7454]: E0319 11:53:56.628012 7454 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 19 11:53:56.628326 master-0 kubenswrapper[7454]: E0319 11:53:56.628091 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/25be5572-c6f3-45df-8a9d-9d6f759200ac-serving-cert podName:25be5572-c6f3-45df-8a9d-9d6f759200ac nodeName:}" failed. No retries permitted until 2026-03-19 11:54:00.628072015 +0000 UTC m=+10.258537928 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/25be5572-c6f3-45df-8a9d-9d6f759200ac-serving-cert") pod "route-controller-manager-5d6d7c9966-r7q9d" (UID: "25be5572-c6f3-45df-8a9d-9d6f759200ac") : secret "serving-cert" not found Mar 19 11:53:56.628326 master-0 kubenswrapper[7454]: I0319 11:53:56.628084 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/25be5572-c6f3-45df-8a9d-9d6f759200ac-client-ca\") pod \"route-controller-manager-5d6d7c9966-r7q9d\" (UID: \"25be5572-c6f3-45df-8a9d-9d6f759200ac\") " pod="openshift-route-controller-manager/route-controller-manager-5d6d7c9966-r7q9d" Mar 19 11:53:56.628326 master-0 kubenswrapper[7454]: E0319 11:53:56.628153 7454 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 19 11:53:56.628326 master-0 kubenswrapper[7454]: E0319 11:53:56.628222 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/25be5572-c6f3-45df-8a9d-9d6f759200ac-client-ca podName:25be5572-c6f3-45df-8a9d-9d6f759200ac nodeName:}" failed. No retries permitted until 2026-03-19 11:54:00.62820841 +0000 UTC m=+10.258674323 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/25be5572-c6f3-45df-8a9d-9d6f759200ac-client-ca") pod "route-controller-manager-5d6d7c9966-r7q9d" (UID: "25be5572-c6f3-45df-8a9d-9d6f759200ac") : configmap "client-ca" not found Mar 19 11:53:56.638773 master-0 kubenswrapper[7454]: I0319 11:53:56.638533 7454 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7b28a5a-7aec-4894-b8e3-63a4104207f7" path="/var/lib/kubelet/pods/e7b28a5a-7aec-4894-b8e3-63a4104207f7/volumes" Mar 19 11:53:56.835072 master-0 kubenswrapper[7454]: I0319 11:53:56.834965 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-276t5" event={"ID":"06f67c28-34fd-4356-92f0-edd0986ad34e","Type":"ContainerStarted","Data":"14ef497dbefee5e45d62752b9c51471d9921659bcf2fde5c96bea24c927ad377"} Mar 19 11:53:57.135830 master-0 kubenswrapper[7454]: I0319 11:53:57.135183 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/921791b6-51d2-4d7c-995b-488a37f85b3f-client-ca\") pod \"controller-manager-6484d6777d-wmpqv\" (UID: \"921791b6-51d2-4d7c-995b-488a37f85b3f\") " pod="openshift-controller-manager/controller-manager-6484d6777d-wmpqv" Mar 19 11:53:57.135830 master-0 kubenswrapper[7454]: I0319 11:53:57.135228 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/921791b6-51d2-4d7c-995b-488a37f85b3f-serving-cert\") pod \"controller-manager-6484d6777d-wmpqv\" (UID: \"921791b6-51d2-4d7c-995b-488a37f85b3f\") " pod="openshift-controller-manager/controller-manager-6484d6777d-wmpqv" Mar 19 11:53:57.135830 master-0 kubenswrapper[7454]: E0319 11:53:57.135300 7454 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 19 11:53:57.135830 master-0 kubenswrapper[7454]: E0319 11:53:57.135371 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921791b6-51d2-4d7c-995b-488a37f85b3f-client-ca podName:921791b6-51d2-4d7c-995b-488a37f85b3f nodeName:}" failed. No retries permitted until 2026-03-19 11:53:59.135349773 +0000 UTC m=+8.765815706 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/921791b6-51d2-4d7c-995b-488a37f85b3f-client-ca") pod "controller-manager-6484d6777d-wmpqv" (UID: "921791b6-51d2-4d7c-995b-488a37f85b3f") : configmap "client-ca" not found Mar 19 11:53:57.135830 master-0 kubenswrapper[7454]: E0319 11:53:57.135406 7454 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 19 11:53:57.135830 master-0 kubenswrapper[7454]: E0319 11:53:57.135432 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/921791b6-51d2-4d7c-995b-488a37f85b3f-serving-cert podName:921791b6-51d2-4d7c-995b-488a37f85b3f nodeName:}" failed. No retries permitted until 2026-03-19 11:53:59.135424196 +0000 UTC m=+8.765890099 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/921791b6-51d2-4d7c-995b-488a37f85b3f-serving-cert") pod "controller-manager-6484d6777d-wmpqv" (UID: "921791b6-51d2-4d7c-995b-488a37f85b3f") : secret "serving-cert" not found Mar 19 11:53:58.598215 master-0 kubenswrapper[7454]: I0319 11:53:58.597821 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 11:53:58.843873 master-0 kubenswrapper[7454]: I0319 11:53:58.843677 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" event={"ID":"aef8e03f-0363-4e13-b7ca-4fa871d77c62","Type":"ContainerStarted","Data":"1dd2940995583a19410f74ab256d2834a4c83d4ba579f4590af5fea605682788"} Mar 19 11:53:58.844092 master-0 kubenswrapper[7454]: I0319 11:53:58.844067 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" Mar 19 11:53:58.845176 master-0 kubenswrapper[7454]: I0319 11:53:58.845144 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" event={"ID":"944eac68-e72b-4aed-b5dc-d7d9703178a3","Type":"ContainerStarted","Data":"bdf696c39db6c9beaa009fbd69e576a7d8040c99b8de9bd67204a49a32f0a1ba"} Mar 19 11:53:58.846501 master-0 kubenswrapper[7454]: I0319 11:53:58.846457 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-8487694857-99fgs" event={"ID":"d975e831-7348-41b9-9622-f4a503674c38","Type":"ContainerStarted","Data":"dac753d5ae65b711ee92b5cdd998147ca151ea3cdd525cf507d8d3e7bde8d7d9"} Mar 19 11:53:58.846558 master-0 kubenswrapper[7454]: I0319 11:53:58.846502 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-8487694857-99fgs" event={"ID":"d975e831-7348-41b9-9622-f4a503674c38","Type":"ContainerStarted","Data":"1ec3a5c11014607ae789800ea132be5e60dbfa80d0dca40cebcb0fc936cf3151"} Mar 19 11:53:58.848393 master-0 kubenswrapper[7454]: I0319 11:53:58.848347 7454 generic.go:334] "Generic (PLEG): container finished" podID="c2dbd8b3-0e02-4747-a166-80aa6a94b060" containerID="2457fc795f5fa01ac43b0f615c5a28446422acb5259e051c1c008795c84b021b" exitCode=0 Mar 19 11:53:58.848466 master-0 kubenswrapper[7454]: I0319 11:53:58.848405 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cg9pq" event={"ID":"c2dbd8b3-0e02-4747-a166-80aa6a94b060","Type":"ContainerDied","Data":"2457fc795f5fa01ac43b0f615c5a28446422acb5259e051c1c008795c84b021b"} Mar 19 11:53:58.849576 master-0 kubenswrapper[7454]: I0319 11:53:58.849542 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt" event={"ID":"661b8957-a890-4032-9e57-45e2e0b35249","Type":"ContainerStarted","Data":"48511943c8e0f8f2cb56a0dbe005be6b65b3cfab069bdef05e341ca254849587"} Mar 19 11:53:58.852826 master-0 kubenswrapper[7454]: I0319 11:53:58.852742 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x" event={"ID":"f08c5930-44f0-48e4-80dd-2563f2733b2f","Type":"ContainerStarted","Data":"41d4637f09562b9b79d583fb65c9acfd7f81986cff143ad48c1c09b266f39b23"} Mar 19 11:53:59.173191 master-0 kubenswrapper[7454]: I0319 11:53:59.172753 7454 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/921791b6-51d2-4d7c-995b-488a37f85b3f-client-ca\") pod \"controller-manager-6484d6777d-wmpqv\" (UID: \"921791b6-51d2-4d7c-995b-488a37f85b3f\") " pod="openshift-controller-manager/controller-manager-6484d6777d-wmpqv" Mar 19 11:53:59.173191 master-0 kubenswrapper[7454]: E0319 11:53:59.173063 7454 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 19 11:53:59.173464 master-0 kubenswrapper[7454]: E0319 11:53:59.173225 7454 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 19 11:53:59.173464 master-0 kubenswrapper[7454]: I0319 11:53:59.173125 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/921791b6-51d2-4d7c-995b-488a37f85b3f-serving-cert\") pod \"controller-manager-6484d6777d-wmpqv\" (UID: \"921791b6-51d2-4d7c-995b-488a37f85b3f\") " pod="openshift-controller-manager/controller-manager-6484d6777d-wmpqv" Mar 19 11:53:59.173464 master-0 kubenswrapper[7454]: E0319 11:53:59.173231 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921791b6-51d2-4d7c-995b-488a37f85b3f-client-ca podName:921791b6-51d2-4d7c-995b-488a37f85b3f nodeName:}" failed. No retries permitted until 2026-03-19 11:54:03.173201218 +0000 UTC m=+12.803667311 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/921791b6-51d2-4d7c-995b-488a37f85b3f-client-ca") pod "controller-manager-6484d6777d-wmpqv" (UID: "921791b6-51d2-4d7c-995b-488a37f85b3f") : configmap "client-ca" not found Mar 19 11:53:59.173464 master-0 kubenswrapper[7454]: E0319 11:53:59.173378 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/921791b6-51d2-4d7c-995b-488a37f85b3f-serving-cert podName:921791b6-51d2-4d7c-995b-488a37f85b3f nodeName:}" failed. No retries permitted until 2026-03-19 11:54:03.173354613 +0000 UTC m=+12.803820526 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/921791b6-51d2-4d7c-995b-488a37f85b3f-serving-cert") pod "controller-manager-6484d6777d-wmpqv" (UID: "921791b6-51d2-4d7c-995b-488a37f85b3f") : secret "serving-cert" not found Mar 19 11:53:59.470091 master-0 kubenswrapper[7454]: I0319 11:53:59.470040 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:53:59.476231 master-0 kubenswrapper[7454]: I0319 11:53:59.476184 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:53:59.477063 master-0 kubenswrapper[7454]: I0319 11:53:59.476605 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:59.477063 master-0 kubenswrapper[7454]: I0319 11:53:59.476641 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs\") pod \"network-metrics-daemon-6t6sn\" (UID: \"398bcaca-1bea-4633-a78f-717e3d015ddd\") " pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:53:59.477063 master-0 kubenswrapper[7454]: I0319 11:53:59.476658 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-92c5d\" (UID: \"7241bf11-192e-47db-9d80-2324938ed34c\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d" Mar 19 11:53:59.477063 master-0 kubenswrapper[7454]: I0319 11:53:59.476676 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:53:59.477063 master-0 kubenswrapper[7454]: E0319 11:53:59.476795 7454 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 19 11:53:59.477063 master-0 kubenswrapper[7454]: E0319 11:53:59.476845 7454 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 19 11:53:59.477063 master-0 kubenswrapper[7454]: E0319 11:53:59.476870 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert podName:63c12a89-1b49-4eba-8f5a-551b10d2246b nodeName:}" failed. No retries permitted until 2026-03-19 11:54:07.476858639 +0000 UTC m=+17.107324552 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-zfsqt" (UID: "63c12a89-1b49-4eba-8f5a-551b10d2246b") : secret "performance-addon-operator-webhook-cert" not found Mar 19 11:53:59.477063 master-0 kubenswrapper[7454]: I0319 11:53:59.476903 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-fz8cg\" (UID: \"806a4c30-7b93-4430-86da-f9e1f4f2d206\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg" Mar 19 11:53:59.477063 master-0 kubenswrapper[7454]: I0319 11:53:59.476931 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6" Mar 19 11:53:59.477063 master-0 kubenswrapper[7454]: I0319 11:53:59.476961 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls\") pod \"dns-operator-9c5679d8f-z6kvm\" (UID: \"ab54833d-e57b-479d-b171-68155f6566f1\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-z6kvm" Mar 19 11:53:59.477063 master-0 kubenswrapper[7454]: E0319 11:53:59.476797 7454 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 19 11:53:59.477063 master-0 kubenswrapper[7454]: E0319 11:53:59.476999 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls podName:63c12a89-1b49-4eba-8f5a-551b10d2246b nodeName:}" failed. No retries permitted until 2026-03-19 11:54:07.476983213 +0000 UTC m=+17.107449126 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-zfsqt" (UID: "63c12a89-1b49-4eba-8f5a-551b10d2246b") : secret "node-tuning-operator-tls" not found Mar 19 11:53:59.477063 master-0 kubenswrapper[7454]: E0319 11:53:59.476818 7454 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 19 11:53:59.477063 master-0 kubenswrapper[7454]: E0319 11:53:59.477019 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs podName:398bcaca-1bea-4633-a78f-717e3d015ddd nodeName:}" failed. No retries permitted until 2026-03-19 11:54:07.477012584 +0000 UTC m=+17.107478497 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs") pod "network-metrics-daemon-6t6sn" (UID: "398bcaca-1bea-4633-a78f-717e3d015ddd") : secret "metrics-daemon-secret" not found Mar 19 11:53:59.477063 master-0 kubenswrapper[7454]: E0319 11:53:59.477039 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls podName:7241bf11-192e-47db-9d80-2324938ed34c nodeName:}" failed. No retries permitted until 2026-03-19 11:54:07.477026894 +0000 UTC m=+17.107492887 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-92c5d" (UID: "7241bf11-192e-47db-9d80-2324938ed34c") : secret "cluster-monitoring-operator-tls" not found Mar 19 11:53:59.479127 master-0 kubenswrapper[7454]: E0319 11:53:59.477095 7454 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 19 11:53:59.479127 master-0 kubenswrapper[7454]: E0319 11:53:59.477131 7454 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 19 11:53:59.479127 master-0 kubenswrapper[7454]: E0319 11:53:59.477168 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls podName:82b98dca-59f9-42be-94ca-4a2a2b6fea0f nodeName:}" failed. No retries permitted until 2026-03-19 11:54:07.477149278 +0000 UTC m=+17.107615191 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-g6sn6" (UID: "82b98dca-59f9-42be-94ca-4a2a2b6fea0f") : secret "image-registry-operator-tls" not found Mar 19 11:53:59.479127 master-0 kubenswrapper[7454]: E0319 11:53:59.477187 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs podName:806a4c30-7b93-4430-86da-f9e1f4f2d206 nodeName:}" failed. No retries permitted until 2026-03-19 11:54:07.477179939 +0000 UTC m=+17.107645852 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-fz8cg" (UID: "806a4c30-7b93-4430-86da-f9e1f4f2d206") : secret "multus-admission-controller-secret" not found Mar 19 11:53:59.479127 master-0 kubenswrapper[7454]: E0319 11:53:59.477227 7454 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 19 11:53:59.479127 master-0 kubenswrapper[7454]: E0319 11:53:59.477255 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls podName:ab54833d-e57b-479d-b171-68155f6566f1 nodeName:}" failed. No retries permitted until 2026-03-19 11:54:07.477247201 +0000 UTC m=+17.107713114 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls") pod "dns-operator-9c5679d8f-z6kvm" (UID: "ab54833d-e57b-479d-b171-68155f6566f1") : secret "metrics-tls" not found Mar 19 11:53:59.479127 master-0 kubenswrapper[7454]: E0319 11:53:59.477262 7454 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 19 11:53:59.479127 master-0 kubenswrapper[7454]: E0319 11:53:59.477299 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert podName:beb562de-402b-4d9f-b5ed-090b60847a95 nodeName:}" failed. No retries permitted until 2026-03-19 11:54:07.477292443 +0000 UTC m=+17.107758356 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-6j2nj" (UID: "beb562de-402b-4d9f-b5ed-090b60847a95") : secret "package-server-manager-serving-cert" not found Mar 19 11:53:59.479127 master-0 kubenswrapper[7454]: I0319 11:53:59.477209 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-6j2nj\" (UID: \"beb562de-402b-4d9f-b5ed-090b60847a95\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj" Mar 19 11:53:59.479127 master-0 kubenswrapper[7454]: I0319 11:53:59.477366 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v" Mar 19 11:53:59.479127 master-0 kubenswrapper[7454]: E0319 11:53:59.477451 7454 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 19 11:53:59.479127 master-0 kubenswrapper[7454]: E0319 11:53:59.477474 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert podName:85912908-c447-4868-871b-82c5eadbfdbe nodeName:}" failed. No retries permitted until 2026-03-19 11:54:07.477467868 +0000 UTC m=+17.107933781 (durationBeforeRetry 8s). 
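Note the `durationBeforeRetry` progression across these entries: 2s at 11:53:57, 4s at 11:53:59, 8s once those retries fail, and 16s from 11:54:07 onward. That is consistent with `nestedpendingoperations` applying an exponential backoff per volume operation: each failure doubles the wait before the next attempt, up to a cap. A hypothetical sketch of that schedule (the type, names, and cap value are mine, not kubelet source):

```go
package main

import (
	"fmt"
	"time"
)

// retryBackoff reproduces the delay sequence recorded in this log
// (2s, 4s, 8s, 16s, ...): start at an initial delay and double on
// every failure until a cap is reached.
type retryBackoff struct {
	initial, max, next time.Duration
}

func (b *retryBackoff) nextDelay() time.Duration {
	switch {
	case b.next == 0:
		b.next = b.initial
	case b.next*2 <= b.max:
		b.next *= 2
	default:
		b.next = b.max
	}
	return b.next
}

func main() {
	b := &retryBackoff{initial: 2 * time.Second, max: 2 * time.Minute}
	for i := 0; i < 5; i++ {
		fmt.Printf("durationBeforeRetry %v\n", b.nextDelay()) // 2s 4s 8s 16s 32s
	}
}
```

A mount that eventually succeeds leaves the pending set, which is why volumes that recover later in the log stop producing these backoff entries.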
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert") pod "cluster-version-operator-56d8475767-gjj5v" (UID: "85912908-c447-4868-871b-82c5eadbfdbe") : secret "cluster-version-operator-serving-cert" not found Mar 19 11:53:59.479127 master-0 kubenswrapper[7454]: I0319 11:53:59.477492 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw" Mar 19 11:53:59.479127 master-0 kubenswrapper[7454]: I0319 11:53:59.477525 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-pr7gk\" (UID: \"b0f5939c-48b1-4d6c-9712-9128a78d603b\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 11:53:59.479127 master-0 kubenswrapper[7454]: E0319 11:53:59.477546 7454 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Mar 19 11:53:59.479127 master-0 kubenswrapper[7454]: E0319 11:53:59.477589 7454 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 19 11:53:59.479127 master-0 kubenswrapper[7454]: I0319 11:53:59.477558 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" Mar 19 11:53:59.479127 master-0 kubenswrapper[7454]: E0319 11:53:59.477598 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls podName:d3541cbe-3be0-40d3-89d2-b5937b6a8f47 nodeName:}" failed. No retries permitted until 2026-03-19 11:54:07.477587392 +0000 UTC m=+17.108053375 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls") pod "machine-config-operator-84d549f6d5-lswqw" (UID: "d3541cbe-3be0-40d3-89d2-b5937b6a8f47") : secret "mco-proxy-tls" not found Mar 19 11:53:59.479127 master-0 kubenswrapper[7454]: E0319 11:53:59.477768 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls podName:b80027fd-7b39-477a-a337-ff9bb08e7eeb nodeName:}" failed. No retries permitted until 2026-03-19 11:54:07.477760587 +0000 UTC m=+17.108226500 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls") pod "ingress-operator-66b84d69b-btppx" (UID: "b80027fd-7b39-477a-a337-ff9bb08e7eeb") : secret "metrics-tls" not found Mar 19 11:53:59.479127 master-0 kubenswrapper[7454]: I0319 11:53:59.477781 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:59.479127 master-0 kubenswrapper[7454]: E0319 11:53:59.477636 7454 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 19 11:53:59.479127 master-0 kubenswrapper[7454]: E0319 11:53:59.477840 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics podName:b0f5939c-48b1-4d6c-9712-9128a78d603b nodeName:}" failed. No retries permitted until 2026-03-19 11:54:07.477832719 +0000 UTC m=+17.108298632 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-pr7gk" (UID: "b0f5939c-48b1-4d6c-9712-9128a78d603b") : secret "marketplace-operator-metrics" not found Mar 19 11:53:59.479127 master-0 kubenswrapper[7454]: I0319 11:53:59.477858 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert\") pod \"catalog-operator-68f85b4d6c-2trz4\" (UID: \"bdcdb23d-ef1f-45e2-b9ac-7abf707637b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4" Mar 19 11:53:59.479127 master-0 kubenswrapper[7454]: E0319 11:53:59.477868 7454 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 19 11:53:59.479127 master-0 kubenswrapper[7454]: I0319 11:53:59.477884 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:53:59.479127 master-0 kubenswrapper[7454]: I0319 11:53:59.477903 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert\") pod \"olm-operator-5c9796789-8cldl\" (UID: \"87a3f546-e1c1-42a1-b80e-d45b6d5c0a04\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl" Mar 19 11:53:59.479127 master-0 kubenswrapper[7454]: E0319 11:53:59.477949 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert podName:19de6601-10d4-4112-a21f-0398d2b160d1 nodeName:}" failed. No retries permitted until 2026-03-19 11:54:07.477941582 +0000 UTC m=+17.108407585 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert") pod "cluster-baremetal-operator-6f69995874-ftml6" (UID: "19de6601-10d4-4112-a21f-0398d2b160d1") : secret "cluster-baremetal-webhook-server-cert" not found Mar 19 11:53:59.479127 master-0 kubenswrapper[7454]: E0319 11:53:59.477965 7454 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 19 11:53:59.479127 master-0 kubenswrapper[7454]: E0319 11:53:59.477984 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert podName:87a3f546-e1c1-42a1-b80e-d45b6d5c0a04 nodeName:}" failed. No retries permitted until 2026-03-19 11:54:07.477978834 +0000 UTC m=+17.108444747 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert") pod "olm-operator-5c9796789-8cldl" (UID: "87a3f546-e1c1-42a1-b80e-d45b6d5c0a04") : secret "olm-operator-serving-cert" not found Mar 19 11:53:59.479127 master-0 kubenswrapper[7454]: E0319 11:53:59.477989 7454 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 19 11:53:59.479127 master-0 kubenswrapper[7454]: E0319 11:53:59.478026 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls podName:19de6601-10d4-4112-a21f-0398d2b160d1 nodeName:}" failed. No retries permitted until 2026-03-19 11:54:07.478016105 +0000 UTC m=+17.108482098 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-ftml6" (UID: "19de6601-10d4-4112-a21f-0398d2b160d1") : secret "cluster-baremetal-operator-tls" not found Mar 19 11:53:59.479127 master-0 kubenswrapper[7454]: E0319 11:53:59.478026 7454 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 19 11:53:59.479127 master-0 kubenswrapper[7454]: E0319 11:53:59.478058 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert podName:bdcdb23d-ef1f-45e2-b9ac-7abf707637b6 nodeName:}" failed. No retries permitted until 2026-03-19 11:54:07.478048716 +0000 UTC m=+17.108514739 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert") pod "catalog-operator-68f85b4d6c-2trz4" (UID: "bdcdb23d-ef1f-45e2-b9ac-7abf707637b6") : secret "catalog-operator-serving-cert" not found Mar 19 11:53:59.551638 master-0 kubenswrapper[7454]: I0319 11:53:59.551573 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:53:59.551883 master-0 kubenswrapper[7454]: I0319 11:53:59.551773 7454 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 11:53:59.551883 master-0 kubenswrapper[7454]: I0319 11:53:59.551782 7454 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 11:53:59.581619 master-0 kubenswrapper[7454]: I0319 11:53:59.581531 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:54:00.631641 master-0 kubenswrapper[7454]: I0319 11:54:00.631554 7454 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 11:54:00.725722 master-0 kubenswrapper[7454]: I0319 11:54:00.725647 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/25be5572-c6f3-45df-8a9d-9d6f759200ac-client-ca\") pod \"route-controller-manager-5d6d7c9966-r7q9d\" (UID: \"25be5572-c6f3-45df-8a9d-9d6f759200ac\") " pod="openshift-route-controller-manager/route-controller-manager-5d6d7c9966-r7q9d" Mar 19 11:54:00.726246 master-0 kubenswrapper[7454]: E0319 11:54:00.726214 7454 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 19 11:54:00.726324 master-0 kubenswrapper[7454]: E0319 11:54:00.726304 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/25be5572-c6f3-45df-8a9d-9d6f759200ac-client-ca podName:25be5572-c6f3-45df-8a9d-9d6f759200ac nodeName:}" failed. No retries permitted until 2026-03-19 11:54:08.726288078 +0000 UTC m=+18.356753991 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/25be5572-c6f3-45df-8a9d-9d6f759200ac-client-ca") pod "route-controller-manager-5d6d7c9966-r7q9d" (UID: "25be5572-c6f3-45df-8a9d-9d6f759200ac") : configmap "client-ca" not found Mar 19 11:54:00.726381 master-0 kubenswrapper[7454]: I0319 11:54:00.726308 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25be5572-c6f3-45df-8a9d-9d6f759200ac-serving-cert\") pod \"route-controller-manager-5d6d7c9966-r7q9d\" (UID: \"25be5572-c6f3-45df-8a9d-9d6f759200ac\") " pod="openshift-route-controller-manager/route-controller-manager-5d6d7c9966-r7q9d" Mar 19 11:54:00.728342 master-0 kubenswrapper[7454]: E0319 11:54:00.727094 7454 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 19 11:54:00.728342 master-0 kubenswrapper[7454]: E0319 11:54:00.727131 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/25be5572-c6f3-45df-8a9d-9d6f759200ac-serving-cert podName:25be5572-c6f3-45df-8a9d-9d6f759200ac nodeName:}" failed. No retries permitted until 2026-03-19 11:54:08.727123095 +0000 UTC m=+18.357589008 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/25be5572-c6f3-45df-8a9d-9d6f759200ac-serving-cert") pod "route-controller-manager-5d6d7c9966-r7q9d" (UID: "25be5572-c6f3-45df-8a9d-9d6f759200ac") : secret "serving-cert" not found Mar 19 11:54:01.502686 master-0 kubenswrapper[7454]: I0319 11:54:01.502424 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-79bc6b8d76-5rbp5"] Mar 19 11:54:01.503141 master-0 kubenswrapper[7454]: I0319 11:54:01.503118 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-79bc6b8d76-5rbp5" Mar 19 11:54:01.505052 master-0 kubenswrapper[7454]: I0319 11:54:01.505025 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 19 11:54:01.505175 master-0 kubenswrapper[7454]: I0319 11:54:01.505119 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 19 11:54:01.506444 master-0 kubenswrapper[7454]: I0319 11:54:01.506405 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 19 11:54:01.509415 master-0 kubenswrapper[7454]: I0319 11:54:01.509363 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 19 11:54:01.511687 master-0 kubenswrapper[7454]: I0319 11:54:01.511646 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-79bc6b8d76-5rbp5"] Mar 19 11:54:01.538476 master-0 kubenswrapper[7454]: I0319 11:54:01.538407 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6863b35c-44ac-4333-97b5-e8e38b440a20-signing-cabundle\") pod \"service-ca-79bc6b8d76-5rbp5\" (UID: \"6863b35c-44ac-4333-97b5-e8e38b440a20\") " pod="openshift-service-ca/service-ca-79bc6b8d76-5rbp5" Mar 19 11:54:01.538719 master-0 kubenswrapper[7454]: I0319 11:54:01.538498 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddl8k\" (UniqueName: \"kubernetes.io/projected/6863b35c-44ac-4333-97b5-e8e38b440a20-kube-api-access-ddl8k\") pod \"service-ca-79bc6b8d76-5rbp5\" (UID: \"6863b35c-44ac-4333-97b5-e8e38b440a20\") " pod="openshift-service-ca/service-ca-79bc6b8d76-5rbp5" Mar 19 11:54:01.538719 master-0 kubenswrapper[7454]: I0319 11:54:01.538562 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6863b35c-44ac-4333-97b5-e8e38b440a20-signing-key\") pod \"service-ca-79bc6b8d76-5rbp5\" (UID: \"6863b35c-44ac-4333-97b5-e8e38b440a20\") " pod="openshift-service-ca/service-ca-79bc6b8d76-5rbp5" Mar 19 11:54:01.639774 master-0 kubenswrapper[7454]: I0319 11:54:01.639643 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6863b35c-44ac-4333-97b5-e8e38b440a20-signing-cabundle\") pod \"service-ca-79bc6b8d76-5rbp5\" (UID: \"6863b35c-44ac-4333-97b5-e8e38b440a20\") " pod="openshift-service-ca/service-ca-79bc6b8d76-5rbp5" Mar 19 11:54:01.639774 master-0 kubenswrapper[7454]: I0319 11:54:01.639774 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddl8k\" (UniqueName: 
\"kubernetes.io/projected/6863b35c-44ac-4333-97b5-e8e38b440a20-kube-api-access-ddl8k\") pod \"service-ca-79bc6b8d76-5rbp5\" (UID: \"6863b35c-44ac-4333-97b5-e8e38b440a20\") " pod="openshift-service-ca/service-ca-79bc6b8d76-5rbp5" Mar 19 11:54:01.640404 master-0 kubenswrapper[7454]: I0319 11:54:01.639856 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6863b35c-44ac-4333-97b5-e8e38b440a20-signing-key\") pod \"service-ca-79bc6b8d76-5rbp5\" (UID: \"6863b35c-44ac-4333-97b5-e8e38b440a20\") " pod="openshift-service-ca/service-ca-79bc6b8d76-5rbp5" Mar 19 11:54:01.640635 master-0 kubenswrapper[7454]: I0319 11:54:01.640589 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6863b35c-44ac-4333-97b5-e8e38b440a20-signing-cabundle\") pod \"service-ca-79bc6b8d76-5rbp5\" (UID: \"6863b35c-44ac-4333-97b5-e8e38b440a20\") " pod="openshift-service-ca/service-ca-79bc6b8d76-5rbp5" Mar 19 11:54:01.655984 master-0 kubenswrapper[7454]: I0319 11:54:01.655532 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6863b35c-44ac-4333-97b5-e8e38b440a20-signing-key\") pod \"service-ca-79bc6b8d76-5rbp5\" (UID: \"6863b35c-44ac-4333-97b5-e8e38b440a20\") " pod="openshift-service-ca/service-ca-79bc6b8d76-5rbp5" Mar 19 11:54:01.663214 master-0 kubenswrapper[7454]: I0319 11:54:01.662949 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddl8k\" (UniqueName: \"kubernetes.io/projected/6863b35c-44ac-4333-97b5-e8e38b440a20-kube-api-access-ddl8k\") pod \"service-ca-79bc6b8d76-5rbp5\" (UID: \"6863b35c-44ac-4333-97b5-e8e38b440a20\") " pod="openshift-service-ca/service-ca-79bc6b8d76-5rbp5" Mar 19 11:54:01.820499 master-0 kubenswrapper[7454]: I0319 11:54:01.820368 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-79bc6b8d76-5rbp5" Mar 19 11:54:01.881837 master-0 kubenswrapper[7454]: I0319 11:54:01.878463 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" Mar 19 11:54:02.144980 master-0 kubenswrapper[7454]: I0319 11:54:02.141581 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-79bc6b8d76-5rbp5"] Mar 19 11:54:02.716055 master-0 kubenswrapper[7454]: W0319 11:54:02.716005 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6863b35c_44ac_4333_97b5_e8e38b440a20.slice/crio-5a539aaaf2dd4db935a04de17d4edc2ce062fa7a5a29f257bfd8c8188731698f WatchSource:0}: Error finding container 5a539aaaf2dd4db935a04de17d4edc2ce062fa7a5a29f257bfd8c8188731698f: Status 404 returned error can't find the container with id 5a539aaaf2dd4db935a04de17d4edc2ce062fa7a5a29f257bfd8c8188731698f Mar 19 11:54:03.269848 master-0 kubenswrapper[7454]: I0319 11:54:03.266876 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/921791b6-51d2-4d7c-995b-488a37f85b3f-client-ca\") pod \"controller-manager-6484d6777d-wmpqv\" (UID: \"921791b6-51d2-4d7c-995b-488a37f85b3f\") " pod="openshift-controller-manager/controller-manager-6484d6777d-wmpqv" Mar 19 11:54:03.269848 master-0 kubenswrapper[7454]: I0319 11:54:03.266946 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/921791b6-51d2-4d7c-995b-488a37f85b3f-serving-cert\") pod \"controller-manager-6484d6777d-wmpqv\" (UID: \"921791b6-51d2-4d7c-995b-488a37f85b3f\") " pod="openshift-controller-manager/controller-manager-6484d6777d-wmpqv" Mar 19 11:54:03.269848 master-0 kubenswrapper[7454]: E0319 11:54:03.267311 7454 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 19 11:54:03.269848 master-0 kubenswrapper[7454]: E0319 11:54:03.268204 7454 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 19 11:54:03.269848 master-0 kubenswrapper[7454]: E0319 11:54:03.268277 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921791b6-51d2-4d7c-995b-488a37f85b3f-client-ca podName:921791b6-51d2-4d7c-995b-488a37f85b3f nodeName:}" failed. No retries permitted until 2026-03-19 11:54:11.268239364 +0000 UTC m=+20.898705277 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/921791b6-51d2-4d7c-995b-488a37f85b3f-client-ca") pod "controller-manager-6484d6777d-wmpqv" (UID: "921791b6-51d2-4d7c-995b-488a37f85b3f") : configmap "client-ca" not found Mar 19 11:54:03.272161 master-0 kubenswrapper[7454]: E0319 11:54:03.270708 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/921791b6-51d2-4d7c-995b-488a37f85b3f-serving-cert podName:921791b6-51d2-4d7c-995b-488a37f85b3f nodeName:}" failed. No retries permitted until 2026-03-19 11:54:11.270682532 +0000 UTC m=+20.901148455 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/921791b6-51d2-4d7c-995b-488a37f85b3f-serving-cert") pod "controller-manager-6484d6777d-wmpqv" (UID: "921791b6-51d2-4d7c-995b-488a37f85b3f") : secret "serving-cert" not found Mar 19 11:54:03.648599 master-0 kubenswrapper[7454]: I0319 11:54:03.648366 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-5rbp5" event={"ID":"6863b35c-44ac-4333-97b5-e8e38b440a20","Type":"ContainerStarted","Data":"cfe159d2d277cacc2f38fd5cc5b8a757b1c60decd10b533a7f9dbe0b1b48403c"} Mar 19 11:54:03.648599 master-0 kubenswrapper[7454]: I0319 11:54:03.648464 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-5rbp5" event={"ID":"6863b35c-44ac-4333-97b5-e8e38b440a20","Type":"ContainerStarted","Data":"5a539aaaf2dd4db935a04de17d4edc2ce062fa7a5a29f257bfd8c8188731698f"} Mar 19 11:54:03.652249 master-0 kubenswrapper[7454]: I0319 11:54:03.651354 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cg9pq" event={"ID":"c2dbd8b3-0e02-4747-a166-80aa6a94b060","Type":"ContainerStarted","Data":"697b28a330e52c45053a0bb858d1df6049dfd854ab75b1f95587cbc7874588cd"} Mar 19 11:54:03.671551 master-0 kubenswrapper[7454]: I0319 11:54:03.671431 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-79bc6b8d76-5rbp5" podStartSLOduration=2.671395381 podStartE2EDuration="2.671395381s" podCreationTimestamp="2026-03-19 11:54:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 11:54:03.668275065 +0000 UTC m=+13.298740978" watchObservedRunningTime="2026-03-19 11:54:03.671395381 +0000 UTC m=+13.301861334" Mar 19 11:54:04.432961 master-0 kubenswrapper[7454]: I0319 11:54:04.432580 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:54:04.433813 master-0 kubenswrapper[7454]: I0319 11:54:04.433056 7454 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 11:54:04.456260 master-0 kubenswrapper[7454]: I0319 11:54:04.456213 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 11:54:07.508140 master-0 kubenswrapper[7454]: I0319 11:54:07.508077 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:54:07.508954 master-0 kubenswrapper[7454]: I0319 11:54:07.508155 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert\") pod \"olm-operator-5c9796789-8cldl\" (UID: \"87a3f546-e1c1-42a1-b80e-d45b6d5c0a04\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl" Mar 19 11:54:07.508954 master-0 kubenswrapper[7454]: I0319 11:54:07.508180 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:54:07.508954 master-0 kubenswrapper[7454]: E0319 11:54:07.508329 7454 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 19 11:54:07.508954 master-0 kubenswrapper[7454]: E0319 11:54:07.508439 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert podName:87a3f546-e1c1-42a1-b80e-d45b6d5c0a04 nodeName:}" failed. No retries permitted until 2026-03-19 11:54:23.508411177 +0000 UTC m=+33.138877120 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert") pod "olm-operator-5c9796789-8cldl" (UID: "87a3f546-e1c1-42a1-b80e-d45b6d5c0a04") : secret "olm-operator-serving-cert" not found Mar 19 11:54:07.508954 master-0 kubenswrapper[7454]: E0319 11:54:07.508438 7454 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 19 11:54:07.508954 master-0 kubenswrapper[7454]: E0319 11:54:07.508479 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls podName:19de6601-10d4-4112-a21f-0398d2b160d1 nodeName:}" failed. No retries permitted until 2026-03-19 11:54:23.508470159 +0000 UTC m=+33.138936092 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-ftml6" (UID: "19de6601-10d4-4112-a21f-0398d2b160d1") : secret "cluster-baremetal-operator-tls" not found Mar 19 11:54:07.508954 master-0 kubenswrapper[7454]: I0319 11:54:07.508559 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs\") pod \"network-metrics-daemon-6t6sn\" (UID: \"398bcaca-1bea-4633-a78f-717e3d015ddd\") " pod="openshift-multus/network-metrics-daemon-6t6sn" Mar 19 11:54:07.508954 master-0 kubenswrapper[7454]: I0319 11:54:07.508593 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:54:07.508954 master-0 kubenswrapper[7454]: I0319 11:54:07.508618 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-92c5d\" (UID: \"7241bf11-192e-47db-9d80-2324938ed34c\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d" Mar 19 11:54:07.508954 master-0 kubenswrapper[7454]: E0319 11:54:07.508672 7454 secret.go:189] Couldn't get 
secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 19 11:54:07.508954 master-0 kubenswrapper[7454]: E0319 11:54:07.508697 7454 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 19 11:54:07.508954 master-0 kubenswrapper[7454]: E0319 11:54:07.508725 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs podName:398bcaca-1bea-4633-a78f-717e3d015ddd nodeName:}" failed. No retries permitted until 2026-03-19 11:54:23.508709936 +0000 UTC m=+33.139175849 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs") pod "network-metrics-daemon-6t6sn" (UID: "398bcaca-1bea-4633-a78f-717e3d015ddd") : secret "metrics-daemon-secret" not found Mar 19 11:54:07.508954 master-0 kubenswrapper[7454]: E0319 11:54:07.508737 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls podName:7241bf11-192e-47db-9d80-2324938ed34c nodeName:}" failed. No retries permitted until 2026-03-19 11:54:23.508732267 +0000 UTC m=+33.139198170 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-92c5d" (UID: "7241bf11-192e-47db-9d80-2324938ed34c") : secret "cluster-monitoring-operator-tls" not found Mar 19 11:54:07.508954 master-0 kubenswrapper[7454]: I0319 11:54:07.508825 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-fz8cg\" (UID: \"806a4c30-7b93-4430-86da-f9e1f4f2d206\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg" Mar 19 11:54:07.508954 master-0 kubenswrapper[7454]: I0319 11:54:07.508879 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6" Mar 19 11:54:07.508954 master-0 kubenswrapper[7454]: I0319 11:54:07.508925 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls\") pod \"dns-operator-9c5679d8f-z6kvm\" (UID: \"ab54833d-e57b-479d-b171-68155f6566f1\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-z6kvm" Mar 19 11:54:07.509486 master-0 kubenswrapper[7454]: E0319 11:54:07.508986 7454 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 19 11:54:07.509486 master-0 kubenswrapper[7454]: E0319 11:54:07.509046 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs podName:806a4c30-7b93-4430-86da-f9e1f4f2d206 nodeName:}" failed. 
No retries permitted until 2026-03-19 11:54:23.509027186 +0000 UTC m=+33.139493169 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-fz8cg" (UID: "806a4c30-7b93-4430-86da-f9e1f4f2d206") : secret "multus-admission-controller-secret" not found Mar 19 11:54:07.509486 master-0 kubenswrapper[7454]: I0319 11:54:07.509088 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-6j2nj\" (UID: \"beb562de-402b-4d9f-b5ed-090b60847a95\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj" Mar 19 11:54:07.509486 master-0 kubenswrapper[7454]: I0319 11:54:07.509153 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v" Mar 19 11:54:07.509486 master-0 kubenswrapper[7454]: I0319 11:54:07.509213 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw" Mar 19 11:54:07.509486 master-0 kubenswrapper[7454]: I0319 11:54:07.509254 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" Mar 19 11:54:07.509486 master-0 kubenswrapper[7454]: I0319 11:54:07.509289 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-pr7gk\" (UID: \"b0f5939c-48b1-4d6c-9712-9128a78d603b\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 11:54:07.509486 master-0 kubenswrapper[7454]: I0319 11:54:07.509330 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 11:54:07.509486 master-0 kubenswrapper[7454]: I0319 11:54:07.509384 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert\") pod \"catalog-operator-68f85b4d6c-2trz4\" (UID: \"bdcdb23d-ef1f-45e2-b9ac-7abf707637b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4" Mar 19 11:54:07.509486 master-0 kubenswrapper[7454]: E0319 
11:54:07.509483 7454 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 19 11:54:07.509839 master-0 kubenswrapper[7454]: E0319 11:54:07.509522 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert podName:bdcdb23d-ef1f-45e2-b9ac-7abf707637b6 nodeName:}" failed. No retries permitted until 2026-03-19 11:54:23.509507801 +0000 UTC m=+33.139973794 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert") pod "catalog-operator-68f85b4d6c-2trz4" (UID: "bdcdb23d-ef1f-45e2-b9ac-7abf707637b6") : secret "catalog-operator-serving-cert" not found Mar 19 11:54:07.509839 master-0 kubenswrapper[7454]: E0319 11:54:07.509593 7454 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 19 11:54:07.509839 master-0 kubenswrapper[7454]: E0319 11:54:07.509633 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert podName:beb562de-402b-4d9f-b5ed-090b60847a95 nodeName:}" failed. No retries permitted until 2026-03-19 11:54:23.509620614 +0000 UTC m=+33.140086537 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-6j2nj" (UID: "beb562de-402b-4d9f-b5ed-090b60847a95") : secret "package-server-manager-serving-cert" not found Mar 19 11:54:07.509839 master-0 kubenswrapper[7454]: E0319 11:54:07.509698 7454 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 19 11:54:07.509839 master-0 kubenswrapper[7454]: E0319 11:54:07.509727 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics podName:b0f5939c-48b1-4d6c-9712-9128a78d603b nodeName:}" failed. No retries permitted until 2026-03-19 11:54:23.509718867 +0000 UTC m=+33.140184790 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-pr7gk" (UID: "b0f5939c-48b1-4d6c-9712-9128a78d603b") : secret "marketplace-operator-metrics" not found Mar 19 11:54:07.509839 master-0 kubenswrapper[7454]: E0319 11:54:07.509768 7454 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 19 11:54:07.510060 master-0 kubenswrapper[7454]: E0319 11:54:07.509790 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert podName:19de6601-10d4-4112-a21f-0398d2b160d1 nodeName:}" failed. No retries permitted until 2026-03-19 11:54:23.509783249 +0000 UTC m=+33.140249182 (durationBeforeRetry 16s). 
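At 11:54:07 the 8s retries fire and several of the blocked operator-cert mounts finally succeed just below: `metrics-tls` for the DNS operator, the cluster-version-operator `serving-cert`, both node-tuning certificates, and `image-registry-operator-tls`. Presumably service-ca, running since roughly 11:54:03, has begun minting the serving-cert secrets those mounts were waiting on; the secrets still missing at this point are rescheduled with 16s delays. When debugging such a hang by hand, the condition worth watching is simply whether the secret exists yet; a sketch with client-go (namespace and secret name taken from the log, interval and timeout arbitrary):

```go
package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForSecret polls until the named secret exists, which is the same
// condition the kubelet's mount retries above are waiting on.
func waitForSecret(ctx context.Context, client kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := client.CoreV1().Secrets(ns).Get(ctx, name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil // keep polling
			}
			return err == nil, err
		})
}

func main() {
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(),
		&clientcmd.ConfigOverrides{},
	).ClientConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForSecret(context.Background(), client,
		"openshift-machine-config-operator", "mco-proxy-tls"); err != nil {
		panic(err)
	}
	fmt.Println("secret present; the kubelet's next retry will mount it")
}
```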
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert") pod "cluster-baremetal-operator-6f69995874-ftml6" (UID: "19de6601-10d4-4112-a21f-0398d2b160d1") : secret "cluster-baremetal-webhook-server-cert" not found Mar 19 11:54:07.510060 master-0 kubenswrapper[7454]: E0319 11:54:07.509698 7454 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 19 11:54:07.510060 master-0 kubenswrapper[7454]: E0319 11:54:07.509910 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls podName:b80027fd-7b39-477a-a337-ff9bb08e7eeb nodeName:}" failed. No retries permitted until 2026-03-19 11:54:23.509900683 +0000 UTC m=+33.140366616 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls") pod "ingress-operator-66b84d69b-btppx" (UID: "b80027fd-7b39-477a-a337-ff9bb08e7eeb") : secret "metrics-tls" not found Mar 19 11:54:07.510060 master-0 kubenswrapper[7454]: E0319 11:54:07.509963 7454 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Mar 19 11:54:07.510060 master-0 kubenswrapper[7454]: E0319 11:54:07.509987 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls podName:d3541cbe-3be0-40d3-89d2-b5937b6a8f47 nodeName:}" failed. No retries permitted until 2026-03-19 11:54:23.509979935 +0000 UTC m=+33.140445858 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls") pod "machine-config-operator-84d549f6d5-lswqw" (UID: "d3541cbe-3be0-40d3-89d2-b5937b6a8f47") : secret "mco-proxy-tls" not found Mar 19 11:54:07.514822 master-0 kubenswrapper[7454]: I0319 11:54:07.514653 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls\") pod \"dns-operator-9c5679d8f-z6kvm\" (UID: \"ab54833d-e57b-479d-b171-68155f6566f1\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-z6kvm" Mar 19 11:54:07.515044 master-0 kubenswrapper[7454]: I0319 11:54:07.514791 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert\") pod \"cluster-version-operator-56d8475767-gjj5v\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v" Mar 19 11:54:07.515553 master-0 kubenswrapper[7454]: I0319 11:54:07.515410 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:54:07.518077 master-0 kubenswrapper[7454]: I0319 11:54:07.517023 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls\") pod 
\"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:54:07.518077 master-0 kubenswrapper[7454]: I0319 11:54:07.517446 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6" Mar 19 11:54:07.790217 master-0 kubenswrapper[7454]: I0319 11:54:07.790079 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-9c5679d8f-z6kvm" Mar 19 11:54:07.790217 master-0 kubenswrapper[7454]: I0319 11:54:07.790085 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6" Mar 19 11:54:07.793326 master-0 kubenswrapper[7454]: I0319 11:54:07.793274 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 11:54:07.822276 master-0 kubenswrapper[7454]: I0319 11:54:07.820322 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v" Mar 19 11:54:07.865975 master-0 kubenswrapper[7454]: I0319 11:54:07.865912 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 19 11:54:07.866993 master-0 kubenswrapper[7454]: I0319 11:54:07.866977 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 19 11:54:07.874145 master-0 kubenswrapper[7454]: I0319 11:54:07.872869 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 19 11:54:07.874145 master-0 kubenswrapper[7454]: I0319 11:54:07.873401 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 19 11:54:07.922481 master-0 kubenswrapper[7454]: I0319 11:54:07.922439 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/26b64f77-181a-4129-a28a-3bfdf7eac7ae-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"26b64f77-181a-4129-a28a-3bfdf7eac7ae\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 19 11:54:07.922589 master-0 kubenswrapper[7454]: I0319 11:54:07.922513 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/26b64f77-181a-4129-a28a-3bfdf7eac7ae-var-lock\") pod \"installer-1-master-0\" (UID: \"26b64f77-181a-4129-a28a-3bfdf7eac7ae\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 19 11:54:07.922719 master-0 kubenswrapper[7454]: I0319 11:54:07.922663 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26b64f77-181a-4129-a28a-3bfdf7eac7ae-kube-api-access\") pod \"installer-1-master-0\" (UID: \"26b64f77-181a-4129-a28a-3bfdf7eac7ae\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 19 11:54:08.024006 master-0 kubenswrapper[7454]: I0319 11:54:08.023125 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/26b64f77-181a-4129-a28a-3bfdf7eac7ae-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"26b64f77-181a-4129-a28a-3bfdf7eac7ae\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 19 11:54:08.024006 master-0 kubenswrapper[7454]: I0319 11:54:08.023179 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/26b64f77-181a-4129-a28a-3bfdf7eac7ae-var-lock\") pod \"installer-1-master-0\" (UID: \"26b64f77-181a-4129-a28a-3bfdf7eac7ae\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 19 11:54:08.024006 master-0 kubenswrapper[7454]: I0319 11:54:08.023273 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/26b64f77-181a-4129-a28a-3bfdf7eac7ae-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"26b64f77-181a-4129-a28a-3bfdf7eac7ae\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 19 11:54:08.024006 master-0 kubenswrapper[7454]: I0319 11:54:08.023441 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26b64f77-181a-4129-a28a-3bfdf7eac7ae-kube-api-access\") pod \"installer-1-master-0\" (UID: \"26b64f77-181a-4129-a28a-3bfdf7eac7ae\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 19 11:54:08.024006 master-0 kubenswrapper[7454]: I0319 11:54:08.023724 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/26b64f77-181a-4129-a28a-3bfdf7eac7ae-var-lock\") pod \"installer-1-master-0\" (UID: 
\"26b64f77-181a-4129-a28a-3bfdf7eac7ae\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 19 11:54:08.042224 master-0 kubenswrapper[7454]: I0319 11:54:08.041422 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6"] Mar 19 11:54:08.047374 master-0 kubenswrapper[7454]: I0319 11:54:08.047323 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26b64f77-181a-4129-a28a-3bfdf7eac7ae-kube-api-access\") pod \"installer-1-master-0\" (UID: \"26b64f77-181a-4129-a28a-3bfdf7eac7ae\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 19 11:54:08.047676 master-0 kubenswrapper[7454]: I0319 11:54:08.047640 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt"] Mar 19 11:54:08.049756 master-0 kubenswrapper[7454]: W0319 11:54:08.049723 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82b98dca_59f9_42be_94ca_4a2a2b6fea0f.slice/crio-0c17be488f74c65475492714ea2841534c84f72d155a2152b6dab678c10b46b6 WatchSource:0}: Error finding container 0c17be488f74c65475492714ea2841534c84f72d155a2152b6dab678c10b46b6: Status 404 returned error can't find the container with id 0c17be488f74c65475492714ea2841534c84f72d155a2152b6dab678c10b46b6 Mar 19 11:54:08.056052 master-0 kubenswrapper[7454]: W0319 11:54:08.056008 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63c12a89_1b49_4eba_8f5a_551b10d2246b.slice/crio-de6a10425187cbc938b44bf02e39e9ceb0c27562adc9c491a8cdb29f071cbb62 WatchSource:0}: Error finding container de6a10425187cbc938b44bf02e39e9ceb0c27562adc9c491a8cdb29f071cbb62: Status 404 returned error can't find the container with id de6a10425187cbc938b44bf02e39e9ceb0c27562adc9c491a8cdb29f071cbb62 Mar 19 11:54:08.073514 master-0 kubenswrapper[7454]: I0319 11:54:08.073254 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-9c5679d8f-z6kvm"] Mar 19 11:54:08.089099 master-0 kubenswrapper[7454]: W0319 11:54:08.089042 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab54833d_e57b_479d_b171_68155f6566f1.slice/crio-f37f04bee18930433857e4757f6c0b0cea46719c10be7aeeafbea9a7d2df628f WatchSource:0}: Error finding container f37f04bee18930433857e4757f6c0b0cea46719c10be7aeeafbea9a7d2df628f: Status 404 returned error can't find the container with id f37f04bee18930433857e4757f6c0b0cea46719c10be7aeeafbea9a7d2df628f Mar 19 11:54:08.192379 master-0 kubenswrapper[7454]: I0319 11:54:08.192322 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 19 11:54:08.355429 master-0 kubenswrapper[7454]: I0319 11:54:08.355376 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 19 11:54:08.671418 master-0 kubenswrapper[7454]: I0319 11:54:08.671019 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"26b64f77-181a-4129-a28a-3bfdf7eac7ae","Type":"ContainerStarted","Data":"e0872c5a2d5561d0225dfd392b85facd7a9b9a7df9e38158520ca6c2a2f1b1d9"} Mar 19 11:54:08.675147 master-0 kubenswrapper[7454]: I0319 11:54:08.675054 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v" event={"ID":"85912908-c447-4868-871b-82c5eadbfdbe","Type":"ContainerStarted","Data":"f0dd3ad0c31c50755d9a1e00840e55c34c92c7b9022f8e6526d575378ba152f4"} Mar 19 11:54:08.676616 master-0 kubenswrapper[7454]: I0319 11:54:08.676569 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-9c5679d8f-z6kvm" event={"ID":"ab54833d-e57b-479d-b171-68155f6566f1","Type":"ContainerStarted","Data":"f37f04bee18930433857e4757f6c0b0cea46719c10be7aeeafbea9a7d2df628f"} Mar 19 11:54:08.677759 master-0 kubenswrapper[7454]: I0319 11:54:08.677729 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" event={"ID":"63c12a89-1b49-4eba-8f5a-551b10d2246b","Type":"ContainerStarted","Data":"de6a10425187cbc938b44bf02e39e9ceb0c27562adc9c491a8cdb29f071cbb62"} Mar 19 11:54:08.679764 master-0 kubenswrapper[7454]: I0319 11:54:08.679721 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6" event={"ID":"82b98dca-59f9-42be-94ca-4a2a2b6fea0f","Type":"ContainerStarted","Data":"0c17be488f74c65475492714ea2841534c84f72d155a2152b6dab678c10b46b6"} Mar 19 11:54:08.745833 master-0 kubenswrapper[7454]: I0319 11:54:08.745755 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25be5572-c6f3-45df-8a9d-9d6f759200ac-serving-cert\") pod \"route-controller-manager-5d6d7c9966-r7q9d\" (UID: \"25be5572-c6f3-45df-8a9d-9d6f759200ac\") " pod="openshift-route-controller-manager/route-controller-manager-5d6d7c9966-r7q9d" Mar 19 11:54:08.746072 master-0 kubenswrapper[7454]: E0319 11:54:08.746017 7454 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 19 11:54:08.746131 master-0 kubenswrapper[7454]: E0319 11:54:08.746103 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/25be5572-c6f3-45df-8a9d-9d6f759200ac-serving-cert podName:25be5572-c6f3-45df-8a9d-9d6f759200ac nodeName:}" failed. No retries permitted until 2026-03-19 11:54:24.746081341 +0000 UTC m=+34.376547254 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/25be5572-c6f3-45df-8a9d-9d6f759200ac-serving-cert") pod "route-controller-manager-5d6d7c9966-r7q9d" (UID: "25be5572-c6f3-45df-8a9d-9d6f759200ac") : secret "serving-cert" not found Mar 19 11:54:08.746595 master-0 kubenswrapper[7454]: I0319 11:54:08.746242 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/25be5572-c6f3-45df-8a9d-9d6f759200ac-client-ca\") pod \"route-controller-manager-5d6d7c9966-r7q9d\" (UID: \"25be5572-c6f3-45df-8a9d-9d6f759200ac\") " pod="openshift-route-controller-manager/route-controller-manager-5d6d7c9966-r7q9d" Mar 19 11:54:08.746595 master-0 kubenswrapper[7454]: E0319 11:54:08.746420 7454 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 19 11:54:08.746595 master-0 kubenswrapper[7454]: E0319 11:54:08.746503 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/25be5572-c6f3-45df-8a9d-9d6f759200ac-client-ca podName:25be5572-c6f3-45df-8a9d-9d6f759200ac nodeName:}" failed. No retries permitted until 2026-03-19 11:54:24.746479683 +0000 UTC m=+34.376945596 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/25be5572-c6f3-45df-8a9d-9d6f759200ac-client-ca") pod "route-controller-manager-5d6d7c9966-r7q9d" (UID: "25be5572-c6f3-45df-8a9d-9d6f759200ac") : configmap "client-ca" not found Mar 19 11:54:09.361095 master-0 kubenswrapper[7454]: I0319 11:54:09.360354 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-594b8bbb67-8fxxb"] Mar 19 11:54:09.362005 master-0 kubenswrapper[7454]: I0319 11:54:09.361822 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:09.366036 master-0 kubenswrapper[7454]: I0319 11:54:09.365960 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 19 11:54:09.366185 master-0 kubenswrapper[7454]: I0319 11:54:09.366168 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 19 11:54:09.366345 master-0 kubenswrapper[7454]: I0319 11:54:09.366329 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0" Mar 19 11:54:09.366448 master-0 kubenswrapper[7454]: I0319 11:54:09.366434 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 19 11:54:09.366571 master-0 kubenswrapper[7454]: I0319 11:54:09.366557 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 19 11:54:09.366669 master-0 kubenswrapper[7454]: I0319 11:54:09.366655 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0" Mar 19 11:54:09.367620 master-0 kubenswrapper[7454]: I0319 11:54:09.367180 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 19 11:54:09.368510 master-0 kubenswrapper[7454]: I0319 11:54:09.368494 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 19 11:54:09.368871 master-0 kubenswrapper[7454]: I0319 11:54:09.368858 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 19 11:54:09.371003 master-0 kubenswrapper[7454]: I0319 11:54:09.370975 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-594b8bbb67-8fxxb"] Mar 19 11:54:09.390077 master-0 kubenswrapper[7454]: I0319 11:54:09.390039 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 19 11:54:09.453740 master-0 kubenswrapper[7454]: I0319 11:54:09.453684 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-etcd-serving-ca\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:09.453740 master-0 kubenswrapper[7454]: I0319 11:54:09.453735 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ncfx\" (UniqueName: \"kubernetes.io/projected/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-kube-api-access-8ncfx\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:09.453971 master-0 kubenswrapper[7454]: I0319 11:54:09.453760 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-node-pullsecrets\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:09.453971 master-0 kubenswrapper[7454]: I0319 11:54:09.453787 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-audit-dir\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:09.453971 master-0 kubenswrapper[7454]: I0319 11:54:09.453850 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-config\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:09.453971 master-0 kubenswrapper[7454]: I0319 11:54:09.453870 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-etcd-client\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:09.454086 master-0 kubenswrapper[7454]: I0319 11:54:09.453970 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-encryption-config\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:09.454086 master-0 kubenswrapper[7454]: I0319 11:54:09.453996 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-image-import-ca\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:09.454204 master-0 kubenswrapper[7454]: I0319 11:54:09.454175 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-serving-cert\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:09.454272 master-0 kubenswrapper[7454]: I0319 11:54:09.454246 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-trusted-ca-bundle\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:09.454307 master-0 kubenswrapper[7454]: I0319 11:54:09.454274 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-audit\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:09.554994 master-0 kubenswrapper[7454]: I0319 11:54:09.554936 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-audit\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " 
pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:09.555194 master-0 kubenswrapper[7454]: I0319 11:54:09.555039 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-etcd-serving-ca\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:09.555194 master-0 kubenswrapper[7454]: E0319 11:54:09.555078 7454 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 19 11:54:09.555194 master-0 kubenswrapper[7454]: E0319 11:54:09.555161 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-audit podName:65b01c13-2eb4-4820-b3c6-0b45da4ffca5 nodeName:}" failed. No retries permitted until 2026-03-19 11:54:10.05513927 +0000 UTC m=+19.685605193 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-audit") pod "apiserver-594b8bbb67-8fxxb" (UID: "65b01c13-2eb4-4820-b3c6-0b45da4ffca5") : configmap "audit-0" not found Mar 19 11:54:09.555372 master-0 kubenswrapper[7454]: I0319 11:54:09.555337 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ncfx\" (UniqueName: \"kubernetes.io/projected/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-kube-api-access-8ncfx\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:09.555431 master-0 kubenswrapper[7454]: I0319 11:54:09.555392 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-node-pullsecrets\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:09.555431 master-0 kubenswrapper[7454]: I0319 11:54:09.555426 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-audit-dir\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:09.555672 master-0 kubenswrapper[7454]: I0319 11:54:09.555642 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-config\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:09.555717 master-0 kubenswrapper[7454]: I0319 11:54:09.555685 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-etcd-client\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:09.555749 master-0 kubenswrapper[7454]: I0319 11:54:09.555676 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-node-pullsecrets\") 
pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:09.555971 master-0 kubenswrapper[7454]: I0319 11:54:09.555940 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-encryption-config\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:09.556024 master-0 kubenswrapper[7454]: I0319 11:54:09.555997 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-image-import-ca\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:09.556171 master-0 kubenswrapper[7454]: I0319 11:54:09.556141 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-serving-cert\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:09.556232 master-0 kubenswrapper[7454]: I0319 11:54:09.556183 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-trusted-ca-bundle\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:09.556232 master-0 kubenswrapper[7454]: I0319 11:54:09.556192 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-etcd-serving-ca\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:09.556362 master-0 kubenswrapper[7454]: I0319 11:54:09.556333 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-audit-dir\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:09.556558 master-0 kubenswrapper[7454]: E0319 11:54:09.556491 7454 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 19 11:54:09.557140 master-0 kubenswrapper[7454]: E0319 11:54:09.556577 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-serving-cert podName:65b01c13-2eb4-4820-b3c6-0b45da4ffca5 nodeName:}" failed. No retries permitted until 2026-03-19 11:54:10.056550024 +0000 UTC m=+19.687015947 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-serving-cert") pod "apiserver-594b8bbb67-8fxxb" (UID: "65b01c13-2eb4-4820-b3c6-0b45da4ffca5") : secret "serving-cert" not found Mar 19 11:54:09.557140 master-0 kubenswrapper[7454]: I0319 11:54:09.556922 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-image-import-ca\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:09.557637 master-0 kubenswrapper[7454]: I0319 11:54:09.557586 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-config\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:09.558616 master-0 kubenswrapper[7454]: I0319 11:54:09.558546 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-trusted-ca-bundle\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:09.563719 master-0 kubenswrapper[7454]: I0319 11:54:09.563672 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-etcd-client\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:09.577022 master-0 kubenswrapper[7454]: I0319 11:54:09.576262 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-encryption-config\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:09.584433 master-0 kubenswrapper[7454]: I0319 11:54:09.584392 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ncfx\" (UniqueName: \"kubernetes.io/projected/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-kube-api-access-8ncfx\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:09.689861 master-0 kubenswrapper[7454]: I0319 11:54:09.689702 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"26b64f77-181a-4129-a28a-3bfdf7eac7ae","Type":"ContainerStarted","Data":"e273c8440ea0683d116b4eeafcafc60dfd6a87717b0c4b4c9829c8626e0e905a"} Mar 19 11:54:09.724087 master-0 kubenswrapper[7454]: I0319 11:54:09.724003 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-1-master-0" podStartSLOduration=2.723982509 podStartE2EDuration="2.723982509s" podCreationTimestamp="2026-03-19 11:54:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 11:54:09.722566234 +0000 UTC m=+19.353032147" watchObservedRunningTime="2026-03-19 11:54:09.723982509 +0000 UTC 
m=+19.354448422" Mar 19 11:54:10.065716 master-0 kubenswrapper[7454]: I0319 11:54:10.065659 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-serving-cert\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:10.065716 master-0 kubenswrapper[7454]: I0319 11:54:10.065710 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-audit\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:10.065968 master-0 kubenswrapper[7454]: E0319 11:54:10.065835 7454 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 19 11:54:10.065968 master-0 kubenswrapper[7454]: E0319 11:54:10.065894 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-audit podName:65b01c13-2eb4-4820-b3c6-0b45da4ffca5 nodeName:}" failed. No retries permitted until 2026-03-19 11:54:11.065876486 +0000 UTC m=+20.696342409 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-audit") pod "apiserver-594b8bbb67-8fxxb" (UID: "65b01c13-2eb4-4820-b3c6-0b45da4ffca5") : configmap "audit-0" not found Mar 19 11:54:10.066385 master-0 kubenswrapper[7454]: E0319 11:54:10.066341 7454 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 19 11:54:10.066437 master-0 kubenswrapper[7454]: E0319 11:54:10.066422 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-serving-cert podName:65b01c13-2eb4-4820-b3c6-0b45da4ffca5 nodeName:}" failed. No retries permitted until 2026-03-19 11:54:11.066404503 +0000 UTC m=+20.696870416 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-serving-cert") pod "apiserver-594b8bbb67-8fxxb" (UID: "65b01c13-2eb4-4820-b3c6-0b45da4ffca5") : secret "serving-cert" not found Mar 19 11:54:11.081033 master-0 kubenswrapper[7454]: I0319 11:54:11.080992 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-serving-cert\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:11.081033 master-0 kubenswrapper[7454]: I0319 11:54:11.081035 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-audit\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:11.082018 master-0 kubenswrapper[7454]: E0319 11:54:11.081201 7454 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 19 11:54:11.082018 master-0 kubenswrapper[7454]: E0319 11:54:11.081290 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-audit podName:65b01c13-2eb4-4820-b3c6-0b45da4ffca5 nodeName:}" failed. No retries permitted until 2026-03-19 11:54:13.081269049 +0000 UTC m=+22.711734972 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-audit") pod "apiserver-594b8bbb67-8fxxb" (UID: "65b01c13-2eb4-4820-b3c6-0b45da4ffca5") : configmap "audit-0" not found Mar 19 11:54:11.082018 master-0 kubenswrapper[7454]: E0319 11:54:11.081396 7454 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 19 11:54:11.082018 master-0 kubenswrapper[7454]: E0319 11:54:11.081473 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-serving-cert podName:65b01c13-2eb4-4820-b3c6-0b45da4ffca5 nodeName:}" failed. No retries permitted until 2026-03-19 11:54:13.081453605 +0000 UTC m=+22.711919578 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-serving-cert") pod "apiserver-594b8bbb67-8fxxb" (UID: "65b01c13-2eb4-4820-b3c6-0b45da4ffca5") : secret "serving-cert" not found Mar 19 11:54:11.283032 master-0 kubenswrapper[7454]: I0319 11:54:11.282978 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/921791b6-51d2-4d7c-995b-488a37f85b3f-client-ca\") pod \"controller-manager-6484d6777d-wmpqv\" (UID: \"921791b6-51d2-4d7c-995b-488a37f85b3f\") " pod="openshift-controller-manager/controller-manager-6484d6777d-wmpqv" Mar 19 11:54:11.283032 master-0 kubenswrapper[7454]: I0319 11:54:11.283024 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/921791b6-51d2-4d7c-995b-488a37f85b3f-serving-cert\") pod \"controller-manager-6484d6777d-wmpqv\" (UID: \"921791b6-51d2-4d7c-995b-488a37f85b3f\") " pod="openshift-controller-manager/controller-manager-6484d6777d-wmpqv" Mar 19 11:54:11.283264 master-0 kubenswrapper[7454]: E0319 11:54:11.283179 7454 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 19 11:54:11.283264 master-0 kubenswrapper[7454]: E0319 11:54:11.283239 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921791b6-51d2-4d7c-995b-488a37f85b3f-client-ca podName:921791b6-51d2-4d7c-995b-488a37f85b3f nodeName:}" failed. No retries permitted until 2026-03-19 11:54:27.283224034 +0000 UTC m=+36.913689947 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/921791b6-51d2-4d7c-995b-488a37f85b3f-client-ca") pod "controller-manager-6484d6777d-wmpqv" (UID: "921791b6-51d2-4d7c-995b-488a37f85b3f") : configmap "client-ca" not found Mar 19 11:54:11.292241 master-0 kubenswrapper[7454]: I0319 11:54:11.292206 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/921791b6-51d2-4d7c-995b-488a37f85b3f-serving-cert\") pod \"controller-manager-6484d6777d-wmpqv\" (UID: \"921791b6-51d2-4d7c-995b-488a37f85b3f\") " pod="openshift-controller-manager/controller-manager-6484d6777d-wmpqv" Mar 19 11:54:13.107109 master-0 kubenswrapper[7454]: I0319 11:54:13.106685 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-serving-cert\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:13.108074 master-0 kubenswrapper[7454]: I0319 11:54:13.107131 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-audit\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:13.108074 master-0 kubenswrapper[7454]: E0319 11:54:13.106945 7454 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 19 11:54:13.108074 master-0 kubenswrapper[7454]: E0319 11:54:13.107265 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-serving-cert 
podName:65b01c13-2eb4-4820-b3c6-0b45da4ffca5 nodeName:}" failed. No retries permitted until 2026-03-19 11:54:17.107235792 +0000 UTC m=+26.737701725 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-serving-cert") pod "apiserver-594b8bbb67-8fxxb" (UID: "65b01c13-2eb4-4820-b3c6-0b45da4ffca5") : secret "serving-cert" not found Mar 19 11:54:13.108074 master-0 kubenswrapper[7454]: E0319 11:54:13.107377 7454 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 19 11:54:13.108074 master-0 kubenswrapper[7454]: E0319 11:54:13.107457 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-audit podName:65b01c13-2eb4-4820-b3c6-0b45da4ffca5 nodeName:}" failed. No retries permitted until 2026-03-19 11:54:17.107437949 +0000 UTC m=+26.737903862 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-audit") pod "apiserver-594b8bbb67-8fxxb" (UID: "65b01c13-2eb4-4820-b3c6-0b45da4ffca5") : configmap "audit-0" not found Mar 19 11:54:14.875686 master-0 kubenswrapper[7454]: I0319 11:54:14.875591 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 19 11:54:14.876387 master-0 kubenswrapper[7454]: I0319 11:54:14.876316 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 19 11:54:14.878510 master-0 kubenswrapper[7454]: I0319 11:54:14.878432 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Mar 19 11:54:15.029853 master-0 kubenswrapper[7454]: I0319 11:54:15.029735 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/11f83dfb-da04-483f-b281-ebdb39f3ab27-var-lock\") pod \"installer-1-master-0\" (UID: \"11f83dfb-da04-483f-b281-ebdb39f3ab27\") " pod="openshift-etcd/installer-1-master-0" Mar 19 11:54:15.030127 master-0 kubenswrapper[7454]: I0319 11:54:15.030025 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11f83dfb-da04-483f-b281-ebdb39f3ab27-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"11f83dfb-da04-483f-b281-ebdb39f3ab27\") " pod="openshift-etcd/installer-1-master-0" Mar 19 11:54:15.030127 master-0 kubenswrapper[7454]: I0319 11:54:15.030121 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11f83dfb-da04-483f-b281-ebdb39f3ab27-kube-api-access\") pod \"installer-1-master-0\" (UID: \"11f83dfb-da04-483f-b281-ebdb39f3ab27\") " pod="openshift-etcd/installer-1-master-0" Mar 19 11:54:15.131555 master-0 kubenswrapper[7454]: I0319 11:54:15.131312 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11f83dfb-da04-483f-b281-ebdb39f3ab27-kube-api-access\") pod \"installer-1-master-0\" (UID: \"11f83dfb-da04-483f-b281-ebdb39f3ab27\") " pod="openshift-etcd/installer-1-master-0" Mar 19 11:54:15.131555 master-0 kubenswrapper[7454]: I0319 11:54:15.131522 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" 
(UniqueName: \"kubernetes.io/host-path/11f83dfb-da04-483f-b281-ebdb39f3ab27-var-lock\") pod \"installer-1-master-0\" (UID: \"11f83dfb-da04-483f-b281-ebdb39f3ab27\") " pod="openshift-etcd/installer-1-master-0" Mar 19 11:54:15.131916 master-0 kubenswrapper[7454]: I0319 11:54:15.131866 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11f83dfb-da04-483f-b281-ebdb39f3ab27-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"11f83dfb-da04-483f-b281-ebdb39f3ab27\") " pod="openshift-etcd/installer-1-master-0" Mar 19 11:54:15.133503 master-0 kubenswrapper[7454]: I0319 11:54:15.133417 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/11f83dfb-da04-483f-b281-ebdb39f3ab27-var-lock\") pod \"installer-1-master-0\" (UID: \"11f83dfb-da04-483f-b281-ebdb39f3ab27\") " pod="openshift-etcd/installer-1-master-0" Mar 19 11:54:15.133714 master-0 kubenswrapper[7454]: I0319 11:54:15.133648 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11f83dfb-da04-483f-b281-ebdb39f3ab27-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"11f83dfb-da04-483f-b281-ebdb39f3ab27\") " pod="openshift-etcd/installer-1-master-0" Mar 19 11:54:15.745827 master-0 kubenswrapper[7454]: I0319 11:54:15.745735 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 19 11:54:16.153269 master-0 kubenswrapper[7454]: I0319 11:54:16.153136 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11f83dfb-da04-483f-b281-ebdb39f3ab27-kube-api-access\") pod \"installer-1-master-0\" (UID: \"11f83dfb-da04-483f-b281-ebdb39f3ab27\") " pod="openshift-etcd/installer-1-master-0" Mar 19 11:54:16.396167 master-0 kubenswrapper[7454]: I0319 11:54:16.396119 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 19 11:54:17.169242 master-0 kubenswrapper[7454]: I0319 11:54:17.169150 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-serving-cert\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:17.169969 master-0 kubenswrapper[7454]: I0319 11:54:17.169267 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-audit\") pod \"apiserver-594b8bbb67-8fxxb\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:17.169969 master-0 kubenswrapper[7454]: E0319 11:54:17.169427 7454 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 19 11:54:17.169969 master-0 kubenswrapper[7454]: E0319 11:54:17.169557 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-serving-cert podName:65b01c13-2eb4-4820-b3c6-0b45da4ffca5 nodeName:}" failed. No retries permitted until 2026-03-19 11:54:25.169521192 +0000 UTC m=+34.799987155 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-serving-cert") pod "apiserver-594b8bbb67-8fxxb" (UID: "65b01c13-2eb4-4820-b3c6-0b45da4ffca5") : secret "serving-cert" not found Mar 19 11:54:17.169969 master-0 kubenswrapper[7454]: E0319 11:54:17.169443 7454 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 19 11:54:17.169969 master-0 kubenswrapper[7454]: E0319 11:54:17.169647 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-audit podName:65b01c13-2eb4-4820-b3c6-0b45da4ffca5 nodeName:}" failed. No retries permitted until 2026-03-19 11:54:25.169624145 +0000 UTC m=+34.800090098 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-audit") pod "apiserver-594b8bbb67-8fxxb" (UID: "65b01c13-2eb4-4820-b3c6-0b45da4ffca5") : configmap "audit-0" not found Mar 19 11:54:17.717823 master-0 kubenswrapper[7454]: I0319 11:54:17.712958 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r"] Mar 19 11:54:17.717823 master-0 kubenswrapper[7454]: I0319 11:54:17.713762 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 11:54:17.738873 master-0 kubenswrapper[7454]: I0319 11:54:17.738276 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 19 11:54:17.748871 master-0 kubenswrapper[7454]: I0319 11:54:17.745401 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 19 11:54:17.748871 master-0 kubenswrapper[7454]: I0319 11:54:17.745728 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 19 11:54:17.755211 master-0 kubenswrapper[7454]: I0319 11:54:17.753508 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 19 11:54:17.755211 master-0 kubenswrapper[7454]: I0319 11:54:17.753774 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 19 11:54:17.755211 master-0 kubenswrapper[7454]: I0319 11:54:17.754098 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 19 11:54:17.755211 master-0 kubenswrapper[7454]: I0319 11:54:17.754253 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 19 11:54:17.755211 master-0 kubenswrapper[7454]: I0319 11:54:17.754317 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 19 11:54:17.755211 master-0 kubenswrapper[7454]: I0319 11:54:17.755098 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r"] Mar 19 11:54:17.844845 master-0 kubenswrapper[7454]: I0319 11:54:17.841643 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd"] Mar 19 11:54:17.844845 master-0 kubenswrapper[7454]: I0319 11:54:17.842310 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 11:54:17.844845 master-0 kubenswrapper[7454]: I0319 11:54:17.844463 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 19 11:54:17.848854 master-0 kubenswrapper[7454]: I0319 11:54:17.846450 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 19 11:54:17.848854 master-0 kubenswrapper[7454]: I0319 11:54:17.846591 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 19 11:54:17.861029 master-0 kubenswrapper[7454]: I0319 11:54:17.860957 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z"] Mar 19 11:54:17.863866 master-0 kubenswrapper[7454]: I0319 11:54:17.861691 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 11:54:17.867419 master-0 kubenswrapper[7454]: I0319 11:54:17.867188 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 19 11:54:17.867419 master-0 kubenswrapper[7454]: I0319 11:54:17.867312 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 19 11:54:17.875149 master-0 kubenswrapper[7454]: I0319 11:54:17.873439 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 19 11:54:17.875387 master-0 kubenswrapper[7454]: I0319 11:54:17.875330 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd"] Mar 19 11:54:17.882596 master-0 kubenswrapper[7454]: I0319 11:54:17.882411 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 19 11:54:17.884887 master-0 kubenswrapper[7454]: I0319 11:54:17.883585 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/5238840f-3bef-43ad-ae68-ac187f073019-cache\") pod \"operator-controller-controller-manager-57777556ff-9mpxd\" (UID: \"5238840f-3bef-43ad-ae68-ac187f073019\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 11:54:17.884887 master-0 kubenswrapper[7454]: I0319 11:54:17.883626 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/5238840f-3bef-43ad-ae68-ac187f073019-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-9mpxd\" (UID: \"5238840f-3bef-43ad-ae68-ac187f073019\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 11:54:17.884887 master-0 kubenswrapper[7454]: I0319 11:54:17.883653 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/919daf8d-763a-44bc-8916-86b425a27cbd-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-j2w8z\" (UID: \"919daf8d-763a-44bc-8916-86b425a27cbd\") " 
pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 11:54:17.884887 master-0 kubenswrapper[7454]: I0319 11:54:17.883723 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/919daf8d-763a-44bc-8916-86b425a27cbd-cache\") pod \"catalogd-controller-manager-6864dc98f7-j2w8z\" (UID: \"919daf8d-763a-44bc-8916-86b425a27cbd\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 11:54:17.884887 master-0 kubenswrapper[7454]: I0319 11:54:17.883748 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/979ba8cc-5a7b-4188-bf9e-c22d810888e9-audit-policies\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 11:54:17.884887 master-0 kubenswrapper[7454]: I0319 11:54:17.883769 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxdts\" (UniqueName: \"kubernetes.io/projected/5238840f-3bef-43ad-ae68-ac187f073019-kube-api-access-vxdts\") pod \"operator-controller-controller-manager-57777556ff-9mpxd\" (UID: \"5238840f-3bef-43ad-ae68-ac187f073019\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 11:54:17.884887 master-0 kubenswrapper[7454]: I0319 11:54:17.883790 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/5238840f-3bef-43ad-ae68-ac187f073019-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-9mpxd\" (UID: \"5238840f-3bef-43ad-ae68-ac187f073019\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 11:54:17.884887 master-0 kubenswrapper[7454]: I0319 11:54:17.883835 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/979ba8cc-5a7b-4188-bf9e-c22d810888e9-etcd-serving-ca\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 11:54:17.884887 master-0 kubenswrapper[7454]: I0319 11:54:17.883863 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28ljd\" (UniqueName: \"kubernetes.io/projected/979ba8cc-5a7b-4188-bf9e-c22d810888e9-kube-api-access-28ljd\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 11:54:17.884887 master-0 kubenswrapper[7454]: I0319 11:54:17.883899 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/919daf8d-763a-44bc-8916-86b425a27cbd-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-j2w8z\" (UID: \"919daf8d-763a-44bc-8916-86b425a27cbd\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 11:54:17.884887 master-0 kubenswrapper[7454]: I0319 11:54:17.883901 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z"] Mar 19 11:54:17.884887 master-0 kubenswrapper[7454]: I0319 11:54:17.883921 
7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/919daf8d-763a-44bc-8916-86b425a27cbd-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-j2w8z\" (UID: \"919daf8d-763a-44bc-8916-86b425a27cbd\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 11:54:17.884887 master-0 kubenswrapper[7454]: I0319 11:54:17.883944 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/979ba8cc-5a7b-4188-bf9e-c22d810888e9-serving-cert\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 11:54:17.884887 master-0 kubenswrapper[7454]: I0319 11:54:17.883967 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/919daf8d-763a-44bc-8916-86b425a27cbd-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-j2w8z\" (UID: \"919daf8d-763a-44bc-8916-86b425a27cbd\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 11:54:17.884887 master-0 kubenswrapper[7454]: I0319 11:54:17.884075 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8brwr\" (UniqueName: \"kubernetes.io/projected/919daf8d-763a-44bc-8916-86b425a27cbd-kube-api-access-8brwr\") pod \"catalogd-controller-manager-6864dc98f7-j2w8z\" (UID: \"919daf8d-763a-44bc-8916-86b425a27cbd\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 11:54:17.884887 master-0 kubenswrapper[7454]: I0319 11:54:17.884130 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/979ba8cc-5a7b-4188-bf9e-c22d810888e9-encryption-config\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 11:54:17.884887 master-0 kubenswrapper[7454]: I0319 11:54:17.884209 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/5238840f-3bef-43ad-ae68-ac187f073019-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-9mpxd\" (UID: \"5238840f-3bef-43ad-ae68-ac187f073019\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 11:54:17.884887 master-0 kubenswrapper[7454]: I0319 11:54:17.884242 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/979ba8cc-5a7b-4188-bf9e-c22d810888e9-etcd-client\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 11:54:17.884887 master-0 kubenswrapper[7454]: I0319 11:54:17.884267 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/979ba8cc-5a7b-4188-bf9e-c22d810888e9-audit-dir\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 11:54:17.884887 master-0 kubenswrapper[7454]: I0319 
11:54:17.884307 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/979ba8cc-5a7b-4188-bf9e-c22d810888e9-trusted-ca-bundle\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 11:54:17.987933 master-0 kubenswrapper[7454]: I0319 11:54:17.986325 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/919daf8d-763a-44bc-8916-86b425a27cbd-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-j2w8z\" (UID: \"919daf8d-763a-44bc-8916-86b425a27cbd\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 11:54:17.987933 master-0 kubenswrapper[7454]: I0319 11:54:17.986375 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/979ba8cc-5a7b-4188-bf9e-c22d810888e9-serving-cert\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 11:54:17.987933 master-0 kubenswrapper[7454]: I0319 11:54:17.986393 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/919daf8d-763a-44bc-8916-86b425a27cbd-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-j2w8z\" (UID: \"919daf8d-763a-44bc-8916-86b425a27cbd\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 11:54:17.987933 master-0 kubenswrapper[7454]: I0319 11:54:17.986409 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/919daf8d-763a-44bc-8916-86b425a27cbd-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-j2w8z\" (UID: \"919daf8d-763a-44bc-8916-86b425a27cbd\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 11:54:17.987933 master-0 kubenswrapper[7454]: I0319 11:54:17.986433 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8brwr\" (UniqueName: \"kubernetes.io/projected/919daf8d-763a-44bc-8916-86b425a27cbd-kube-api-access-8brwr\") pod \"catalogd-controller-manager-6864dc98f7-j2w8z\" (UID: \"919daf8d-763a-44bc-8916-86b425a27cbd\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 11:54:17.987933 master-0 kubenswrapper[7454]: I0319 11:54:17.986453 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/979ba8cc-5a7b-4188-bf9e-c22d810888e9-encryption-config\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 11:54:17.987933 master-0 kubenswrapper[7454]: I0319 11:54:17.986753 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/5238840f-3bef-43ad-ae68-ac187f073019-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-9mpxd\" (UID: \"5238840f-3bef-43ad-ae68-ac187f073019\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 11:54:17.987933 master-0 kubenswrapper[7454]: I0319 11:54:17.986821 7454 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/979ba8cc-5a7b-4188-bf9e-c22d810888e9-etcd-client\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 11:54:17.987933 master-0 kubenswrapper[7454]: I0319 11:54:17.986882 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/979ba8cc-5a7b-4188-bf9e-c22d810888e9-audit-dir\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 11:54:17.987933 master-0 kubenswrapper[7454]: I0319 11:54:17.986943 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/979ba8cc-5a7b-4188-bf9e-c22d810888e9-trusted-ca-bundle\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 11:54:17.987933 master-0 kubenswrapper[7454]: I0319 11:54:17.986981 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/5238840f-3bef-43ad-ae68-ac187f073019-cache\") pod \"operator-controller-controller-manager-57777556ff-9mpxd\" (UID: \"5238840f-3bef-43ad-ae68-ac187f073019\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 11:54:17.987933 master-0 kubenswrapper[7454]: I0319 11:54:17.986999 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/5238840f-3bef-43ad-ae68-ac187f073019-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-9mpxd\" (UID: \"5238840f-3bef-43ad-ae68-ac187f073019\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 11:54:17.987933 master-0 kubenswrapper[7454]: I0319 11:54:17.987014 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/919daf8d-763a-44bc-8916-86b425a27cbd-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-j2w8z\" (UID: \"919daf8d-763a-44bc-8916-86b425a27cbd\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 11:54:17.987933 master-0 kubenswrapper[7454]: I0319 11:54:17.987066 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/919daf8d-763a-44bc-8916-86b425a27cbd-cache\") pod \"catalogd-controller-manager-6864dc98f7-j2w8z\" (UID: \"919daf8d-763a-44bc-8916-86b425a27cbd\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 11:54:17.987933 master-0 kubenswrapper[7454]: I0319 11:54:17.987087 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/979ba8cc-5a7b-4188-bf9e-c22d810888e9-audit-policies\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 11:54:17.987933 master-0 kubenswrapper[7454]: I0319 11:54:17.987107 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxdts\" (UniqueName: 
\"kubernetes.io/projected/5238840f-3bef-43ad-ae68-ac187f073019-kube-api-access-vxdts\") pod \"operator-controller-controller-manager-57777556ff-9mpxd\" (UID: \"5238840f-3bef-43ad-ae68-ac187f073019\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 11:54:17.987933 master-0 kubenswrapper[7454]: I0319 11:54:17.987125 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/5238840f-3bef-43ad-ae68-ac187f073019-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-9mpxd\" (UID: \"5238840f-3bef-43ad-ae68-ac187f073019\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 11:54:17.987933 master-0 kubenswrapper[7454]: I0319 11:54:17.987151 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/979ba8cc-5a7b-4188-bf9e-c22d810888e9-etcd-serving-ca\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 11:54:17.987933 master-0 kubenswrapper[7454]: I0319 11:54:17.987168 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28ljd\" (UniqueName: \"kubernetes.io/projected/979ba8cc-5a7b-4188-bf9e-c22d810888e9-kube-api-access-28ljd\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 11:54:17.989336 master-0 kubenswrapper[7454]: I0319 11:54:17.989290 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/5238840f-3bef-43ad-ae68-ac187f073019-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-9mpxd\" (UID: \"5238840f-3bef-43ad-ae68-ac187f073019\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 11:54:17.989504 master-0 kubenswrapper[7454]: I0319 11:54:17.989473 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/5238840f-3bef-43ad-ae68-ac187f073019-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-9mpxd\" (UID: \"5238840f-3bef-43ad-ae68-ac187f073019\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 11:54:17.989614 master-0 kubenswrapper[7454]: I0319 11:54:17.989584 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/979ba8cc-5a7b-4188-bf9e-c22d810888e9-serving-cert\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 11:54:17.990074 master-0 kubenswrapper[7454]: I0319 11:54:17.990049 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/919daf8d-763a-44bc-8916-86b425a27cbd-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-j2w8z\" (UID: \"919daf8d-763a-44bc-8916-86b425a27cbd\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 11:54:17.990140 master-0 kubenswrapper[7454]: I0319 11:54:17.990102 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/979ba8cc-5a7b-4188-bf9e-c22d810888e9-audit-dir\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 11:54:17.990204 master-0 kubenswrapper[7454]: I0319 11:54:17.990177 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/919daf8d-763a-44bc-8916-86b425a27cbd-cache\") pod \"catalogd-controller-manager-6864dc98f7-j2w8z\" (UID: \"919daf8d-763a-44bc-8916-86b425a27cbd\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 11:54:17.990318 master-0 kubenswrapper[7454]: E0319 11:54:17.990269 7454 projected.go:301] Couldn't get configMap payload openshift-operator-controller/operator-controller-trusted-ca-bundle: configmap references non-existent config key: ca-bundle.crt Mar 19 11:54:17.990368 master-0 kubenswrapper[7454]: E0319 11:54:17.990327 7454 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd: configmap references non-existent config key: ca-bundle.crt Mar 19 11:54:17.990423 master-0 kubenswrapper[7454]: I0319 11:54:17.990400 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/979ba8cc-5a7b-4188-bf9e-c22d810888e9-audit-policies\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 11:54:17.990462 master-0 kubenswrapper[7454]: E0319 11:54:17.990408 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5238840f-3bef-43ad-ae68-ac187f073019-ca-certs podName:5238840f-3bef-43ad-ae68-ac187f073019 nodeName:}" failed. No retries permitted until 2026-03-19 11:54:18.490381152 +0000 UTC m=+28.120847065 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/5238840f-3bef-43ad-ae68-ac187f073019-ca-certs") pod "operator-controller-controller-manager-57777556ff-9mpxd" (UID: "5238840f-3bef-43ad-ae68-ac187f073019") : configmap references non-existent config key: ca-bundle.crt Mar 19 11:54:17.990503 master-0 kubenswrapper[7454]: E0319 11:54:17.990477 7454 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: secret "catalogserver-cert" not found Mar 19 11:54:17.990534 master-0 kubenswrapper[7454]: E0319 11:54:17.990528 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/919daf8d-763a-44bc-8916-86b425a27cbd-catalogserver-certs podName:919daf8d-763a-44bc-8916-86b425a27cbd nodeName:}" failed. No retries permitted until 2026-03-19 11:54:18.490511096 +0000 UTC m=+28.120976999 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/919daf8d-763a-44bc-8916-86b425a27cbd-catalogserver-certs") pod "catalogd-controller-manager-6864dc98f7-j2w8z" (UID: "919daf8d-763a-44bc-8916-86b425a27cbd") : secret "catalogserver-cert" not found
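[Editor's note] The two mount failures above have different root causes: the projected "ca-certs" volume for operator-controller fails because the operator-controller-trusted-ca-bundle ConfigMap does not yet contain the key ca-bundle.crt, while the "catalogserver-certs" volume for catalogd fails because the secret "catalogserver-cert" has not been created yet. Neither failure is fatal; nestedpendingoperations.go schedules a retry with a doubling delay, visible as "No retries permitted until ... (durationBeforeRetry 500ms)" here and 1s on the next attempt below. A minimal Go sketch of that backoff follows; the initial delay and factor are inferred from this log, not copied from kubelet source.

package main

import (
	"fmt"
	"time"
)

// backoff approximates the per-volume retry delay kubelet reports as
// "durationBeforeRetry" in the entries above.
type backoff struct {
	initial time.Duration // first delay after the first failure (500ms in this log)
	factor  float64       // multiplier per subsequent failure (2x, inferred from 500ms -> 1s)
	max     time.Duration // cap; assumed, not observable in this log
	current time.Duration
}

// next returns the delay to wait before the next mount retry.
func (b *backoff) next() time.Duration {
	if b.current == 0 {
		b.current = b.initial
	} else {
		b.current = time.Duration(float64(b.current) * b.factor)
		if b.current > b.max {
			b.current = b.max
		}
	}
	return b.current
}

func main() {
	b := &backoff{initial: 500 * time.Millisecond, factor: 2, max: 2 * time.Minute}
	for i := 1; i <= 4; i++ {
		fmt.Printf("failure %d: no retries permitted for %v\n", i, b.next())
	}
	// Prints 500ms, 1s, 2s, 4s; the 500ms and 1s steps match this log.
}

Successive failures therefore wait 500ms, 1s, 2s, and so on until the missing object appears, which is exactly what happens further down. [End note]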
\"kubernetes.io/secret/979ba8cc-5a7b-4188-bf9e-c22d810888e9-encryption-config\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 11:54:18.050509 master-0 kubenswrapper[7454]: I0319 11:54:18.050373 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28ljd\" (UniqueName: \"kubernetes.io/projected/979ba8cc-5a7b-4188-bf9e-c22d810888e9-kube-api-access-28ljd\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 11:54:18.061837 master-0 kubenswrapper[7454]: I0319 11:54:18.060948 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8brwr\" (UniqueName: \"kubernetes.io/projected/919daf8d-763a-44bc-8916-86b425a27cbd-kube-api-access-8brwr\") pod \"catalogd-controller-manager-6864dc98f7-j2w8z\" (UID: \"919daf8d-763a-44bc-8916-86b425a27cbd\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 11:54:18.081830 master-0 kubenswrapper[7454]: I0319 11:54:18.077556 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxdts\" (UniqueName: \"kubernetes.io/projected/5238840f-3bef-43ad-ae68-ac187f073019-kube-api-access-vxdts\") pod \"operator-controller-controller-manager-57777556ff-9mpxd\" (UID: \"5238840f-3bef-43ad-ae68-ac187f073019\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 11:54:18.123705 master-0 kubenswrapper[7454]: I0319 11:54:18.123348 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 11:54:18.140829 master-0 kubenswrapper[7454]: I0319 11:54:18.139918 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6484d6777d-wmpqv"] Mar 19 11:54:18.153823 master-0 kubenswrapper[7454]: E0319 11:54:18.149649 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-6484d6777d-wmpqv" podUID="921791b6-51d2-4d7c-995b-488a37f85b3f" Mar 19 11:54:18.243583 master-0 kubenswrapper[7454]: I0319 11:54:18.242037 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d6d7c9966-r7q9d"] Mar 19 11:54:18.243583 master-0 kubenswrapper[7454]: E0319 11:54:18.242492 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-5d6d7c9966-r7q9d" podUID="25be5572-c6f3-45df-8a9d-9d6f759200ac" Mar 19 11:54:18.499230 master-0 kubenswrapper[7454]: I0319 11:54:18.499098 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/5238840f-3bef-43ad-ae68-ac187f073019-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-9mpxd\" (UID: \"5238840f-3bef-43ad-ae68-ac187f073019\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 11:54:18.499230 master-0 kubenswrapper[7454]: I0319 11:54:18.499171 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/919daf8d-763a-44bc-8916-86b425a27cbd-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-j2w8z\" (UID: \"919daf8d-763a-44bc-8916-86b425a27cbd\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 11:54:18.499658 master-0 kubenswrapper[7454]: E0319 11:54:18.499471 7454 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: secret "catalogserver-cert" not found Mar 19 11:54:18.499658 master-0 kubenswrapper[7454]: E0319 11:54:18.499521 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/919daf8d-763a-44bc-8916-86b425a27cbd-catalogserver-certs podName:919daf8d-763a-44bc-8916-86b425a27cbd nodeName:}" failed. No retries permitted until 2026-03-19 11:54:19.499506228 +0000 UTC m=+29.129972141 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/919daf8d-763a-44bc-8916-86b425a27cbd-catalogserver-certs") pod "catalogd-controller-manager-6864dc98f7-j2w8z" (UID: "919daf8d-763a-44bc-8916-86b425a27cbd") : secret "catalogserver-cert" not found Mar 19 11:54:18.502930 master-0 kubenswrapper[7454]: I0319 11:54:18.502834 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/5238840f-3bef-43ad-ae68-ac187f073019-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-9mpxd\" (UID: \"5238840f-3bef-43ad-ae68-ac187f073019\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 11:54:18.538057 master-0 kubenswrapper[7454]: I0319 11:54:18.537995 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 11:54:18.733790 master-0 kubenswrapper[7454]: I0319 11:54:18.733717 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d6d7c9966-r7q9d" Mar 19 11:54:18.734178 master-0 kubenswrapper[7454]: I0319 11:54:18.734146 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:18.734288 master-0 kubenswrapper[7454]: I0319 11:54:18.734266 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6484d6777d-wmpqv" Mar 19 11:54:18.742277 master-0 kubenswrapper[7454]: I0319 11:54:18.742224 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d6d7c9966-r7q9d" Mar 19 11:54:18.748984 master-0 kubenswrapper[7454]: I0319 11:54:18.748541 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:18.754192 master-0 kubenswrapper[7454]: I0319 11:54:18.754125 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6484d6777d-wmpqv" Mar 19 11:54:18.763658 master-0 kubenswrapper[7454]: I0319 11:54:18.763596 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 19 11:54:18.763994 master-0 kubenswrapper[7454]: I0319 11:54:18.763960 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-1-master-0" podUID="26b64f77-181a-4129-a28a-3bfdf7eac7ae" containerName="installer" containerID="cri-o://e273c8440ea0683d116b4eeafcafc60dfd6a87717b0c4b4c9829c8626e0e905a" gracePeriod=30 Mar 19 11:54:18.803433 master-0 kubenswrapper[7454]: I0319 11:54:18.803363 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-audit-dir\") pod \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " Mar 19 11:54:18.803659 master-0 kubenswrapper[7454]: I0319 11:54:18.803463 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-trusted-ca-bundle\") pod \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " Mar 19 11:54:18.803659 master-0 kubenswrapper[7454]: I0319 11:54:18.803484 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "65b01c13-2eb4-4820-b3c6-0b45da4ffca5" (UID: "65b01c13-2eb4-4820-b3c6-0b45da4ffca5"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:54:18.803659 master-0 kubenswrapper[7454]: I0319 11:54:18.803533 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25be5572-c6f3-45df-8a9d-9d6f759200ac-config\") pod \"25be5572-c6f3-45df-8a9d-9d6f759200ac\" (UID: \"25be5572-c6f3-45df-8a9d-9d6f759200ac\") " Mar 19 11:54:18.804041 master-0 kubenswrapper[7454]: I0319 11:54:18.804003 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "65b01c13-2eb4-4820-b3c6-0b45da4ffca5" (UID: "65b01c13-2eb4-4820-b3c6-0b45da4ffca5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 11:54:18.804128 master-0 kubenswrapper[7454]: I0319 11:54:18.804091 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25be5572-c6f3-45df-8a9d-9d6f759200ac-config" (OuterVolumeSpecName: "config") pod "25be5572-c6f3-45df-8a9d-9d6f759200ac" (UID: "25be5572-c6f3-45df-8a9d-9d6f759200ac"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 11:54:18.804198 master-0 kubenswrapper[7454]: I0319 11:54:18.804169 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-node-pullsecrets\") pod \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " Mar 19 11:54:18.804363 master-0 kubenswrapper[7454]: I0319 11:54:18.804325 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-encryption-config\") pod \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " Mar 19 11:54:18.804903 master-0 kubenswrapper[7454]: I0319 11:54:18.804264 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "65b01c13-2eb4-4820-b3c6-0b45da4ffca5" (UID: "65b01c13-2eb4-4820-b3c6-0b45da4ffca5"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:54:18.804903 master-0 kubenswrapper[7454]: I0319 11:54:18.804889 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmq9j\" (UniqueName: \"kubernetes.io/projected/25be5572-c6f3-45df-8a9d-9d6f759200ac-kube-api-access-pmq9j\") pod \"25be5572-c6f3-45df-8a9d-9d6f759200ac\" (UID: \"25be5572-c6f3-45df-8a9d-9d6f759200ac\") " Mar 19 11:54:18.804996 master-0 kubenswrapper[7454]: I0319 11:54:18.804925 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/921791b6-51d2-4d7c-995b-488a37f85b3f-proxy-ca-bundles\") pod \"921791b6-51d2-4d7c-995b-488a37f85b3f\" (UID: \"921791b6-51d2-4d7c-995b-488a37f85b3f\") " Mar 19 11:54:18.805043 master-0 kubenswrapper[7454]: I0319 11:54:18.804994 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-etcd-client\") pod \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " Mar 19 11:54:18.805043 master-0 kubenswrapper[7454]: I0319 11:54:18.805026 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9glx9\" (UniqueName: \"kubernetes.io/projected/921791b6-51d2-4d7c-995b-488a37f85b3f-kube-api-access-9glx9\") pod \"921791b6-51d2-4d7c-995b-488a37f85b3f\" (UID: \"921791b6-51d2-4d7c-995b-488a37f85b3f\") " Mar 19 11:54:18.805123 master-0 kubenswrapper[7454]: I0319 11:54:18.805050 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/921791b6-51d2-4d7c-995b-488a37f85b3f-config\") pod \"921791b6-51d2-4d7c-995b-488a37f85b3f\" (UID: \"921791b6-51d2-4d7c-995b-488a37f85b3f\") " Mar 19 11:54:18.805123 master-0 kubenswrapper[7454]: I0319 11:54:18.805079 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-image-import-ca\") pod \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " Mar 19 11:54:18.805331 master-0 kubenswrapper[7454]: I0319 11:54:18.805298 7454 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ncfx\" (UniqueName: \"kubernetes.io/projected/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-kube-api-access-8ncfx\") pod \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " Mar 19 11:54:18.805384 master-0 kubenswrapper[7454]: I0319 11:54:18.805336 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-config\") pod \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " Mar 19 11:54:18.805384 master-0 kubenswrapper[7454]: I0319 11:54:18.805361 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-etcd-serving-ca\") pod \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\" (UID: \"65b01c13-2eb4-4820-b3c6-0b45da4ffca5\") " Mar 19 11:54:18.805467 master-0 kubenswrapper[7454]: I0319 11:54:18.805383 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/921791b6-51d2-4d7c-995b-488a37f85b3f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "921791b6-51d2-4d7c-995b-488a37f85b3f" (UID: "921791b6-51d2-4d7c-995b-488a37f85b3f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 11:54:18.805647 master-0 kubenswrapper[7454]: I0319 11:54:18.805615 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "65b01c13-2eb4-4820-b3c6-0b45da4ffca5" (UID: "65b01c13-2eb4-4820-b3c6-0b45da4ffca5"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 11:54:18.805818 master-0 kubenswrapper[7454]: I0319 11:54:18.805386 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/921791b6-51d2-4d7c-995b-488a37f85b3f-serving-cert\") pod \"921791b6-51d2-4d7c-995b-488a37f85b3f\" (UID: \"921791b6-51d2-4d7c-995b-488a37f85b3f\") " Mar 19 11:54:18.805886 master-0 kubenswrapper[7454]: I0319 11:54:18.805858 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "65b01c13-2eb4-4820-b3c6-0b45da4ffca5" (UID: "65b01c13-2eb4-4820-b3c6-0b45da4ffca5"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 11:54:18.805949 master-0 kubenswrapper[7454]: I0319 11:54:18.805908 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-config" (OuterVolumeSpecName: "config") pod "65b01c13-2eb4-4820-b3c6-0b45da4ffca5" (UID: "65b01c13-2eb4-4820-b3c6-0b45da4ffca5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 11:54:18.806154 master-0 kubenswrapper[7454]: I0319 11:54:18.806112 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/921791b6-51d2-4d7c-995b-488a37f85b3f-config" (OuterVolumeSpecName: "config") pod "921791b6-51d2-4d7c-995b-488a37f85b3f" (UID: "921791b6-51d2-4d7c-995b-488a37f85b3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 11:54:18.806215 master-0 kubenswrapper[7454]: I0319 11:54:18.806168 7454 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/921791b6-51d2-4d7c-995b-488a37f85b3f-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:18.806215 master-0 kubenswrapper[7454]: I0319 11:54:18.806189 7454 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-image-import-ca\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:18.806215 master-0 kubenswrapper[7454]: I0319 11:54:18.806200 7454 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-config\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:18.806215 master-0 kubenswrapper[7454]: I0319 11:54:18.806210 7454 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-etcd-serving-ca\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:18.806370 master-0 kubenswrapper[7454]: I0319 11:54:18.806222 7454 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:18.806370 master-0 kubenswrapper[7454]: I0319 11:54:18.806236 7454 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:18.806370 master-0 kubenswrapper[7454]: I0319 11:54:18.806248 7454 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25be5572-c6f3-45df-8a9d-9d6f759200ac-config\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:18.806370 master-0 kubenswrapper[7454]: I0319 11:54:18.806260 7454 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-node-pullsecrets\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:18.806908 master-0 kubenswrapper[7454]: I0319 11:54:18.806880 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "65b01c13-2eb4-4820-b3c6-0b45da4ffca5" (UID: "65b01c13-2eb4-4820-b3c6-0b45da4ffca5"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 11:54:18.807693 master-0 kubenswrapper[7454]: I0319 11:54:18.807663 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/921791b6-51d2-4d7c-995b-488a37f85b3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "921791b6-51d2-4d7c-995b-488a37f85b3f" (UID: "921791b6-51d2-4d7c-995b-488a37f85b3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 11:54:18.807693 master-0 kubenswrapper[7454]: I0319 11:54:18.807661 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/921791b6-51d2-4d7c-995b-488a37f85b3f-kube-api-access-9glx9" (OuterVolumeSpecName: "kube-api-access-9glx9") pod "921791b6-51d2-4d7c-995b-488a37f85b3f" (UID: "921791b6-51d2-4d7c-995b-488a37f85b3f"). InnerVolumeSpecName "kube-api-access-9glx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 11:54:18.808848 master-0 kubenswrapper[7454]: I0319 11:54:18.808819 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25be5572-c6f3-45df-8a9d-9d6f759200ac-kube-api-access-pmq9j" (OuterVolumeSpecName: "kube-api-access-pmq9j") pod "25be5572-c6f3-45df-8a9d-9d6f759200ac" (UID: "25be5572-c6f3-45df-8a9d-9d6f759200ac"). InnerVolumeSpecName "kube-api-access-pmq9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 11:54:18.809093 master-0 kubenswrapper[7454]: I0319 11:54:18.809050 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "65b01c13-2eb4-4820-b3c6-0b45da4ffca5" (UID: "65b01c13-2eb4-4820-b3c6-0b45da4ffca5"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 11:54:18.810543 master-0 kubenswrapper[7454]: I0319 11:54:18.810510 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-kube-api-access-8ncfx" (OuterVolumeSpecName: "kube-api-access-8ncfx") pod "65b01c13-2eb4-4820-b3c6-0b45da4ffca5" (UID: "65b01c13-2eb4-4820-b3c6-0b45da4ffca5"). InnerVolumeSpecName "kube-api-access-8ncfx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 11:54:18.907280 master-0 kubenswrapper[7454]: I0319 11:54:18.907226 7454 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-encryption-config\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:18.907280 master-0 kubenswrapper[7454]: I0319 11:54:18.907266 7454 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmq9j\" (UniqueName: \"kubernetes.io/projected/25be5572-c6f3-45df-8a9d-9d6f759200ac-kube-api-access-pmq9j\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:18.907280 master-0 kubenswrapper[7454]: I0319 11:54:18.907279 7454 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-etcd-client\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:18.907280 master-0 kubenswrapper[7454]: I0319 11:54:18.907292 7454 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9glx9\" (UniqueName: \"kubernetes.io/projected/921791b6-51d2-4d7c-995b-488a37f85b3f-kube-api-access-9glx9\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:18.907637 master-0 kubenswrapper[7454]: I0319 11:54:18.907303 7454 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/921791b6-51d2-4d7c-995b-488a37f85b3f-config\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:18.907637 master-0 kubenswrapper[7454]: I0319 11:54:18.907316 7454 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ncfx\" (UniqueName: \"kubernetes.io/projected/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-kube-api-access-8ncfx\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:18.907637 master-0 kubenswrapper[7454]: I0319 11:54:18.907326 7454 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/921791b6-51d2-4d7c-995b-488a37f85b3f-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:19.515163 master-0 kubenswrapper[7454]: I0319 11:54:19.515079 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/919daf8d-763a-44bc-8916-86b425a27cbd-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-j2w8z\" (UID: \"919daf8d-763a-44bc-8916-86b425a27cbd\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 11:54:19.518529 master-0 kubenswrapper[7454]: I0319 11:54:19.518488 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/919daf8d-763a-44bc-8916-86b425a27cbd-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-j2w8z\" (UID: \"919daf8d-763a-44bc-8916-86b425a27cbd\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 11:54:19.737099 master-0 kubenswrapper[7454]: I0319 11:54:19.737064 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d6d7c9966-r7q9d" Mar 19 11:54:19.737099 master-0 kubenswrapper[7454]: I0319 11:54:19.737083 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6484d6777d-wmpqv" Mar 19 11:54:19.737296 master-0 kubenswrapper[7454]: I0319 11:54:19.737110 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-594b8bbb67-8fxxb" Mar 19 11:54:19.772637 master-0 kubenswrapper[7454]: I0319 11:54:19.772549 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 11:54:19.799452 master-0 kubenswrapper[7454]: I0319 11:54:19.796965 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6484d6777d-wmpqv"] Mar 19 11:54:19.806253 master-0 kubenswrapper[7454]: I0319 11:54:19.806208 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-54b4cfc58b-pjsj5"] Mar 19 11:54:19.808865 master-0 kubenswrapper[7454]: I0319 11:54:19.808787 7454 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6484d6777d-wmpqv"] Mar 19 11:54:19.810438 master-0 kubenswrapper[7454]: I0319 11:54:19.810405 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54b4cfc58b-pjsj5" Mar 19 11:54:19.815398 master-0 kubenswrapper[7454]: I0319 11:54:19.815357 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 19 11:54:19.815398 master-0 kubenswrapper[7454]: I0319 11:54:19.815386 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 19 11:54:19.816245 master-0 kubenswrapper[7454]: I0319 11:54:19.816182 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 19 11:54:19.816373 master-0 kubenswrapper[7454]: I0319 11:54:19.816351 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 19 11:54:19.816575 master-0 kubenswrapper[7454]: I0319 11:54:19.816552 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54b4cfc58b-pjsj5"] Mar 19 11:54:19.816925 master-0 kubenswrapper[7454]: I0319 11:54:19.816912 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 19 11:54:19.833501 master-0 kubenswrapper[7454]: I0319 11:54:19.833448 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 19 11:54:19.835127 master-0 kubenswrapper[7454]: I0319 11:54:19.835082 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d6d7c9966-r7q9d"] Mar 19 11:54:19.837662 master-0 kubenswrapper[7454]: I0319 11:54:19.837620 7454 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d6d7c9966-r7q9d"] Mar 19 11:54:19.892225 master-0 kubenswrapper[7454]: I0319 11:54:19.892159 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-594b8bbb67-8fxxb"] Mar 19 11:54:19.932468 master-0 kubenswrapper[7454]: I0319 11:54:19.932410 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5586a731-0e66-4ed1-a49e-a7f2dfb4a805-proxy-ca-bundles\") pod \"controller-manager-54b4cfc58b-pjsj5\" (UID: \"5586a731-0e66-4ed1-a49e-a7f2dfb4a805\") " pod="openshift-controller-manager/controller-manager-54b4cfc58b-pjsj5" 
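[Editor's note] The SyncLoop DELETE/ADD/REMOVE/UPDATE entries above record a rolling replacement: pod controller-manager-6484d6777d-wmpqv is deleted while controller-manager-54b4cfc58b-pjsj5 takes its place, and the reflector.go lines show kubelet priming its ConfigMap/Secret caches for the new pod's volumes. The same pod lifecycle events can be observed from the API side with a client-go informer; a minimal sketch, with the namespace taken from the log and everything else illustrative:

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes this runs inside the cluster
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Watch only the namespace whose churn is visible in the log above.
	factory := informers.NewSharedInformerFactoryWithOptions(
		cs, 30*time.Second, informers.WithNamespace("openshift-controller-manager"))

	factory.Core().V1().Pods().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			fmt.Println("ADD", obj.(*corev1.Pod).Name)
		},
		UpdateFunc: func(_, obj interface{}) {
			fmt.Println("UPDATE", obj.(*corev1.Pod).Name)
		},
		DeleteFunc: func(obj interface{}) {
			// Deletes can arrive as tombstones, so type-assert defensively.
			if pod, ok := obj.(*corev1.Pod); ok {
				fmt.Println("DELETE", pod.Name)
			}
		},
	})

	stop := make(chan struct{})
	factory.Start(stop)
	<-stop // block; a real program would wire this to shutdown signals
}

[End note]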
Mar 19 11:54:19.932468 master-0 kubenswrapper[7454]: I0319 11:54:19.932457 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5586a731-0e66-4ed1-a49e-a7f2dfb4a805-serving-cert\") pod \"controller-manager-54b4cfc58b-pjsj5\" (UID: \"5586a731-0e66-4ed1-a49e-a7f2dfb4a805\") " pod="openshift-controller-manager/controller-manager-54b4cfc58b-pjsj5" Mar 19 11:54:19.932762 master-0 kubenswrapper[7454]: I0319 11:54:19.932671 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5586a731-0e66-4ed1-a49e-a7f2dfb4a805-client-ca\") pod \"controller-manager-54b4cfc58b-pjsj5\" (UID: \"5586a731-0e66-4ed1-a49e-a7f2dfb4a805\") " pod="openshift-controller-manager/controller-manager-54b4cfc58b-pjsj5" Mar 19 11:54:19.932762 master-0 kubenswrapper[7454]: I0319 11:54:19.932716 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qfdd\" (UniqueName: \"kubernetes.io/projected/5586a731-0e66-4ed1-a49e-a7f2dfb4a805-kube-api-access-2qfdd\") pod \"controller-manager-54b4cfc58b-pjsj5\" (UID: \"5586a731-0e66-4ed1-a49e-a7f2dfb4a805\") " pod="openshift-controller-manager/controller-manager-54b4cfc58b-pjsj5" Mar 19 11:54:19.932886 master-0 kubenswrapper[7454]: I0319 11:54:19.932844 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5586a731-0e66-4ed1-a49e-a7f2dfb4a805-config\") pod \"controller-manager-54b4cfc58b-pjsj5\" (UID: \"5586a731-0e66-4ed1-a49e-a7f2dfb4a805\") " pod="openshift-controller-manager/controller-manager-54b4cfc58b-pjsj5" Mar 19 11:54:19.933049 master-0 kubenswrapper[7454]: I0319 11:54:19.933029 7454 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25be5572-c6f3-45df-8a9d-9d6f759200ac-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:19.933107 master-0 kubenswrapper[7454]: I0319 11:54:19.933053 7454 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/25be5572-c6f3-45df-8a9d-9d6f759200ac-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:19.933107 master-0 kubenswrapper[7454]: I0319 11:54:19.933066 7454 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/921791b6-51d2-4d7c-995b-488a37f85b3f-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:20.033685 master-0 kubenswrapper[7454]: I0319 11:54:20.033569 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5586a731-0e66-4ed1-a49e-a7f2dfb4a805-client-ca\") pod \"controller-manager-54b4cfc58b-pjsj5\" (UID: \"5586a731-0e66-4ed1-a49e-a7f2dfb4a805\") " pod="openshift-controller-manager/controller-manager-54b4cfc58b-pjsj5" Mar 19 11:54:20.033685 master-0 kubenswrapper[7454]: I0319 11:54:20.033615 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qfdd\" (UniqueName: \"kubernetes.io/projected/5586a731-0e66-4ed1-a49e-a7f2dfb4a805-kube-api-access-2qfdd\") pod \"controller-manager-54b4cfc58b-pjsj5\" (UID: \"5586a731-0e66-4ed1-a49e-a7f2dfb4a805\") " pod="openshift-controller-manager/controller-manager-54b4cfc58b-pjsj5" Mar 19 11:54:20.033685 master-0 kubenswrapper[7454]: I0319 
11:54:20.033665 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5586a731-0e66-4ed1-a49e-a7f2dfb4a805-config\") pod \"controller-manager-54b4cfc58b-pjsj5\" (UID: \"5586a731-0e66-4ed1-a49e-a7f2dfb4a805\") " pod="openshift-controller-manager/controller-manager-54b4cfc58b-pjsj5" Mar 19 11:54:20.033950 master-0 kubenswrapper[7454]: I0319 11:54:20.033764 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5586a731-0e66-4ed1-a49e-a7f2dfb4a805-proxy-ca-bundles\") pod \"controller-manager-54b4cfc58b-pjsj5\" (UID: \"5586a731-0e66-4ed1-a49e-a7f2dfb4a805\") " pod="openshift-controller-manager/controller-manager-54b4cfc58b-pjsj5" Mar 19 11:54:20.033950 master-0 kubenswrapper[7454]: I0319 11:54:20.033786 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5586a731-0e66-4ed1-a49e-a7f2dfb4a805-serving-cert\") pod \"controller-manager-54b4cfc58b-pjsj5\" (UID: \"5586a731-0e66-4ed1-a49e-a7f2dfb4a805\") " pod="openshift-controller-manager/controller-manager-54b4cfc58b-pjsj5" Mar 19 11:54:20.035326 master-0 kubenswrapper[7454]: I0319 11:54:20.035289 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5586a731-0e66-4ed1-a49e-a7f2dfb4a805-proxy-ca-bundles\") pod \"controller-manager-54b4cfc58b-pjsj5\" (UID: \"5586a731-0e66-4ed1-a49e-a7f2dfb4a805\") " pod="openshift-controller-manager/controller-manager-54b4cfc58b-pjsj5" Mar 19 11:54:20.035546 master-0 kubenswrapper[7454]: I0319 11:54:20.035483 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5586a731-0e66-4ed1-a49e-a7f2dfb4a805-config\") pod \"controller-manager-54b4cfc58b-pjsj5\" (UID: \"5586a731-0e66-4ed1-a49e-a7f2dfb4a805\") " pod="openshift-controller-manager/controller-manager-54b4cfc58b-pjsj5" Mar 19 11:54:20.036023 master-0 kubenswrapper[7454]: I0319 11:54:20.035988 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5586a731-0e66-4ed1-a49e-a7f2dfb4a805-client-ca\") pod \"controller-manager-54b4cfc58b-pjsj5\" (UID: \"5586a731-0e66-4ed1-a49e-a7f2dfb4a805\") " pod="openshift-controller-manager/controller-manager-54b4cfc58b-pjsj5" Mar 19 11:54:20.037052 master-0 kubenswrapper[7454]: I0319 11:54:20.037014 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5586a731-0e66-4ed1-a49e-a7f2dfb4a805-serving-cert\") pod \"controller-manager-54b4cfc58b-pjsj5\" (UID: \"5586a731-0e66-4ed1-a49e-a7f2dfb4a805\") " pod="openshift-controller-manager/controller-manager-54b4cfc58b-pjsj5" Mar 19 11:54:20.449778 master-0 kubenswrapper[7454]: I0319 11:54:20.448729 7454 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-594b8bbb67-8fxxb"] Mar 19 11:54:20.466661 master-0 kubenswrapper[7454]: I0319 11:54:20.466615 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qfdd\" (UniqueName: \"kubernetes.io/projected/5586a731-0e66-4ed1-a49e-a7f2dfb4a805-kube-api-access-2qfdd\") pod \"controller-manager-54b4cfc58b-pjsj5\" (UID: \"5586a731-0e66-4ed1-a49e-a7f2dfb4a805\") " pod="openshift-controller-manager/controller-manager-54b4cfc58b-pjsj5" Mar 19 11:54:20.639037 master-0 
kubenswrapper[7454]: I0319 11:54:20.638205 7454 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-audit\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:20.639673 master-0 kubenswrapper[7454]: I0319 11:54:20.639239 7454 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65b01c13-2eb4-4820-b3c6-0b45da4ffca5-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:20.639673 master-0 kubenswrapper[7454]: I0319 11:54:20.638425 7454 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25be5572-c6f3-45df-8a9d-9d6f759200ac" path="/var/lib/kubelet/pods/25be5572-c6f3-45df-8a9d-9d6f759200ac/volumes" Mar 19 11:54:20.639985 master-0 kubenswrapper[7454]: I0319 11:54:20.639949 7454 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65b01c13-2eb4-4820-b3c6-0b45da4ffca5" path="/var/lib/kubelet/pods/65b01c13-2eb4-4820-b3c6-0b45da4ffca5/volumes" Mar 19 11:54:20.640403 master-0 kubenswrapper[7454]: I0319 11:54:20.640369 7454 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="921791b6-51d2-4d7c-995b-488a37f85b3f" path="/var/lib/kubelet/pods/921791b6-51d2-4d7c-995b-488a37f85b3f/volumes" Mar 19 11:54:20.734784 master-0 kubenswrapper[7454]: I0319 11:54:20.734678 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54b4cfc58b-pjsj5" Mar 19 11:54:20.983205 master-0 kubenswrapper[7454]: I0319 11:54:20.982885 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 19 11:54:20.999720 master-0 kubenswrapper[7454]: W0319 11:54:20.999466 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod11f83dfb_da04_483f_b281_ebdb39f3ab27.slice/crio-8bc9b9c94d7c2fc35e88bdf943a6e373d9be7c1dc5c7edff2198406e6c44db25 WatchSource:0}: Error finding container 8bc9b9c94d7c2fc35e88bdf943a6e373d9be7c1dc5c7edff2198406e6c44db25: Status 404 returned error can't find the container with id 8bc9b9c94d7c2fc35e88bdf943a6e373d9be7c1dc5c7edff2198406e6c44db25 Mar 19 11:54:21.079399 master-0 kubenswrapper[7454]: I0319 11:54:21.079356 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd"] Mar 19 11:54:21.110029 master-0 kubenswrapper[7454]: I0319 11:54:21.107276 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z"] Mar 19 11:54:21.120292 master-0 kubenswrapper[7454]: I0319 11:54:21.120256 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r"] Mar 19 11:54:21.125829 master-0 kubenswrapper[7454]: W0319 11:54:21.125584 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod919daf8d_763a_44bc_8916_86b425a27cbd.slice/crio-3f20a730c4d5f1f1345d78c2bd60c5b238848ecf855493b53e0f599fc51845ac WatchSource:0}: Error finding container 3f20a730c4d5f1f1345d78c2bd60c5b238848ecf855493b53e0f599fc51845ac: Status 404 returned error can't find the container with id 3f20a730c4d5f1f1345d78c2bd60c5b238848ecf855493b53e0f599fc51845ac Mar 19 11:54:21.145316 master-0 kubenswrapper[7454]: W0319 11:54:21.145165 7454 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod979ba8cc_5a7b_4188_bf9e_c22d810888e9.slice/crio-d29fd7441baad9596ad5ac5569da64fe277e18af3046f4e5da7f49044fe8fd7f WatchSource:0}: Error finding container d29fd7441baad9596ad5ac5569da64fe277e18af3046f4e5da7f49044fe8fd7f: Status 404 returned error can't find the container with id d29fd7441baad9596ad5ac5569da64fe277e18af3046f4e5da7f49044fe8fd7f Mar 19 11:54:21.190319 master-0 kubenswrapper[7454]: I0319 11:54:21.189829 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 19 11:54:21.190851 master-0 kubenswrapper[7454]: I0319 11:54:21.190423 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 19 11:54:21.211955 master-0 kubenswrapper[7454]: I0319 11:54:21.211821 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 19 11:54:21.245710 master-0 kubenswrapper[7454]: I0319 11:54:21.245655 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3b3768b-e1fc-4b91-9046-c1e43c6b8134-kube-api-access\") pod \"installer-2-master-0\" (UID: \"b3b3768b-e1fc-4b91-9046-c1e43c6b8134\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 19 11:54:21.245968 master-0 kubenswrapper[7454]: I0319 11:54:21.245945 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b3b3768b-e1fc-4b91-9046-c1e43c6b8134-var-lock\") pod \"installer-2-master-0\" (UID: \"b3b3768b-e1fc-4b91-9046-c1e43c6b8134\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 19 11:54:21.246037 master-0 kubenswrapper[7454]: I0319 11:54:21.246016 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3b3768b-e1fc-4b91-9046-c1e43c6b8134-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"b3b3768b-e1fc-4b91-9046-c1e43c6b8134\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 19 11:54:21.248977 master-0 kubenswrapper[7454]: I0319 11:54:21.248553 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54b4cfc58b-pjsj5"] Mar 19 11:54:21.275210 master-0 kubenswrapper[7454]: I0319 11:54:21.275174 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-dc5br"] Mar 19 11:54:21.275755 master-0 kubenswrapper[7454]: I0319 11:54:21.275735 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.348035 master-0 kubenswrapper[7454]: I0319 11:54:21.347636 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-kubernetes\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.348243 master-0 kubenswrapper[7454]: I0319 11:54:21.348049 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-sysctl-conf\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.348243 master-0 kubenswrapper[7454]: I0319 11:54:21.348105 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-sysctl-d\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.348243 master-0 kubenswrapper[7454]: I0319 11:54:21.348131 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3b3768b-e1fc-4b91-9046-c1e43c6b8134-kube-api-access\") pod \"installer-2-master-0\" (UID: \"b3b3768b-e1fc-4b91-9046-c1e43c6b8134\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 19 11:54:21.348243 master-0 kubenswrapper[7454]: I0319 11:54:21.348168 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-lib-modules\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.348243 master-0 kubenswrapper[7454]: I0319 11:54:21.348187 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e559e487-18b0-4622-92fa-d06e7397b312-tmp\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.348495 master-0 kubenswrapper[7454]: I0319 11:54:21.348356 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-sys\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.348495 master-0 kubenswrapper[7454]: I0319 11:54:21.348410 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4p7s\" (UniqueName: \"kubernetes.io/projected/e559e487-18b0-4622-92fa-d06e7397b312-kube-api-access-c4p7s\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.348495 master-0 kubenswrapper[7454]: I0319 11:54:21.348426 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: 
\"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-systemd\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.348495 master-0 kubenswrapper[7454]: I0319 11:54:21.348441 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-var-lib-kubelet\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.348495 master-0 kubenswrapper[7454]: I0319 11:54:21.348464 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-run\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.348495 master-0 kubenswrapper[7454]: I0319 11:54:21.348486 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-host\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.348495 master-0 kubenswrapper[7454]: I0319 11:54:21.348500 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-modprobe-d\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.348866 master-0 kubenswrapper[7454]: I0319 11:54:21.348516 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/e559e487-18b0-4622-92fa-d06e7397b312-etc-tuned\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.348866 master-0 kubenswrapper[7454]: I0319 11:54:21.348534 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-sysconfig\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.348866 master-0 kubenswrapper[7454]: I0319 11:54:21.348554 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b3b3768b-e1fc-4b91-9046-c1e43c6b8134-var-lock\") pod \"installer-2-master-0\" (UID: \"b3b3768b-e1fc-4b91-9046-c1e43c6b8134\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 19 11:54:21.348866 master-0 kubenswrapper[7454]: I0319 11:54:21.348572 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3b3768b-e1fc-4b91-9046-c1e43c6b8134-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"b3b3768b-e1fc-4b91-9046-c1e43c6b8134\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 19 11:54:21.348866 master-0 kubenswrapper[7454]: I0319 11:54:21.348643 7454 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3b3768b-e1fc-4b91-9046-c1e43c6b8134-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"b3b3768b-e1fc-4b91-9046-c1e43c6b8134\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 19 11:54:21.348866 master-0 kubenswrapper[7454]: I0319 11:54:21.348718 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b3b3768b-e1fc-4b91-9046-c1e43c6b8134-var-lock\") pod \"installer-2-master-0\" (UID: \"b3b3768b-e1fc-4b91-9046-c1e43c6b8134\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 19 11:54:21.379698 master-0 kubenswrapper[7454]: I0319 11:54:21.378740 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3b3768b-e1fc-4b91-9046-c1e43c6b8134-kube-api-access\") pod \"installer-2-master-0\" (UID: \"b3b3768b-e1fc-4b91-9046-c1e43c6b8134\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 19 11:54:21.449934 master-0 kubenswrapper[7454]: I0319 11:54:21.449239 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-sysconfig\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.449934 master-0 kubenswrapper[7454]: I0319 11:54:21.449411 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-sysconfig\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.449934 master-0 kubenswrapper[7454]: I0319 11:54:21.449474 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-kubernetes\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.449934 master-0 kubenswrapper[7454]: I0319 11:54:21.449544 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-sysctl-conf\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.449934 master-0 kubenswrapper[7454]: I0319 11:54:21.449570 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-sysctl-d\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.449934 master-0 kubenswrapper[7454]: I0319 11:54:21.449615 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-lib-modules\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.449934 master-0 kubenswrapper[7454]: I0319 11:54:21.449636 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/e559e487-18b0-4622-92fa-d06e7397b312-tmp\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.449934 master-0 kubenswrapper[7454]: I0319 11:54:21.449686 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-sys\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.449934 master-0 kubenswrapper[7454]: I0319 11:54:21.449739 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4p7s\" (UniqueName: \"kubernetes.io/projected/e559e487-18b0-4622-92fa-d06e7397b312-kube-api-access-c4p7s\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.449934 master-0 kubenswrapper[7454]: I0319 11:54:21.449762 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-systemd\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.449934 master-0 kubenswrapper[7454]: I0319 11:54:21.449782 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-var-lib-kubelet\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.449934 master-0 kubenswrapper[7454]: I0319 11:54:21.449836 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-run\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.449934 master-0 kubenswrapper[7454]: I0319 11:54:21.449872 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-host\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.449934 master-0 kubenswrapper[7454]: I0319 11:54:21.449896 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-modprobe-d\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.449934 master-0 kubenswrapper[7454]: I0319 11:54:21.449920 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/e559e487-18b0-4622-92fa-d06e7397b312-etc-tuned\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.450744 master-0 kubenswrapper[7454]: I0319 11:54:21.450483 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-kubernetes\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.450820 master-0 kubenswrapper[7454]: I0319 11:54:21.450740 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-sysctl-conf\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.450961 master-0 kubenswrapper[7454]: I0319 11:54:21.450915 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-systemd\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.451027 master-0 kubenswrapper[7454]: I0319 11:54:21.451004 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-var-lib-kubelet\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.451902 master-0 kubenswrapper[7454]: I0319 11:54:21.451135 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-run\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.451902 master-0 kubenswrapper[7454]: I0319 11:54:21.451179 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-host\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.451902 master-0 kubenswrapper[7454]: I0319 11:54:21.451343 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-modprobe-d\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.451902 master-0 kubenswrapper[7454]: I0319 11:54:21.451425 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-lib-modules\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.451902 master-0 kubenswrapper[7454]: I0319 11:54:21.451466 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-sysctl-d\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.451902 master-0 kubenswrapper[7454]: I0319 11:54:21.451580 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-sys\") pod \"tuned-dc5br\" (UID: 
\"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.461260 master-0 kubenswrapper[7454]: I0319 11:54:21.460759 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/e559e487-18b0-4622-92fa-d06e7397b312-etc-tuned\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.466076 master-0 kubenswrapper[7454]: I0319 11:54:21.466039 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e559e487-18b0-4622-92fa-d06e7397b312-tmp\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.470348 master-0 kubenswrapper[7454]: I0319 11:54:21.469592 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4p7s\" (UniqueName: \"kubernetes.io/projected/e559e487-18b0-4622-92fa-d06e7397b312-kube-api-access-c4p7s\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.533700 master-0 kubenswrapper[7454]: I0319 11:54:21.531718 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 19 11:54:21.670931 master-0 kubenswrapper[7454]: I0319 11:54:21.670868 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 11:54:21.864025 master-0 kubenswrapper[7454]: I0319 11:54:21.863986 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-897cc986b-vpg2l"] Mar 19 11:54:21.864826 master-0 kubenswrapper[7454]: I0319 11:54:21.864783 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:21.868856 master-0 kubenswrapper[7454]: I0319 11:54:21.865926 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f9d586bf8-khff4"] Mar 19 11:54:21.868856 master-0 kubenswrapper[7454]: I0319 11:54:21.866823 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f9d586bf8-khff4" Mar 19 11:54:21.868856 master-0 kubenswrapper[7454]: I0319 11:54:21.867354 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 19 11:54:21.868856 master-0 kubenswrapper[7454]: I0319 11:54:21.868441 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 19 11:54:21.868856 master-0 kubenswrapper[7454]: I0319 11:54:21.868593 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 19 11:54:21.868856 master-0 kubenswrapper[7454]: I0319 11:54:21.868737 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 19 11:54:21.869165 master-0 kubenswrapper[7454]: I0319 11:54:21.868960 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 19 11:54:21.869165 master-0 kubenswrapper[7454]: I0319 11:54:21.869101 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 19 11:54:21.872181 master-0 kubenswrapper[7454]: I0319 11:54:21.869250 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 19 11:54:21.872181 master-0 kubenswrapper[7454]: I0319 11:54:21.869436 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 19 11:54:21.872181 master-0 kubenswrapper[7454]: I0319 11:54:21.869537 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 19 11:54:21.872181 master-0 kubenswrapper[7454]: I0319 11:54:21.869586 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 19 11:54:21.872181 master-0 kubenswrapper[7454]: I0319 11:54:21.869774 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 19 11:54:21.872181 master-0 kubenswrapper[7454]: I0319 11:54:21.869859 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 19 11:54:21.872181 master-0 kubenswrapper[7454]: I0319 11:54:21.870072 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 19 11:54:21.872181 master-0 kubenswrapper[7454]: I0319 11:54:21.870423 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 19 11:54:21.872181 master-0 kubenswrapper[7454]: I0319 11:54:21.870841 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54b4cfc58b-pjsj5" event={"ID":"5586a731-0e66-4ed1-a49e-a7f2dfb4a805","Type":"ContainerStarted","Data":"b54704a4cddbd896cf2a6a351c9a09473ff5d720f7719001acfafe762110baa6"} Mar 19 11:54:21.874064 master-0 kubenswrapper[7454]: I0319 11:54:21.874031 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 19 11:54:21.875982 master-0 kubenswrapper[7454]: I0319 11:54:21.875931 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v" 
event={"ID":"85912908-c447-4868-871b-82c5eadbfdbe","Type":"ContainerStarted","Data":"63735c56ea8ab8ee9516357a36132132901d193c0ab862bebb886c53a74fc8ca"} Mar 19 11:54:22.314361 master-0 kubenswrapper[7454]: I0319 11:54:21.891664 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-897cc986b-vpg2l"] Mar 19 11:54:22.314361 master-0 kubenswrapper[7454]: I0319 11:54:21.891704 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f9d586bf8-khff4"] Mar 19 11:54:22.314361 master-0 kubenswrapper[7454]: I0319 11:54:21.893695 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-9c5679d8f-z6kvm" event={"ID":"ab54833d-e57b-479d-b171-68155f6566f1","Type":"ContainerStarted","Data":"c43c493f38104509cfb5708b9cebefc1abc98e8d691cd10e65b4d9e3690268c7"} Mar 19 11:54:22.314361 master-0 kubenswrapper[7454]: I0319 11:54:21.893720 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-9c5679d8f-z6kvm" event={"ID":"ab54833d-e57b-479d-b171-68155f6566f1","Type":"ContainerStarted","Data":"aa406412e7480c5c0c2c74f7645d58cea47aa01587bafbffb9108e78bf66ced5"} Mar 19 11:54:22.314361 master-0 kubenswrapper[7454]: I0319 11:54:21.896218 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" event={"ID":"5238840f-3bef-43ad-ae68-ac187f073019","Type":"ContainerStarted","Data":"387948abcb2cbae673b88cb3d7a8d043f5ef4d37ef318a38ca6b5a6a836dff73"} Mar 19 11:54:22.314361 master-0 kubenswrapper[7454]: I0319 11:54:21.896286 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" event={"ID":"5238840f-3bef-43ad-ae68-ac187f073019","Type":"ContainerStarted","Data":"1da3868b3838b62f3e5d20f215a32847d5bb12874480e83fc7036c9466a82c5e"} Mar 19 11:54:22.314361 master-0 kubenswrapper[7454]: I0319 11:54:21.897607 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" event={"ID":"63c12a89-1b49-4eba-8f5a-551b10d2246b","Type":"ContainerStarted","Data":"6ac8e8579c0ebfdadc086759fbdf40ea5414eafee2c9bc39524d9d89f97caa57"} Mar 19 11:54:22.314361 master-0 kubenswrapper[7454]: I0319 11:54:21.899634 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6" event={"ID":"82b98dca-59f9-42be-94ca-4a2a2b6fea0f","Type":"ContainerStarted","Data":"0cad0d98b9796d68dc73696fd20f929b07466599b9df93138fb58cbadddd23fb"} Mar 19 11:54:22.314361 master-0 kubenswrapper[7454]: I0319 11:54:21.900878 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" event={"ID":"979ba8cc-5a7b-4188-bf9e-c22d810888e9","Type":"ContainerStarted","Data":"d29fd7441baad9596ad5ac5569da64fe277e18af3046f4e5da7f49044fe8fd7f"} Mar 19 11:54:22.314361 master-0 kubenswrapper[7454]: I0319 11:54:21.902111 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" event={"ID":"919daf8d-763a-44bc-8916-86b425a27cbd","Type":"ContainerStarted","Data":"d2d6a8de6820bede4da4ce3d6a3c3b9da035057124818c5512ee72e31ef2f19c"} Mar 19 11:54:22.314361 master-0 kubenswrapper[7454]: I0319 11:54:21.902135 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" event={"ID":"919daf8d-763a-44bc-8916-86b425a27cbd","Type":"ContainerStarted","Data":"3f20a730c4d5f1f1345d78c2bd60c5b238848ecf855493b53e0f599fc51845ac"} Mar 19 11:54:22.314361 master-0 kubenswrapper[7454]: I0319 11:54:21.903237 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"11f83dfb-da04-483f-b281-ebdb39f3ab27","Type":"ContainerStarted","Data":"b09cf9e92d522e2b105a0b4a4e50ff7409083b9260caed07cdd2a78e778f9e16"} Mar 19 11:54:22.314361 master-0 kubenswrapper[7454]: I0319 11:54:21.903260 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"11f83dfb-da04-483f-b281-ebdb39f3ab27","Type":"ContainerStarted","Data":"8bc9b9c94d7c2fc35e88bdf943a6e373d9be7c1dc5c7edff2198406e6c44db25"} Mar 19 11:54:22.320046 master-0 kubenswrapper[7454]: I0319 11:54:22.319986 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/13503fef-09b2-4dbe-9537-a5b361e7b591-encryption-config\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:22.320186 master-0 kubenswrapper[7454]: I0319 11:54:22.320078 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/13503fef-09b2-4dbe-9537-a5b361e7b591-node-pullsecrets\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:22.320186 master-0 kubenswrapper[7454]: I0319 11:54:22.320102 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-image-import-ca\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:22.320186 master-0 kubenswrapper[7454]: I0319 11:54:22.320139 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13503fef-09b2-4dbe-9537-a5b361e7b591-audit-dir\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:22.320186 master-0 kubenswrapper[7454]: I0319 11:54:22.320164 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgdlc\" (UniqueName: \"kubernetes.io/projected/13503fef-09b2-4dbe-9537-a5b361e7b591-kube-api-access-mgdlc\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:22.320415 master-0 kubenswrapper[7454]: I0319 11:54:22.320249 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-audit\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:22.320415 master-0 kubenswrapper[7454]: I0319 11:54:22.320283 7454 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-config\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:22.320415 master-0 kubenswrapper[7454]: I0319 11:54:22.320308 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13503fef-09b2-4dbe-9537-a5b361e7b591-serving-cert\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:22.320415 master-0 kubenswrapper[7454]: I0319 11:54:22.320351 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-etcd-serving-ca\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:22.320415 master-0 kubenswrapper[7454]: I0319 11:54:22.320403 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/13503fef-09b2-4dbe-9537-a5b361e7b591-etcd-client\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:22.320590 master-0 kubenswrapper[7454]: I0319 11:54:22.320434 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-trusted-ca-bundle\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:22.333350 master-0 kubenswrapper[7454]: I0319 11:54:22.333257 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 19 11:54:22.346250 master-0 kubenswrapper[7454]: W0319 11:54:22.346201 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb3b3768b_e1fc_4b91_9046_c1e43c6b8134.slice/crio-9a073e35be6d59a9851a5c060772899f43a09f7bb2ad8d779ede0b7fe0c488a3 WatchSource:0}: Error finding container 9a073e35be6d59a9851a5c060772899f43a09f7bb2ad8d779ede0b7fe0c488a3: Status 404 returned error can't find the container with id 9a073e35be6d59a9851a5c060772899f43a09f7bb2ad8d779ede0b7fe0c488a3 Mar 19 11:54:22.381998 master-0 kubenswrapper[7454]: I0319 11:54:22.381878 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-zjdkm"] Mar 19 11:54:22.382552 master-0 kubenswrapper[7454]: I0319 11:54:22.382528 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-zjdkm" Mar 19 11:54:22.385318 master-0 kubenswrapper[7454]: I0319 11:54:22.384285 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 19 11:54:22.385318 master-0 kubenswrapper[7454]: I0319 11:54:22.384502 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 19 11:54:22.385318 master-0 kubenswrapper[7454]: I0319 11:54:22.384667 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 19 11:54:22.385318 master-0 kubenswrapper[7454]: I0319 11:54:22.384973 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 19 11:54:22.396634 master-0 kubenswrapper[7454]: I0319 11:54:22.396113 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-zjdkm"] Mar 19 11:54:22.411498 master-0 kubenswrapper[7454]: I0319 11:54:22.411444 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-1-master-0" podStartSLOduration=9.411425282 podStartE2EDuration="9.411425282s" podCreationTimestamp="2026-03-19 11:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 11:54:22.411206405 +0000 UTC m=+32.041672318" watchObservedRunningTime="2026-03-19 11:54:22.411425282 +0000 UTC m=+32.041891195" Mar 19 11:54:22.421901 master-0 kubenswrapper[7454]: I0319 11:54:22.421421 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06726494-b3aa-45f2-9b1f-5ee0ea45275e-serving-cert\") pod \"route-controller-manager-7f9d586bf8-khff4\" (UID: \"06726494-b3aa-45f2-9b1f-5ee0ea45275e\") " pod="openshift-route-controller-manager/route-controller-manager-7f9d586bf8-khff4" Mar 19 11:54:22.421901 master-0 kubenswrapper[7454]: I0319 11:54:22.421510 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-audit\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:22.421901 master-0 kubenswrapper[7454]: I0319 11:54:22.421539 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-config\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:22.421901 master-0 kubenswrapper[7454]: I0319 11:54:22.421567 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13503fef-09b2-4dbe-9537-a5b361e7b591-serving-cert\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:22.421901 master-0 kubenswrapper[7454]: I0319 11:54:22.421593 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-etcd-serving-ca\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " 
pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:22.421901 master-0 kubenswrapper[7454]: I0319 11:54:22.421617 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/13503fef-09b2-4dbe-9537-a5b361e7b591-etcd-client\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:22.421901 master-0 kubenswrapper[7454]: I0319 11:54:22.421648 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06726494-b3aa-45f2-9b1f-5ee0ea45275e-config\") pod \"route-controller-manager-7f9d586bf8-khff4\" (UID: \"06726494-b3aa-45f2-9b1f-5ee0ea45275e\") " pod="openshift-route-controller-manager/route-controller-manager-7f9d586bf8-khff4" Mar 19 11:54:22.421901 master-0 kubenswrapper[7454]: I0319 11:54:22.421669 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-trusted-ca-bundle\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:22.421901 master-0 kubenswrapper[7454]: I0319 11:54:22.421692 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06726494-b3aa-45f2-9b1f-5ee0ea45275e-client-ca\") pod \"route-controller-manager-7f9d586bf8-khff4\" (UID: \"06726494-b3aa-45f2-9b1f-5ee0ea45275e\") " pod="openshift-route-controller-manager/route-controller-manager-7f9d586bf8-khff4" Mar 19 11:54:22.421901 master-0 kubenswrapper[7454]: I0319 11:54:22.421716 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8jrq\" (UniqueName: \"kubernetes.io/projected/06726494-b3aa-45f2-9b1f-5ee0ea45275e-kube-api-access-j8jrq\") pod \"route-controller-manager-7f9d586bf8-khff4\" (UID: \"06726494-b3aa-45f2-9b1f-5ee0ea45275e\") " pod="openshift-route-controller-manager/route-controller-manager-7f9d586bf8-khff4" Mar 19 11:54:22.421901 master-0 kubenswrapper[7454]: I0319 11:54:22.421791 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/13503fef-09b2-4dbe-9537-a5b361e7b591-encryption-config\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:22.421901 master-0 kubenswrapper[7454]: I0319 11:54:22.421845 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/13503fef-09b2-4dbe-9537-a5b361e7b591-node-pullsecrets\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:22.421901 master-0 kubenswrapper[7454]: I0319 11:54:22.421873 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-image-import-ca\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:22.421901 master-0 kubenswrapper[7454]: I0319 
11:54:22.421911 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13503fef-09b2-4dbe-9537-a5b361e7b591-audit-dir\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:22.422676 master-0 kubenswrapper[7454]: I0319 11:54:22.421951 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgdlc\" (UniqueName: \"kubernetes.io/projected/13503fef-09b2-4dbe-9537-a5b361e7b591-kube-api-access-mgdlc\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:22.423285 master-0 kubenswrapper[7454]: I0319 11:54:22.422887 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/13503fef-09b2-4dbe-9537-a5b361e7b591-node-pullsecrets\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:22.424208 master-0 kubenswrapper[7454]: I0319 11:54:22.424159 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13503fef-09b2-4dbe-9537-a5b361e7b591-audit-dir\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:22.424437 master-0 kubenswrapper[7454]: I0319 11:54:22.424272 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-trusted-ca-bundle\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:22.427326 master-0 kubenswrapper[7454]: I0319 11:54:22.427284 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-audit\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:22.434333 master-0 kubenswrapper[7454]: I0319 11:54:22.431536 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-image-import-ca\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:22.434333 master-0 kubenswrapper[7454]: I0319 11:54:22.432484 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-config\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:22.434333 master-0 kubenswrapper[7454]: I0319 11:54:22.432635 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-etcd-serving-ca\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:22.435476 master-0 
kubenswrapper[7454]: I0319 11:54:22.435365 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/13503fef-09b2-4dbe-9537-a5b361e7b591-encryption-config\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:22.452405 master-0 kubenswrapper[7454]: I0319 11:54:22.451773 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/13503fef-09b2-4dbe-9537-a5b361e7b591-etcd-client\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:22.452405 master-0 kubenswrapper[7454]: I0319 11:54:22.452256 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13503fef-09b2-4dbe-9537-a5b361e7b591-serving-cert\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:22.454844 master-0 kubenswrapper[7454]: I0319 11:54:22.454759 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgdlc\" (UniqueName: \"kubernetes.io/projected/13503fef-09b2-4dbe-9537-a5b361e7b591-kube-api-access-mgdlc\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:22.522906 master-0 kubenswrapper[7454]: I0319 11:54:22.522708 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f236a5ab-b400-46fc-94ee-1fff476d6458-config-volume\") pod \"dns-default-zjdkm\" (UID: \"f236a5ab-b400-46fc-94ee-1fff476d6458\") " pod="openshift-dns/dns-default-zjdkm" Mar 19 11:54:22.522906 master-0 kubenswrapper[7454]: I0319 11:54:22.522823 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06726494-b3aa-45f2-9b1f-5ee0ea45275e-serving-cert\") pod \"route-controller-manager-7f9d586bf8-khff4\" (UID: \"06726494-b3aa-45f2-9b1f-5ee0ea45275e\") " pod="openshift-route-controller-manager/route-controller-manager-7f9d586bf8-khff4" Mar 19 11:54:22.522906 master-0 kubenswrapper[7454]: I0319 11:54:22.522863 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f236a5ab-b400-46fc-94ee-1fff476d6458-metrics-tls\") pod \"dns-default-zjdkm\" (UID: \"f236a5ab-b400-46fc-94ee-1fff476d6458\") " pod="openshift-dns/dns-default-zjdkm" Mar 19 11:54:22.524060 master-0 kubenswrapper[7454]: I0319 11:54:22.523351 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06726494-b3aa-45f2-9b1f-5ee0ea45275e-config\") pod \"route-controller-manager-7f9d586bf8-khff4\" (UID: \"06726494-b3aa-45f2-9b1f-5ee0ea45275e\") " pod="openshift-route-controller-manager/route-controller-manager-7f9d586bf8-khff4" Mar 19 11:54:22.524060 master-0 kubenswrapper[7454]: I0319 11:54:22.523590 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06726494-b3aa-45f2-9b1f-5ee0ea45275e-client-ca\") pod \"route-controller-manager-7f9d586bf8-khff4\" (UID: 
\"06726494-b3aa-45f2-9b1f-5ee0ea45275e\") " pod="openshift-route-controller-manager/route-controller-manager-7f9d586bf8-khff4" Mar 19 11:54:22.524060 master-0 kubenswrapper[7454]: I0319 11:54:22.523690 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps4k8\" (UniqueName: \"kubernetes.io/projected/f236a5ab-b400-46fc-94ee-1fff476d6458-kube-api-access-ps4k8\") pod \"dns-default-zjdkm\" (UID: \"f236a5ab-b400-46fc-94ee-1fff476d6458\") " pod="openshift-dns/dns-default-zjdkm" Mar 19 11:54:22.524060 master-0 kubenswrapper[7454]: I0319 11:54:22.523750 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8jrq\" (UniqueName: \"kubernetes.io/projected/06726494-b3aa-45f2-9b1f-5ee0ea45275e-kube-api-access-j8jrq\") pod \"route-controller-manager-7f9d586bf8-khff4\" (UID: \"06726494-b3aa-45f2-9b1f-5ee0ea45275e\") " pod="openshift-route-controller-manager/route-controller-manager-7f9d586bf8-khff4" Mar 19 11:54:22.524711 master-0 kubenswrapper[7454]: I0319 11:54:22.524685 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06726494-b3aa-45f2-9b1f-5ee0ea45275e-config\") pod \"route-controller-manager-7f9d586bf8-khff4\" (UID: \"06726494-b3aa-45f2-9b1f-5ee0ea45275e\") " pod="openshift-route-controller-manager/route-controller-manager-7f9d586bf8-khff4" Mar 19 11:54:22.527758 master-0 kubenswrapper[7454]: I0319 11:54:22.527734 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06726494-b3aa-45f2-9b1f-5ee0ea45275e-client-ca\") pod \"route-controller-manager-7f9d586bf8-khff4\" (UID: \"06726494-b3aa-45f2-9b1f-5ee0ea45275e\") " pod="openshift-route-controller-manager/route-controller-manager-7f9d586bf8-khff4" Mar 19 11:54:22.527977 master-0 kubenswrapper[7454]: I0319 11:54:22.527934 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06726494-b3aa-45f2-9b1f-5ee0ea45275e-serving-cert\") pod \"route-controller-manager-7f9d586bf8-khff4\" (UID: \"06726494-b3aa-45f2-9b1f-5ee0ea45275e\") " pod="openshift-route-controller-manager/route-controller-manager-7f9d586bf8-khff4" Mar 19 11:54:22.544775 master-0 kubenswrapper[7454]: I0319 11:54:22.544718 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:22.550284 master-0 kubenswrapper[7454]: I0319 11:54:22.550240 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8jrq\" (UniqueName: \"kubernetes.io/projected/06726494-b3aa-45f2-9b1f-5ee0ea45275e-kube-api-access-j8jrq\") pod \"route-controller-manager-7f9d586bf8-khff4\" (UID: \"06726494-b3aa-45f2-9b1f-5ee0ea45275e\") " pod="openshift-route-controller-manager/route-controller-manager-7f9d586bf8-khff4" Mar 19 11:54:22.627458 master-0 kubenswrapper[7454]: I0319 11:54:22.627397 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f236a5ab-b400-46fc-94ee-1fff476d6458-metrics-tls\") pod \"dns-default-zjdkm\" (UID: \"f236a5ab-b400-46fc-94ee-1fff476d6458\") " pod="openshift-dns/dns-default-zjdkm" Mar 19 11:54:22.627697 master-0 kubenswrapper[7454]: I0319 11:54:22.627503 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps4k8\" (UniqueName: \"kubernetes.io/projected/f236a5ab-b400-46fc-94ee-1fff476d6458-kube-api-access-ps4k8\") pod \"dns-default-zjdkm\" (UID: \"f236a5ab-b400-46fc-94ee-1fff476d6458\") " pod="openshift-dns/dns-default-zjdkm" Mar 19 11:54:22.627697 master-0 kubenswrapper[7454]: I0319 11:54:22.627658 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f236a5ab-b400-46fc-94ee-1fff476d6458-config-volume\") pod \"dns-default-zjdkm\" (UID: \"f236a5ab-b400-46fc-94ee-1fff476d6458\") " pod="openshift-dns/dns-default-zjdkm" Mar 19 11:54:22.629571 master-0 kubenswrapper[7454]: I0319 11:54:22.629540 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f236a5ab-b400-46fc-94ee-1fff476d6458-config-volume\") pod \"dns-default-zjdkm\" (UID: \"f236a5ab-b400-46fc-94ee-1fff476d6458\") " pod="openshift-dns/dns-default-zjdkm" Mar 19 11:54:22.642955 master-0 kubenswrapper[7454]: I0319 11:54:22.642915 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f236a5ab-b400-46fc-94ee-1fff476d6458-metrics-tls\") pod \"dns-default-zjdkm\" (UID: \"f236a5ab-b400-46fc-94ee-1fff476d6458\") " pod="openshift-dns/dns-default-zjdkm" Mar 19 11:54:22.644402 master-0 kubenswrapper[7454]: I0319 11:54:22.644293 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps4k8\" (UniqueName: \"kubernetes.io/projected/f236a5ab-b400-46fc-94ee-1fff476d6458-kube-api-access-ps4k8\") pod \"dns-default-zjdkm\" (UID: \"f236a5ab-b400-46fc-94ee-1fff476d6458\") " pod="openshift-dns/dns-default-zjdkm" Mar 19 11:54:22.966117 master-0 kubenswrapper[7454]: I0319 11:54:22.965000 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" event={"ID":"5238840f-3bef-43ad-ae68-ac187f073019","Type":"ContainerStarted","Data":"26af4f815b89151cf9e6736b0f3e5cae3271189cf3655ba8cc103790de35f969"} Mar 19 11:54:22.966117 master-0 kubenswrapper[7454]: I0319 11:54:22.965091 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 11:54:22.967405 master-0 kubenswrapper[7454]: I0319 11:54:22.966376 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"b3b3768b-e1fc-4b91-9046-c1e43c6b8134","Type":"ContainerStarted","Data":"9a073e35be6d59a9851a5c060772899f43a09f7bb2ad8d779ede0b7fe0c488a3"} Mar 19 11:54:22.974239 master-0 kubenswrapper[7454]: I0319 11:54:22.972312 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-dc5br" event={"ID":"e559e487-18b0-4622-92fa-d06e7397b312","Type":"ContainerStarted","Data":"4034637044c10efb583efeee05cc731532761f9d295dfba3d4d37125d2414c07"} Mar 19 11:54:22.996405 master-0 kubenswrapper[7454]: I0319 11:54:22.996357 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f9d586bf8-khff4" Mar 19 11:54:22.997575 master-0 kubenswrapper[7454]: I0319 11:54:22.997522 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" podStartSLOduration=5.997487174 podStartE2EDuration="5.997487174s" podCreationTimestamp="2026-03-19 11:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 11:54:22.996457822 +0000 UTC m=+32.626923745" watchObservedRunningTime="2026-03-19 11:54:22.997487174 +0000 UTC m=+32.627953087" Mar 19 11:54:23.236610 master-0 kubenswrapper[7454]: I0319 11:54:23.027569 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-zjdkm" Mar 19 11:54:23.265650 master-0 kubenswrapper[7454]: I0319 11:54:23.264714 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-jqzxt"] Mar 19 11:54:23.266353 master-0 kubenswrapper[7454]: I0319 11:54:23.266179 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-jqzxt" Mar 19 11:54:23.279010 master-0 kubenswrapper[7454]: I0319 11:54:23.277433 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-897cc986b-vpg2l"] Mar 19 11:54:23.285371 master-0 kubenswrapper[7454]: I0319 11:54:23.283900 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4800b72f-7e54-4069-b771-87fb459eeb78-hosts-file\") pod \"node-resolver-jqzxt\" (UID: \"4800b72f-7e54-4069-b771-87fb459eeb78\") " pod="openshift-dns/node-resolver-jqzxt" Mar 19 11:54:23.285371 master-0 kubenswrapper[7454]: I0319 11:54:23.284005 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lkzv\" (UniqueName: \"kubernetes.io/projected/4800b72f-7e54-4069-b771-87fb459eeb78-kube-api-access-4lkzv\") pod \"node-resolver-jqzxt\" (UID: \"4800b72f-7e54-4069-b771-87fb459eeb78\") " pod="openshift-dns/node-resolver-jqzxt" Mar 19 11:54:23.300282 master-0 kubenswrapper[7454]: W0319 11:54:23.297701 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13503fef_09b2_4dbe_9537_a5b361e7b591.slice/crio-b80d357d31adb7df8c525b85923de87b5edd8dd7bfe7187f3b2e54a41c8d8b6f WatchSource:0}: Error finding container b80d357d31adb7df8c525b85923de87b5edd8dd7bfe7187f3b2e54a41c8d8b6f: Status 404 returned error can't find the container with id b80d357d31adb7df8c525b85923de87b5edd8dd7bfe7187f3b2e54a41c8d8b6f Mar 19 11:54:23.385300 master-0 kubenswrapper[7454]: I0319 11:54:23.385228 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4800b72f-7e54-4069-b771-87fb459eeb78-hosts-file\") pod \"node-resolver-jqzxt\" (UID: \"4800b72f-7e54-4069-b771-87fb459eeb78\") " pod="openshift-dns/node-resolver-jqzxt" Mar 19 11:54:23.385300 master-0 kubenswrapper[7454]: I0319 11:54:23.385299 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lkzv\" (UniqueName: \"kubernetes.io/projected/4800b72f-7e54-4069-b771-87fb459eeb78-kube-api-access-4lkzv\") pod \"node-resolver-jqzxt\" (UID: \"4800b72f-7e54-4069-b771-87fb459eeb78\") " pod="openshift-dns/node-resolver-jqzxt" Mar 19 11:54:23.386582 master-0 kubenswrapper[7454]: I0319 11:54:23.386489 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4800b72f-7e54-4069-b771-87fb459eeb78-hosts-file\") pod \"node-resolver-jqzxt\" (UID: \"4800b72f-7e54-4069-b771-87fb459eeb78\") " pod="openshift-dns/node-resolver-jqzxt" Mar 19 11:54:23.429821 master-0 kubenswrapper[7454]: I0319 11:54:23.410357 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lkzv\" (UniqueName: \"kubernetes.io/projected/4800b72f-7e54-4069-b771-87fb459eeb78-kube-api-access-4lkzv\") pod \"node-resolver-jqzxt\" (UID: \"4800b72f-7e54-4069-b771-87fb459eeb78\") " pod="openshift-dns/node-resolver-jqzxt" Mar 19 11:54:23.429821 master-0 kubenswrapper[7454]: I0319 11:54:23.416141 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-zjdkm"] Mar 19 11:54:23.493929 master-0 kubenswrapper[7454]: I0319 11:54:23.493752 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f9d586bf8-khff4"] Mar 19 
Mar 19 11:54:23.593271 master-0 kubenswrapper[7454]: I0319 11:54:23.591943 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-jqzxt"
Mar 19 11:54:23.593271 master-0 kubenswrapper[7454]: I0319 11:54:23.593230 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw"
Mar 19 11:54:23.593271 master-0 kubenswrapper[7454]: I0319 11:54:23.593270 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx"
Mar 19 11:54:23.593587 master-0 kubenswrapper[7454]: I0319 11:54:23.593297 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-pr7gk\" (UID: \"b0f5939c-48b1-4d6c-9712-9128a78d603b\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk"
Mar 19 11:54:23.593587 master-0 kubenswrapper[7454]: I0319 11:54:23.593322 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6"
Mar 19 11:54:23.593587 master-0 kubenswrapper[7454]: I0319 11:54:23.593354 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-6j2nj\" (UID: \"beb562de-402b-4d9f-b5ed-090b60847a95\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj"
Mar 19 11:54:23.593587 master-0 kubenswrapper[7454]: I0319 11:54:23.593381 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6"
Mar 19 11:54:23.593587 master-0 kubenswrapper[7454]: I0319 11:54:23.593397 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert\") pod \"catalog-operator-68f85b4d6c-2trz4\" (UID: \"bdcdb23d-ef1f-45e2-b9ac-7abf707637b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4"
Mar 19 11:54:23.593587 master-0 kubenswrapper[7454]: I0319 11:54:23.593413 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert\") pod \"olm-operator-5c9796789-8cldl\" (UID: \"87a3f546-e1c1-42a1-b80e-d45b6d5c0a04\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl"
Mar 19 11:54:23.593587 master-0 kubenswrapper[7454]: I0319 11:54:23.593433 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs\") pod \"network-metrics-daemon-6t6sn\" (UID: \"398bcaca-1bea-4633-a78f-717e3d015ddd\") " pod="openshift-multus/network-metrics-daemon-6t6sn"
Mar 19 11:54:23.593587 master-0 kubenswrapper[7454]: I0319 11:54:23.593451 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-92c5d\" (UID: \"7241bf11-192e-47db-9d80-2324938ed34c\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d"
Mar 19 11:54:23.593587 master-0 kubenswrapper[7454]: I0319 11:54:23.593471 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-fz8cg\" (UID: \"806a4c30-7b93-4430-86da-f9e1f4f2d206\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg"
Mar 19 11:54:23.597682 master-0 kubenswrapper[7454]: I0319 11:54:23.597465 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-fz8cg\" (UID: \"806a4c30-7b93-4430-86da-f9e1f4f2d206\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg"
Mar 19 11:54:23.599412 master-0 kubenswrapper[7454]: I0319 11:54:23.599328 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw"
Mar 19 11:54:23.599876 master-0 kubenswrapper[7454]: I0319 11:54:23.599840 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert\") pod \"olm-operator-5c9796789-8cldl\" (UID: \"87a3f546-e1c1-42a1-b80e-d45b6d5c0a04\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl"
Mar 19 11:54:23.600483 master-0 kubenswrapper[7454]: I0319 11:54:23.600450 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-92c5d\" (UID: \"7241bf11-192e-47db-9d80-2324938ed34c\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d"
Mar 19 11:54:23.600716 master-0 kubenswrapper[7454]: I0319 11:54:23.600680 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert\") pod \"catalog-operator-68f85b4d6c-2trz4\" (UID: \"bdcdb23d-ef1f-45e2-b9ac-7abf707637b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4"
Mar 19 11:54:23.601459 master-0 kubenswrapper[7454]: I0319 11:54:23.601364 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-pr7gk\" (UID: \"b0f5939c-48b1-4d6c-9712-9128a78d603b\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk"
Mar 19 11:54:23.602965 master-0 kubenswrapper[7454]: I0319 11:54:23.602769 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6"
Mar 19 11:54:23.604872 master-0 kubenswrapper[7454]: I0319 11:54:23.604124 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6"
Mar 19 11:54:23.605439 master-0 kubenswrapper[7454]: I0319 11:54:23.605389 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-6j2nj\" (UID: \"beb562de-402b-4d9f-b5ed-090b60847a95\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj"
Mar 19 11:54:23.613377 master-0 kubenswrapper[7454]: W0319 11:54:23.612743 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4800b72f_7e54_4069_b771_87fb459eeb78.slice/crio-0dabb76ec554d4e59d0494fc5bb751b125c5d1b8f29112c6e51c360eb8f3c374 WatchSource:0}: Error finding container 0dabb76ec554d4e59d0494fc5bb751b125c5d1b8f29112c6e51c360eb8f3c374: Status 404 returned error can't find the container with id 0dabb76ec554d4e59d0494fc5bb751b125c5d1b8f29112c6e51c360eb8f3c374
Mar 19 11:54:23.613377 master-0 kubenswrapper[7454]: I0319 11:54:23.613244 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx"
Mar 19 11:54:23.614047 master-0 kubenswrapper[7454]: I0319 11:54:23.613995 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs\") pod \"network-metrics-daemon-6t6sn\" (UID: \"398bcaca-1bea-4633-a78f-717e3d015ddd\") " pod="openshift-multus/network-metrics-daemon-6t6sn"
Mar 19 11:54:23.690849 master-0 kubenswrapper[7454]: I0319 11:54:23.690765 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl"
Mar 19 11:54:23.703934 master-0 kubenswrapper[7454]: I0319 11:54:23.703489 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj"
Mar 19 11:54:23.704516 master-0 kubenswrapper[7454]: I0319 11:54:23.704487 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6"
Mar 19 11:54:23.704660 master-0 kubenswrapper[7454]: I0319 11:54:23.704618 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d"
Mar 19 11:54:23.707338 master-0 kubenswrapper[7454]: I0319 11:54:23.706899 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4"
Mar 19 11:54:23.707717 master-0 kubenswrapper[7454]: I0319 11:54:23.707699 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx"
Mar 19 11:54:23.707784 master-0 kubenswrapper[7454]: I0319 11:54:23.707770 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk"
Mar 19 11:54:23.709185 master-0 kubenswrapper[7454]: I0319 11:54:23.708671 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6t6sn"
Mar 19 11:54:23.709185 master-0 kubenswrapper[7454]: I0319 11:54:23.708674 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw"
Mar 19 11:54:23.709185 master-0 kubenswrapper[7454]: I0319 11:54:23.709067 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg"
Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg" Mar 19 11:54:23.936069 master-0 kubenswrapper[7454]: I0319 11:54:23.930521 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl"] Mar 19 11:54:24.000375 master-0 kubenswrapper[7454]: W0319 11:54:23.989964 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87a3f546_e1c1_42a1_b80e_d45b6d5c0a04.slice/crio-d4e38c98fa8bce43dfe4e7719d598500071054bc18ba5987f14232cdc265f588 WatchSource:0}: Error finding container d4e38c98fa8bce43dfe4e7719d598500071054bc18ba5987f14232cdc265f588: Status 404 returned error can't find the container with id d4e38c98fa8bce43dfe4e7719d598500071054bc18ba5987f14232cdc265f588 Mar 19 11:54:24.000375 master-0 kubenswrapper[7454]: I0319 11:54:23.997842 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-dc5br" event={"ID":"e559e487-18b0-4622-92fa-d06e7397b312","Type":"ContainerStarted","Data":"635345d5220bb071d47f6e0fa9438e01e7756721ec1e5c8fa394d042f28b84e1"} Mar 19 11:54:24.034941 master-0 kubenswrapper[7454]: I0319 11:54:24.033039 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zjdkm" event={"ID":"f236a5ab-b400-46fc-94ee-1fff476d6458","Type":"ContainerStarted","Data":"ef65cfa8e397b0d9fb626793071be85235d45f48e759141f7e306d3f038d0b06"} Mar 19 11:54:24.039363 master-0 kubenswrapper[7454]: I0319 11:54:24.039314 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f9d586bf8-khff4" event={"ID":"06726494-b3aa-45f2-9b1f-5ee0ea45275e","Type":"ContainerStarted","Data":"966a9480718bf1964806fd74fc213f6acb41d1cb66534abba3f84706d8211a6a"} Mar 19 11:54:24.040385 master-0 kubenswrapper[7454]: I0319 11:54:24.040348 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-897cc986b-vpg2l" event={"ID":"13503fef-09b2-4dbe-9537-a5b361e7b591","Type":"ContainerStarted","Data":"b80d357d31adb7df8c525b85923de87b5edd8dd7bfe7187f3b2e54a41c8d8b6f"} Mar 19 11:54:24.041580 master-0 kubenswrapper[7454]: I0319 11:54:24.041558 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"b3b3768b-e1fc-4b91-9046-c1e43c6b8134","Type":"ContainerStarted","Data":"e5726d91b07d933b6cd79c95c2429a69ed26ff2d4c2a78358f9a47923b90cfea"} Mar 19 11:54:24.044467 master-0 kubenswrapper[7454]: I0319 11:54:24.044404 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-jqzxt" event={"ID":"4800b72f-7e54-4069-b771-87fb459eeb78","Type":"ContainerStarted","Data":"f4c9dba1ac0af00f1bd1f9ba5b6f8b1637becdf4f24f3c1707ef61e82ea06ba1"} Mar 19 11:54:24.044467 master-0 kubenswrapper[7454]: I0319 11:54:24.044468 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-jqzxt" event={"ID":"4800b72f-7e54-4069-b771-87fb459eeb78","Type":"ContainerStarted","Data":"0dabb76ec554d4e59d0494fc5bb751b125c5d1b8f29112c6e51c360eb8f3c374"} Mar 19 11:54:24.051313 master-0 kubenswrapper[7454]: I0319 11:54:24.050258 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" event={"ID":"919daf8d-763a-44bc-8916-86b425a27cbd","Type":"ContainerStarted","Data":"b41786c9c913f59caa3ab9f044ef31b0ba5e946f6fab91d0cf640d642dc24031"} Mar 19 
11:54:24.051313 master-0 kubenswrapper[7454]: I0319 11:54:24.050318 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 11:54:24.061868 master-0 kubenswrapper[7454]: I0319 11:54:24.061577 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-2-master-0" podStartSLOduration=3.061557981 podStartE2EDuration="3.061557981s" podCreationTimestamp="2026-03-19 11:54:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 11:54:24.058492495 +0000 UTC m=+33.688958418" watchObservedRunningTime="2026-03-19 11:54:24.061557981 +0000 UTC m=+33.692023894" Mar 19 11:54:24.061868 master-0 kubenswrapper[7454]: I0319 11:54:24.061654 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-dc5br" podStartSLOduration=3.061650874 podStartE2EDuration="3.061650874s" podCreationTimestamp="2026-03-19 11:54:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 11:54:24.022537159 +0000 UTC m=+33.653003102" watchObservedRunningTime="2026-03-19 11:54:24.061650874 +0000 UTC m=+33.692116787" Mar 19 11:54:24.074636 master-0 kubenswrapper[7454]: I0319 11:54:24.074566 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" podStartSLOduration=7.074550688 podStartE2EDuration="7.074550688s" podCreationTimestamp="2026-03-19 11:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 11:54:24.073112123 +0000 UTC m=+33.703578036" watchObservedRunningTime="2026-03-19 11:54:24.074550688 +0000 UTC m=+33.705016591" Mar 19 11:54:24.086783 master-0 kubenswrapper[7454]: I0319 11:54:24.086650 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-66b84d69b-btppx"] Mar 19 11:54:24.096918 master-0 kubenswrapper[7454]: I0319 11:54:24.096853 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-jqzxt" podStartSLOduration=1.096831156 podStartE2EDuration="1.096831156s" podCreationTimestamp="2026-03-19 11:54:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 11:54:24.096344341 +0000 UTC m=+33.726810254" watchObservedRunningTime="2026-03-19 11:54:24.096831156 +0000 UTC m=+33.727297069" Mar 19 11:54:24.220031 master-0 kubenswrapper[7454]: W0319 11:54:24.219986 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb80027fd_7b39_477a_a337_ff9bb08e7eeb.slice/crio-7f1b2390d179c87af7aa642ae5d602040372528fd159e31c142302ed10484ef5 WatchSource:0}: Error finding container 7f1b2390d179c87af7aa642ae5d602040372528fd159e31c142302ed10484ef5: Status 404 returned error can't find the container with id 7f1b2390d179c87af7aa642ae5d602040372528fd159e31c142302ed10484ef5 Mar 19 11:54:24.427356 master-0 kubenswrapper[7454]: I0319 11:54:24.426435 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4"] Mar 19 11:54:24.577408 master-0 
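[Annotation] The event={...} payload on the "SyncLoop (PLEG)" records above is plain JSON, so the container lifecycle can be replayed straight from a saved journal. A small sketch (Python; "Data" carries a container or sandbox ID):

import json
import re

# Before reflowing, several PLEG records could share one journal line,
# so findall is used rather than a single search.
PLEG = re.compile(r'"SyncLoop \(PLEG\): event for pod" pod="([^"]+)" event=(\{.*?\})')

def pleg_events(lines):
    # Yield (pod, event type, container-or-sandbox ID) per PLEG record.
    for line in lines:
        for pod, raw in PLEG.findall(line):
            ev = json.loads(raw)
            yield pod, ev["Type"], ev["Data"]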
Mar 19 11:54:24.577408 master-0 kubenswrapper[7454]: I0319 11:54:24.577298 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6"]
Mar 19 11:54:24.578707 master-0 kubenswrapper[7454]: I0319 11:54:24.578412 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-6t6sn"]
Mar 19 11:54:24.579732 master-0 kubenswrapper[7454]: I0319 11:54:24.579708 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg"]
Mar 19 11:54:24.618537 master-0 kubenswrapper[7454]: I0319 11:54:24.618480 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d"]
Mar 19 11:54:24.639655 master-0 kubenswrapper[7454]: W0319 11:54:24.637023 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7241bf11_192e_47db_9d80_2324938ed34c.slice/crio-099f1cf5ddb64458132dd6fe55ba3878ce79ff183de73a0ef9c8fa9295853b5c WatchSource:0}: Error finding container 099f1cf5ddb64458132dd6fe55ba3878ce79ff183de73a0ef9c8fa9295853b5c: Status 404 returned error can't find the container with id 099f1cf5ddb64458132dd6fe55ba3878ce79ff183de73a0ef9c8fa9295853b5c
Mar 19 11:54:24.651007 master-0 kubenswrapper[7454]: I0319 11:54:24.647725 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw"]
Mar 19 11:54:24.651007 master-0 kubenswrapper[7454]: I0319 11:54:24.647769 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-89ccd998f-pr7gk"]
Mar 19 11:54:24.651007 master-0 kubenswrapper[7454]: I0319 11:54:24.647784 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj"]
Mar 19 11:54:24.651007 master-0 kubenswrapper[7454]: W0319 11:54:24.649269 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3541cbe_3be0_40d3_89d2_b5937b6a8f47.slice/crio-71366739cc36c89d457d62d7f1f48c8768fc7ba64a4206c9c873e79bda714a8a WatchSource:0}: Error finding container 71366739cc36c89d457d62d7f1f48c8768fc7ba64a4206c9c873e79bda714a8a: Status 404 returned error can't find the container with id 71366739cc36c89d457d62d7f1f48c8768fc7ba64a4206c9c873e79bda714a8a
Mar 19 11:54:24.653983 master-0 kubenswrapper[7454]: W0319 11:54:24.653929 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbeb562de_402b_4d9f_b5ed_090b60847a95.slice/crio-4d126177d3103b9726cb0abe507c291aeac9fb33c980d607daaa2352bbce8e96 WatchSource:0}: Error finding container 4d126177d3103b9726cb0abe507c291aeac9fb33c980d607daaa2352bbce8e96: Status 404 returned error can't find the container with id 4d126177d3103b9726cb0abe507c291aeac9fb33c980d607daaa2352bbce8e96
Mar 19 11:54:24.665741 master-0 kubenswrapper[7454]: W0319 11:54:24.665709 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0f5939c_48b1_4d6c_9712_9128a78d603b.slice/crio-ca37f4d8890aea843e2dd74f0a3fbd57188dcf29ebff0755845d7039996af375 WatchSource:0}: Error finding container ca37f4d8890aea843e2dd74f0a3fbd57188dcf29ebff0755845d7039996af375: Status 404 returned error can't find the container with id ca37f4d8890aea843e2dd74f0a3fbd57188dcf29ebff0755845d7039996af375
Mar 19 11:54:25.056513 master-0 kubenswrapper[7454]: I0319 11:54:25.056120 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" event={"ID":"b0f5939c-48b1-4d6c-9712-9128a78d603b","Type":"ContainerStarted","Data":"ca37f4d8890aea843e2dd74f0a3fbd57188dcf29ebff0755845d7039996af375"}
Mar 19 11:54:25.057315 master-0 kubenswrapper[7454]: I0319 11:54:25.056784 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6t6sn" event={"ID":"398bcaca-1bea-4633-a78f-717e3d015ddd","Type":"ContainerStarted","Data":"37064f92bb167f0d220b06c690c09b197d0f10b42a8e406aad7f8d634bcea6be"}
Mar 19 11:54:25.057614 master-0 kubenswrapper[7454]: I0319 11:54:25.057587 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" event={"ID":"b80027fd-7b39-477a-a337-ff9bb08e7eeb","Type":"ContainerStarted","Data":"7f1b2390d179c87af7aa642ae5d602040372528fd159e31c142302ed10484ef5"}
Mar 19 11:54:25.058359 master-0 kubenswrapper[7454]: I0319 11:54:25.058334 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4" event={"ID":"bdcdb23d-ef1f-45e2-b9ac-7abf707637b6","Type":"ContainerStarted","Data":"fe4ada978b72bf0ece9f4bc3e07bb79fded8b5a5f73d4c83d93ade89f41d9473"}
Mar 19 11:54:25.059357 master-0 kubenswrapper[7454]: I0319 11:54:25.059315 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj" event={"ID":"beb562de-402b-4d9f-b5ed-090b60847a95","Type":"ContainerStarted","Data":"f55aaf29161ee197dbfff2ad97b4e9b04b7062af4ea1e5ed5532652557576b95"}
Mar 19 11:54:25.059357 master-0 kubenswrapper[7454]: I0319 11:54:25.059345 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj" event={"ID":"beb562de-402b-4d9f-b5ed-090b60847a95","Type":"ContainerStarted","Data":"4d126177d3103b9726cb0abe507c291aeac9fb33c980d607daaa2352bbce8e96"}
Mar 19 11:54:25.060108 master-0 kubenswrapper[7454]: I0319 11:54:25.060065 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl" event={"ID":"87a3f546-e1c1-42a1-b80e-d45b6d5c0a04","Type":"ContainerStarted","Data":"d4e38c98fa8bce43dfe4e7719d598500071054bc18ba5987f14232cdc265f588"}
Mar 19 11:54:25.060728 master-0 kubenswrapper[7454]: I0319 11:54:25.060700 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw" event={"ID":"d3541cbe-3be0-40d3-89d2-b5937b6a8f47","Type":"ContainerStarted","Data":"71366739cc36c89d457d62d7f1f48c8768fc7ba64a4206c9c873e79bda714a8a"}
Mar 19 11:54:25.061400 master-0 kubenswrapper[7454]: I0319 11:54:25.061358 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg" event={"ID":"806a4c30-7b93-4430-86da-f9e1f4f2d206","Type":"ContainerStarted","Data":"eb304defbff285339483036ba9b4adeeac46981b039317b57ed5349a2d1f0ae3"}
Mar 19 11:54:25.062060 master-0 kubenswrapper[7454]: I0319 11:54:25.062030 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" event={"ID":"19de6601-10d4-4112-a21f-0398d2b160d1","Type":"ContainerStarted","Data":"33355c55e294585ceaa17697d7356477785bdaba3177d324b39df2dc095c31c6"}
Mar 19 11:54:25.063914 master-0 kubenswrapper[7454]: I0319 11:54:25.063882 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d" event={"ID":"7241bf11-192e-47db-9d80-2324938ed34c","Type":"ContainerStarted","Data":"099f1cf5ddb64458132dd6fe55ba3878ce79ff183de73a0ef9c8fa9295853b5c"}
Mar 19 11:54:26.072451 master-0 kubenswrapper[7454]: I0319 11:54:26.072077 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw" event={"ID":"d3541cbe-3be0-40d3-89d2-b5937b6a8f47","Type":"ContainerStarted","Data":"84b7766afb41c82df0d892eb13ae81ecc0ff5bcd1fa0cc8dc4dc52da327f5626"}
Mar 19 11:54:26.072451 master-0 kubenswrapper[7454]: I0319 11:54:26.072455 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw" event={"ID":"d3541cbe-3be0-40d3-89d2-b5937b6a8f47","Type":"ContainerStarted","Data":"644c664e166fe582993989781f02b5e96f92a08deb77405802790fb0595a79d6"}
Mar 19 11:54:28.268643 master-0 kubenswrapper[7454]: I0319 11:54:28.267146 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Mar 19 11:54:28.274220 master-0 kubenswrapper[7454]: I0319 11:54:28.274125 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 19 11:54:28.281048 master-0 kubenswrapper[7454]: I0319 11:54:28.281016 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Mar 19 11:54:28.282879 master-0 kubenswrapper[7454]: I0319 11:54:28.282837 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Mar 19 11:54:28.385857 master-0 kubenswrapper[7454]: I0319 11:54:28.370595 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7fd0b13-489f-42b7-a52a-6194fdc9f665-var-lock\") pod \"installer-1-master-0\" (UID: \"f7fd0b13-489f-42b7-a52a-6194fdc9f665\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 19 11:54:28.385857 master-0 kubenswrapper[7454]: I0319 11:54:28.370688 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f7fd0b13-489f-42b7-a52a-6194fdc9f665-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"f7fd0b13-489f-42b7-a52a-6194fdc9f665\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 19 11:54:28.385857 master-0 kubenswrapper[7454]: I0319 11:54:28.370709 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7fd0b13-489f-42b7-a52a-6194fdc9f665-kube-api-access\") pod \"installer-1-master-0\" (UID: \"f7fd0b13-489f-42b7-a52a-6194fdc9f665\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 19 11:54:28.472407 master-0 kubenswrapper[7454]: I0319 11:54:28.472335 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f7fd0b13-489f-42b7-a52a-6194fdc9f665-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"f7fd0b13-489f-42b7-a52a-6194fdc9f665\") " pod="openshift-kube-controller-manager/installer-1-master-0"
\"kubernetes.io/host-path/f7fd0b13-489f-42b7-a52a-6194fdc9f665-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"f7fd0b13-489f-42b7-a52a-6194fdc9f665\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 19 11:54:28.472407 master-0 kubenswrapper[7454]: I0319 11:54:28.472393 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7fd0b13-489f-42b7-a52a-6194fdc9f665-kube-api-access\") pod \"installer-1-master-0\" (UID: \"f7fd0b13-489f-42b7-a52a-6194fdc9f665\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 19 11:54:28.472675 master-0 kubenswrapper[7454]: I0319 11:54:28.472456 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7fd0b13-489f-42b7-a52a-6194fdc9f665-var-lock\") pod \"installer-1-master-0\" (UID: \"f7fd0b13-489f-42b7-a52a-6194fdc9f665\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 19 11:54:28.472866 master-0 kubenswrapper[7454]: I0319 11:54:28.472825 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f7fd0b13-489f-42b7-a52a-6194fdc9f665-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"f7fd0b13-489f-42b7-a52a-6194fdc9f665\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 19 11:54:28.472937 master-0 kubenswrapper[7454]: I0319 11:54:28.472833 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7fd0b13-489f-42b7-a52a-6194fdc9f665-var-lock\") pod \"installer-1-master-0\" (UID: \"f7fd0b13-489f-42b7-a52a-6194fdc9f665\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 19 11:54:28.500871 master-0 kubenswrapper[7454]: I0319 11:54:28.500808 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7fd0b13-489f-42b7-a52a-6194fdc9f665-kube-api-access\") pod \"installer-1-master-0\" (UID: \"f7fd0b13-489f-42b7-a52a-6194fdc9f665\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 19 11:54:28.544835 master-0 kubenswrapper[7454]: I0319 11:54:28.544682 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 11:54:28.610426 master-0 kubenswrapper[7454]: I0319 11:54:28.610383 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 11:54:28.642438 master-0 kubenswrapper[7454]: I0319 11:54:28.642393 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 19 11:54:28.652634 master-0 kubenswrapper[7454]: I0319 11:54:28.652599 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-ms2wn"] Mar 19 11:54:28.653689 master-0 kubenswrapper[7454]: I0319 11:54:28.653666 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ms2wn" Mar 19 11:54:28.666175 master-0 kubenswrapper[7454]: I0319 11:54:28.665462 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 19 11:54:28.777009 master-0 kubenswrapper[7454]: I0319 11:54:28.776931 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2b87f8c3-1898-46dd-bcac-e8f22f31e812-mcd-auth-proxy-config\") pod \"machine-config-daemon-ms2wn\" (UID: \"2b87f8c3-1898-46dd-bcac-e8f22f31e812\") " pod="openshift-machine-config-operator/machine-config-daemon-ms2wn" Mar 19 11:54:28.777009 master-0 kubenswrapper[7454]: I0319 11:54:28.776990 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2b87f8c3-1898-46dd-bcac-e8f22f31e812-proxy-tls\") pod \"machine-config-daemon-ms2wn\" (UID: \"2b87f8c3-1898-46dd-bcac-e8f22f31e812\") " pod="openshift-machine-config-operator/machine-config-daemon-ms2wn" Mar 19 11:54:28.777009 master-0 kubenswrapper[7454]: I0319 11:54:28.777010 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbddm\" (UniqueName: \"kubernetes.io/projected/2b87f8c3-1898-46dd-bcac-e8f22f31e812-kube-api-access-kbddm\") pod \"machine-config-daemon-ms2wn\" (UID: \"2b87f8c3-1898-46dd-bcac-e8f22f31e812\") " pod="openshift-machine-config-operator/machine-config-daemon-ms2wn" Mar 19 11:54:28.777340 master-0 kubenswrapper[7454]: I0319 11:54:28.777083 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2b87f8c3-1898-46dd-bcac-e8f22f31e812-rootfs\") pod \"machine-config-daemon-ms2wn\" (UID: \"2b87f8c3-1898-46dd-bcac-e8f22f31e812\") " pod="openshift-machine-config-operator/machine-config-daemon-ms2wn" Mar 19 11:54:28.878656 master-0 kubenswrapper[7454]: I0319 11:54:28.878544 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2b87f8c3-1898-46dd-bcac-e8f22f31e812-mcd-auth-proxy-config\") pod \"machine-config-daemon-ms2wn\" (UID: \"2b87f8c3-1898-46dd-bcac-e8f22f31e812\") " pod="openshift-machine-config-operator/machine-config-daemon-ms2wn" Mar 19 11:54:28.878656 master-0 kubenswrapper[7454]: I0319 11:54:28.878627 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2b87f8c3-1898-46dd-bcac-e8f22f31e812-proxy-tls\") pod \"machine-config-daemon-ms2wn\" (UID: \"2b87f8c3-1898-46dd-bcac-e8f22f31e812\") " pod="openshift-machine-config-operator/machine-config-daemon-ms2wn" Mar 19 11:54:28.878656 master-0 kubenswrapper[7454]: I0319 11:54:28.878656 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbddm\" (UniqueName: \"kubernetes.io/projected/2b87f8c3-1898-46dd-bcac-e8f22f31e812-kube-api-access-kbddm\") pod \"machine-config-daemon-ms2wn\" (UID: \"2b87f8c3-1898-46dd-bcac-e8f22f31e812\") " pod="openshift-machine-config-operator/machine-config-daemon-ms2wn" Mar 19 11:54:28.878944 master-0 kubenswrapper[7454]: I0319 11:54:28.878708 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/2b87f8c3-1898-46dd-bcac-e8f22f31e812-rootfs\") pod \"machine-config-daemon-ms2wn\" (UID: \"2b87f8c3-1898-46dd-bcac-e8f22f31e812\") " pod="openshift-machine-config-operator/machine-config-daemon-ms2wn" Mar 19 11:54:28.878944 master-0 kubenswrapper[7454]: I0319 11:54:28.878897 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2b87f8c3-1898-46dd-bcac-e8f22f31e812-rootfs\") pod \"machine-config-daemon-ms2wn\" (UID: \"2b87f8c3-1898-46dd-bcac-e8f22f31e812\") " pod="openshift-machine-config-operator/machine-config-daemon-ms2wn" Mar 19 11:54:28.879815 master-0 kubenswrapper[7454]: I0319 11:54:28.879772 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2b87f8c3-1898-46dd-bcac-e8f22f31e812-mcd-auth-proxy-config\") pod \"machine-config-daemon-ms2wn\" (UID: \"2b87f8c3-1898-46dd-bcac-e8f22f31e812\") " pod="openshift-machine-config-operator/machine-config-daemon-ms2wn" Mar 19 11:54:28.888886 master-0 kubenswrapper[7454]: I0319 11:54:28.888839 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2b87f8c3-1898-46dd-bcac-e8f22f31e812-proxy-tls\") pod \"machine-config-daemon-ms2wn\" (UID: \"2b87f8c3-1898-46dd-bcac-e8f22f31e812\") " pod="openshift-machine-config-operator/machine-config-daemon-ms2wn" Mar 19 11:54:28.897877 master-0 kubenswrapper[7454]: I0319 11:54:28.897824 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbddm\" (UniqueName: \"kubernetes.io/projected/2b87f8c3-1898-46dd-bcac-e8f22f31e812-kube-api-access-kbddm\") pod \"machine-config-daemon-ms2wn\" (UID: \"2b87f8c3-1898-46dd-bcac-e8f22f31e812\") " pod="openshift-machine-config-operator/machine-config-daemon-ms2wn" Mar 19 11:54:28.982159 master-0 kubenswrapper[7454]: I0319 11:54:28.982069 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ms2wn" Mar 19 11:54:29.777151 master-0 kubenswrapper[7454]: I0319 11:54:29.777100 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 11:54:30.732895 master-0 kubenswrapper[7454]: I0319 11:54:30.731448 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-54b4cfc58b-pjsj5"] Mar 19 11:54:30.764823 master-0 kubenswrapper[7454]: I0319 11:54:30.761734 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f9d586bf8-khff4"] Mar 19 11:54:30.786823 master-0 kubenswrapper[7454]: I0319 11:54:30.785851 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 19 11:54:30.786823 master-0 kubenswrapper[7454]: I0319 11:54:30.786191 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-2-master-0" podUID="b3b3768b-e1fc-4b91-9046-c1e43c6b8134" containerName="installer" containerID="cri-o://e5726d91b07d933b6cd79c95c2429a69ed26ff2d4c2a78358f9a47923b90cfea" gracePeriod=30 Mar 19 11:54:32.762439 master-0 kubenswrapper[7454]: I0319 11:54:32.762381 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 19 11:54:32.763089 master-0 kubenswrapper[7454]: I0319 11:54:32.763056 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 19 11:54:32.773454 master-0 kubenswrapper[7454]: I0319 11:54:32.773402 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 19 11:54:32.902403 master-0 kubenswrapper[7454]: I0319 11:54:32.901248 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b49f09f-2efa-4657-9f5a-fbddd42bee0d-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"4b49f09f-2efa-4657-9f5a-fbddd42bee0d\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 19 11:54:32.902403 master-0 kubenswrapper[7454]: I0319 11:54:32.901380 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4b49f09f-2efa-4657-9f5a-fbddd42bee0d-var-lock\") pod \"installer-3-master-0\" (UID: \"4b49f09f-2efa-4657-9f5a-fbddd42bee0d\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 19 11:54:32.902403 master-0 kubenswrapper[7454]: I0319 11:54:32.901411 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b49f09f-2efa-4657-9f5a-fbddd42bee0d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"4b49f09f-2efa-4657-9f5a-fbddd42bee0d\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 19 11:54:33.002758 master-0 kubenswrapper[7454]: I0319 11:54:33.002600 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4b49f09f-2efa-4657-9f5a-fbddd42bee0d-var-lock\") pod \"installer-3-master-0\" (UID: \"4b49f09f-2efa-4657-9f5a-fbddd42bee0d\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 19 11:54:33.002758 master-0 kubenswrapper[7454]: I0319 
11:54:33.002653 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b49f09f-2efa-4657-9f5a-fbddd42bee0d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"4b49f09f-2efa-4657-9f5a-fbddd42bee0d\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 19 11:54:33.002758 master-0 kubenswrapper[7454]: I0319 11:54:33.002693 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b49f09f-2efa-4657-9f5a-fbddd42bee0d-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"4b49f09f-2efa-4657-9f5a-fbddd42bee0d\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 19 11:54:33.003137 master-0 kubenswrapper[7454]: I0319 11:54:33.002943 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4b49f09f-2efa-4657-9f5a-fbddd42bee0d-var-lock\") pod \"installer-3-master-0\" (UID: \"4b49f09f-2efa-4657-9f5a-fbddd42bee0d\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 19 11:54:33.003496 master-0 kubenswrapper[7454]: I0319 11:54:33.003453 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b49f09f-2efa-4657-9f5a-fbddd42bee0d-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"4b49f09f-2efa-4657-9f5a-fbddd42bee0d\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 19 11:54:33.027662 master-0 kubenswrapper[7454]: I0319 11:54:33.027502 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b49f09f-2efa-4657-9f5a-fbddd42bee0d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"4b49f09f-2efa-4657-9f5a-fbddd42bee0d\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 19 11:54:33.101556 master-0 kubenswrapper[7454]: I0319 11:54:33.101461 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 19 11:54:33.109449 master-0 kubenswrapper[7454]: I0319 11:54:33.109415 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_b3b3768b-e1fc-4b91-9046-c1e43c6b8134/installer/0.log" Mar 19 11:54:33.109648 master-0 kubenswrapper[7454]: I0319 11:54:33.109469 7454 generic.go:334] "Generic (PLEG): container finished" podID="b3b3768b-e1fc-4b91-9046-c1e43c6b8134" containerID="e5726d91b07d933b6cd79c95c2429a69ed26ff2d4c2a78358f9a47923b90cfea" exitCode=1 Mar 19 11:54:33.109648 master-0 kubenswrapper[7454]: I0319 11:54:33.109504 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"b3b3768b-e1fc-4b91-9046-c1e43c6b8134","Type":"ContainerDied","Data":"e5726d91b07d933b6cd79c95c2429a69ed26ff2d4c2a78358f9a47923b90cfea"} Mar 19 11:54:33.374668 master-0 kubenswrapper[7454]: I0319 11:54:33.374560 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 19 11:54:33.375351 master-0 kubenswrapper[7454]: I0319 11:54:33.375331 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 19 11:54:33.377685 master-0 kubenswrapper[7454]: I0319 11:54:33.377302 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 19 11:54:33.386286 master-0 kubenswrapper[7454]: I0319 11:54:33.383527 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 19 11:54:33.509258 master-0 kubenswrapper[7454]: I0319 11:54:33.509207 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/632bdf3b-0ba0-4874-a2ec-8396683c35c5-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"632bdf3b-0ba0-4874-a2ec-8396683c35c5\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 19 11:54:33.509258 master-0 kubenswrapper[7454]: I0319 11:54:33.509259 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/632bdf3b-0ba0-4874-a2ec-8396683c35c5-kube-api-access\") pod \"installer-1-master-0\" (UID: \"632bdf3b-0ba0-4874-a2ec-8396683c35c5\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 19 11:54:33.509258 master-0 kubenswrapper[7454]: I0319 11:54:33.509296 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/632bdf3b-0ba0-4874-a2ec-8396683c35c5-var-lock\") pod \"installer-1-master-0\" (UID: \"632bdf3b-0ba0-4874-a2ec-8396683c35c5\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 19 11:54:33.612558 master-0 kubenswrapper[7454]: I0319 11:54:33.612440 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/632bdf3b-0ba0-4874-a2ec-8396683c35c5-var-lock\") pod \"installer-1-master-0\" (UID: \"632bdf3b-0ba0-4874-a2ec-8396683c35c5\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 19 11:54:33.612821 master-0 kubenswrapper[7454]: I0319 11:54:33.612506 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/632bdf3b-0ba0-4874-a2ec-8396683c35c5-var-lock\") pod \"installer-1-master-0\" (UID: \"632bdf3b-0ba0-4874-a2ec-8396683c35c5\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 19 11:54:33.612821 master-0 kubenswrapper[7454]: I0319 11:54:33.612655 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/632bdf3b-0ba0-4874-a2ec-8396683c35c5-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"632bdf3b-0ba0-4874-a2ec-8396683c35c5\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 19 11:54:33.612821 master-0 kubenswrapper[7454]: I0319 11:54:33.612711 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/632bdf3b-0ba0-4874-a2ec-8396683c35c5-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"632bdf3b-0ba0-4874-a2ec-8396683c35c5\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 19 11:54:33.612921 master-0 kubenswrapper[7454]: I0319 11:54:33.612821 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/632bdf3b-0ba0-4874-a2ec-8396683c35c5-kube-api-access\") pod \"installer-1-master-0\" (UID: 
\"632bdf3b-0ba0-4874-a2ec-8396683c35c5\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 19 11:54:33.636685 master-0 kubenswrapper[7454]: I0319 11:54:33.636556 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/632bdf3b-0ba0-4874-a2ec-8396683c35c5-kube-api-access\") pod \"installer-1-master-0\" (UID: \"632bdf3b-0ba0-4874-a2ec-8396683c35c5\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 19 11:54:33.711410 master-0 kubenswrapper[7454]: I0319 11:54:33.711344 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 19 11:54:35.936550 master-0 kubenswrapper[7454]: I0319 11:54:35.936462 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_b3b3768b-e1fc-4b91-9046-c1e43c6b8134/installer/0.log" Mar 19 11:54:35.936550 master-0 kubenswrapper[7454]: I0319 11:54:35.936562 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 19 11:54:35.941035 master-0 kubenswrapper[7454]: I0319 11:54:35.940835 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b3b3768b-e1fc-4b91-9046-c1e43c6b8134-var-lock\") pod \"b3b3768b-e1fc-4b91-9046-c1e43c6b8134\" (UID: \"b3b3768b-e1fc-4b91-9046-c1e43c6b8134\") " Mar 19 11:54:35.941035 master-0 kubenswrapper[7454]: I0319 11:54:35.940935 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3b3768b-e1fc-4b91-9046-c1e43c6b8134-kube-api-access\") pod \"b3b3768b-e1fc-4b91-9046-c1e43c6b8134\" (UID: \"b3b3768b-e1fc-4b91-9046-c1e43c6b8134\") " Mar 19 11:54:35.941035 master-0 kubenswrapper[7454]: I0319 11:54:35.940960 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3b3768b-e1fc-4b91-9046-c1e43c6b8134-kubelet-dir\") pod \"b3b3768b-e1fc-4b91-9046-c1e43c6b8134\" (UID: \"b3b3768b-e1fc-4b91-9046-c1e43c6b8134\") " Mar 19 11:54:35.941035 master-0 kubenswrapper[7454]: I0319 11:54:35.940974 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3b3768b-e1fc-4b91-9046-c1e43c6b8134-var-lock" (OuterVolumeSpecName: "var-lock") pod "b3b3768b-e1fc-4b91-9046-c1e43c6b8134" (UID: "b3b3768b-e1fc-4b91-9046-c1e43c6b8134"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:54:35.941261 master-0 kubenswrapper[7454]: I0319 11:54:35.941192 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3b3768b-e1fc-4b91-9046-c1e43c6b8134-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b3b3768b-e1fc-4b91-9046-c1e43c6b8134" (UID: "b3b3768b-e1fc-4b91-9046-c1e43c6b8134"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:54:35.941304 master-0 kubenswrapper[7454]: I0319 11:54:35.941285 7454 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b3b3768b-e1fc-4b91-9046-c1e43c6b8134-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:35.941304 master-0 kubenswrapper[7454]: I0319 11:54:35.941301 7454 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3b3768b-e1fc-4b91-9046-c1e43c6b8134-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:35.944862 master-0 kubenswrapper[7454]: I0319 11:54:35.943446 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3b3768b-e1fc-4b91-9046-c1e43c6b8134-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b3b3768b-e1fc-4b91-9046-c1e43c6b8134" (UID: "b3b3768b-e1fc-4b91-9046-c1e43c6b8134"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 11:54:36.041849 master-0 kubenswrapper[7454]: I0319 11:54:36.041783 7454 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3b3768b-e1fc-4b91-9046-c1e43c6b8134-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:36.129300 master-0 kubenswrapper[7454]: I0319 11:54:36.129174 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_b3b3768b-e1fc-4b91-9046-c1e43c6b8134/installer/0.log" Mar 19 11:54:36.129300 master-0 kubenswrapper[7454]: I0319 11:54:36.129282 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"b3b3768b-e1fc-4b91-9046-c1e43c6b8134","Type":"ContainerDied","Data":"9a073e35be6d59a9851a5c060772899f43a09f7bb2ad8d779ede0b7fe0c488a3"} Mar 19 11:54:36.129534 master-0 kubenswrapper[7454]: I0319 11:54:36.129340 7454 scope.go:117] "RemoveContainer" containerID="e5726d91b07d933b6cd79c95c2429a69ed26ff2d4c2a78358f9a47923b90cfea" Mar 19 11:54:36.130243 master-0 kubenswrapper[7454]: I0319 11:54:36.129702 7454 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 19 11:54:36.160567 master-0 kubenswrapper[7454]: I0319 11:54:36.160095 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 19 11:54:36.161910 master-0 kubenswrapper[7454]: I0319 11:54:36.161878 7454 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 19 11:54:36.494498 master-0 kubenswrapper[7454]: I0319 11:54:36.494414 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v"] Mar 19 11:54:36.496170 master-0 kubenswrapper[7454]: I0319 11:54:36.494868 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v" podUID="85912908-c447-4868-871b-82c5eadbfdbe" containerName="cluster-version-operator" containerID="cri-o://63735c56ea8ab8ee9516357a36132132901d193c0ab862bebb886c53a74fc8ca" gracePeriod=130 Mar 19 11:54:36.644429 master-0 kubenswrapper[7454]: I0319 11:54:36.644378 7454 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3b3768b-e1fc-4b91-9046-c1e43c6b8134" path="/var/lib/kubelet/pods/b3b3768b-e1fc-4b91-9046-c1e43c6b8134/volumes" Mar 19 11:54:40.360431 master-0 kubenswrapper[7454]: I0319 11:54:40.360399 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v" Mar 19 11:54:40.408727 master-0 kubenswrapper[7454]: I0319 11:54:40.407959 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85912908-c447-4868-871b-82c5eadbfdbe-service-ca\") pod \"85912908-c447-4868-871b-82c5eadbfdbe\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " Mar 19 11:54:40.408727 master-0 kubenswrapper[7454]: I0319 11:54:40.407993 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/85912908-c447-4868-871b-82c5eadbfdbe-etc-cvo-updatepayloads\") pod \"85912908-c447-4868-871b-82c5eadbfdbe\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " Mar 19 11:54:40.408727 master-0 kubenswrapper[7454]: I0319 11:54:40.408029 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85912908-c447-4868-871b-82c5eadbfdbe-kube-api-access\") pod \"85912908-c447-4868-871b-82c5eadbfdbe\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " Mar 19 11:54:40.408727 master-0 kubenswrapper[7454]: I0319 11:54:40.408056 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert\") pod \"85912908-c447-4868-871b-82c5eadbfdbe\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " Mar 19 11:54:40.408727 master-0 kubenswrapper[7454]: I0319 11:54:40.408074 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/85912908-c447-4868-871b-82c5eadbfdbe-etc-ssl-certs\") pod \"85912908-c447-4868-871b-82c5eadbfdbe\" (UID: \"85912908-c447-4868-871b-82c5eadbfdbe\") " Mar 19 11:54:40.408727 master-0 kubenswrapper[7454]: I0319 11:54:40.408222 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/85912908-c447-4868-871b-82c5eadbfdbe-etc-ssl-certs" (OuterVolumeSpecName: "etc-ssl-certs") pod "85912908-c447-4868-871b-82c5eadbfdbe" (UID: "85912908-c447-4868-871b-82c5eadbfdbe"). InnerVolumeSpecName "etc-ssl-certs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:54:40.408727 master-0 kubenswrapper[7454]: I0319 11:54:40.408622 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85912908-c447-4868-871b-82c5eadbfdbe-etc-cvo-updatepayloads" (OuterVolumeSpecName: "etc-cvo-updatepayloads") pod "85912908-c447-4868-871b-82c5eadbfdbe" (UID: "85912908-c447-4868-871b-82c5eadbfdbe"). InnerVolumeSpecName "etc-cvo-updatepayloads". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:54:40.409606 master-0 kubenswrapper[7454]: I0319 11:54:40.409469 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85912908-c447-4868-871b-82c5eadbfdbe-service-ca" (OuterVolumeSpecName: "service-ca") pod "85912908-c447-4868-871b-82c5eadbfdbe" (UID: "85912908-c447-4868-871b-82c5eadbfdbe"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 11:54:40.420865 master-0 kubenswrapper[7454]: I0319 11:54:40.417029 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "85912908-c447-4868-871b-82c5eadbfdbe" (UID: "85912908-c447-4868-871b-82c5eadbfdbe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 11:54:40.422138 master-0 kubenswrapper[7454]: I0319 11:54:40.422104 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85912908-c447-4868-871b-82c5eadbfdbe-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "85912908-c447-4868-871b-82c5eadbfdbe" (UID: "85912908-c447-4868-871b-82c5eadbfdbe"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 11:54:40.509898 master-0 kubenswrapper[7454]: I0319 11:54:40.509398 7454 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85912908-c447-4868-871b-82c5eadbfdbe-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:40.509898 master-0 kubenswrapper[7454]: I0319 11:54:40.509435 7454 reconciler_common.go:293] "Volume detached for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/85912908-c447-4868-871b-82c5eadbfdbe-etc-ssl-certs\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:40.509898 master-0 kubenswrapper[7454]: I0319 11:54:40.509507 7454 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85912908-c447-4868-871b-82c5eadbfdbe-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:40.509898 master-0 kubenswrapper[7454]: I0319 11:54:40.509521 7454 reconciler_common.go:293] "Volume detached for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/85912908-c447-4868-871b-82c5eadbfdbe-etc-cvo-updatepayloads\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:40.509898 master-0 kubenswrapper[7454]: I0319 11:54:40.509537 7454 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85912908-c447-4868-871b-82c5eadbfdbe-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:40.698676 master-0 kubenswrapper[7454]: I0319 11:54:40.698631 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 19 11:54:40.698676 master-0 kubenswrapper[7454]: I0319 11:54:40.698666 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 19 11:54:40.741923 master-0 kubenswrapper[7454]: I0319 11:54:40.741036 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_26b64f77-181a-4129-a28a-3bfdf7eac7ae/installer/0.log" Mar 19 11:54:40.741923 master-0 kubenswrapper[7454]: I0319 11:54:40.741132 7454 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 19 11:54:40.814490 master-0 kubenswrapper[7454]: I0319 11:54:40.814454 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 19 11:54:40.843446 master-0 kubenswrapper[7454]: I0319 11:54:40.839272 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/26b64f77-181a-4129-a28a-3bfdf7eac7ae-var-lock\") pod \"26b64f77-181a-4129-a28a-3bfdf7eac7ae\" (UID: \"26b64f77-181a-4129-a28a-3bfdf7eac7ae\") " Mar 19 11:54:40.843446 master-0 kubenswrapper[7454]: I0319 11:54:40.839317 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/26b64f77-181a-4129-a28a-3bfdf7eac7ae-kubelet-dir\") pod \"26b64f77-181a-4129-a28a-3bfdf7eac7ae\" (UID: \"26b64f77-181a-4129-a28a-3bfdf7eac7ae\") " Mar 19 11:54:40.843446 master-0 kubenswrapper[7454]: I0319 11:54:40.839418 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26b64f77-181a-4129-a28a-3bfdf7eac7ae-kube-api-access\") pod \"26b64f77-181a-4129-a28a-3bfdf7eac7ae\" (UID: \"26b64f77-181a-4129-a28a-3bfdf7eac7ae\") " Mar 19 11:54:40.843446 master-0 kubenswrapper[7454]: I0319 11:54:40.839543 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26b64f77-181a-4129-a28a-3bfdf7eac7ae-var-lock" (OuterVolumeSpecName: "var-lock") pod "26b64f77-181a-4129-a28a-3bfdf7eac7ae" (UID: "26b64f77-181a-4129-a28a-3bfdf7eac7ae"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:54:40.843446 master-0 kubenswrapper[7454]: I0319 11:54:40.839591 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26b64f77-181a-4129-a28a-3bfdf7eac7ae-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "26b64f77-181a-4129-a28a-3bfdf7eac7ae" (UID: "26b64f77-181a-4129-a28a-3bfdf7eac7ae"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:54:40.843446 master-0 kubenswrapper[7454]: I0319 11:54:40.840442 7454 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/26b64f77-181a-4129-a28a-3bfdf7eac7ae-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:40.843446 master-0 kubenswrapper[7454]: I0319 11:54:40.840462 7454 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/26b64f77-181a-4129-a28a-3bfdf7eac7ae-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:40.844728 master-0 kubenswrapper[7454]: I0319 11:54:40.844521 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26b64f77-181a-4129-a28a-3bfdf7eac7ae-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "26b64f77-181a-4129-a28a-3bfdf7eac7ae" (UID: "26b64f77-181a-4129-a28a-3bfdf7eac7ae"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 11:54:40.950583 master-0 kubenswrapper[7454]: I0319 11:54:40.942968 7454 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26b64f77-181a-4129-a28a-3bfdf7eac7ae-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:41.176441 master-0 kubenswrapper[7454]: I0319 11:54:41.175902 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" event={"ID":"19de6601-10d4-4112-a21f-0398d2b160d1","Type":"ContainerStarted","Data":"612732ed0120924fb77ef10b06bafbb001e3d8734f333029971f71583a5972b4"} Mar 19 11:54:41.206832 master-0 kubenswrapper[7454]: I0319 11:54:41.204640 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d" event={"ID":"7241bf11-192e-47db-9d80-2324938ed34c","Type":"ContainerStarted","Data":"46936b398aa765dec3ac6c2063128b52385f73ea170e41e1a3745f861f634b9b"} Mar 19 11:54:41.262587 master-0 kubenswrapper[7454]: I0319 11:54:41.260026 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ms2wn" event={"ID":"2b87f8c3-1898-46dd-bcac-e8f22f31e812","Type":"ContainerStarted","Data":"9bdf362754165dba74e84552f2d3413bf45e9079de9f1df770bb75640715bfe0"} Mar 19 11:54:41.262587 master-0 kubenswrapper[7454]: I0319 11:54:41.260071 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ms2wn" event={"ID":"2b87f8c3-1898-46dd-bcac-e8f22f31e812","Type":"ContainerStarted","Data":"e1150baa290a3898ec8c1b3b3de0ed9b6af20668ee360ed4984852f84f153bb0"} Mar 19 11:54:41.279224 master-0 kubenswrapper[7454]: I0319 11:54:41.279168 7454 generic.go:334] "Generic (PLEG): container finished" podID="85912908-c447-4868-871b-82c5eadbfdbe" containerID="63735c56ea8ab8ee9516357a36132132901d193c0ab862bebb886c53a74fc8ca" exitCode=0 Mar 19 11:54:41.279433 master-0 kubenswrapper[7454]: I0319 11:54:41.279285 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v" event={"ID":"85912908-c447-4868-871b-82c5eadbfdbe","Type":"ContainerDied","Data":"63735c56ea8ab8ee9516357a36132132901d193c0ab862bebb886c53a74fc8ca"} Mar 19 11:54:41.279433 master-0 kubenswrapper[7454]: I0319 11:54:41.279324 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v" event={"ID":"85912908-c447-4868-871b-82c5eadbfdbe","Type":"ContainerDied","Data":"f0dd3ad0c31c50755d9a1e00840e55c34c92c7b9022f8e6526d575378ba152f4"} Mar 19 11:54:41.279433 master-0 kubenswrapper[7454]: I0319 11:54:41.279351 7454 scope.go:117] "RemoveContainer" containerID="63735c56ea8ab8ee9516357a36132132901d193c0ab862bebb886c53a74fc8ca" Mar 19 11:54:41.279558 master-0 kubenswrapper[7454]: I0319 11:54:41.279513 7454 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v" Mar 19 11:54:41.398047 master-0 kubenswrapper[7454]: I0319 11:54:41.396371 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v"] Mar 19 11:54:41.428822 master-0 kubenswrapper[7454]: I0319 11:54:41.421413 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4" event={"ID":"bdcdb23d-ef1f-45e2-b9ac-7abf707637b6","Type":"ContainerStarted","Data":"458c2d71cdd0676c8e27627b02fb9b9b4d631fcd53c824b025a175382113d2ee"} Mar 19 11:54:41.428822 master-0 kubenswrapper[7454]: I0319 11:54:41.422660 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4" Mar 19 11:54:41.451822 master-0 kubenswrapper[7454]: I0319 11:54:41.443665 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4" Mar 19 11:54:41.451822 master-0 kubenswrapper[7454]: I0319 11:54:41.448255 7454 scope.go:117] "RemoveContainer" containerID="63735c56ea8ab8ee9516357a36132132901d193c0ab862bebb886c53a74fc8ca" Mar 19 11:54:41.487347 master-0 kubenswrapper[7454]: I0319 11:54:41.479384 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"4b49f09f-2efa-4657-9f5a-fbddd42bee0d","Type":"ContainerStarted","Data":"df06fa6144150d2fd73d9f262bf2cf21b2895ff0830d1e0b601df841982f89d6"} Mar 19 11:54:41.487347 master-0 kubenswrapper[7454]: E0319 11:54:41.479558 7454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63735c56ea8ab8ee9516357a36132132901d193c0ab862bebb886c53a74fc8ca\": container with ID starting with 63735c56ea8ab8ee9516357a36132132901d193c0ab862bebb886c53a74fc8ca not found: ID does not exist" containerID="63735c56ea8ab8ee9516357a36132132901d193c0ab862bebb886c53a74fc8ca" Mar 19 11:54:41.487347 master-0 kubenswrapper[7454]: I0319 11:54:41.479585 7454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63735c56ea8ab8ee9516357a36132132901d193c0ab862bebb886c53a74fc8ca"} err="failed to get container status \"63735c56ea8ab8ee9516357a36132132901d193c0ab862bebb886c53a74fc8ca\": rpc error: code = NotFound desc = could not find container \"63735c56ea8ab8ee9516357a36132132901d193c0ab862bebb886c53a74fc8ca\": container with ID starting with 63735c56ea8ab8ee9516357a36132132901d193c0ab862bebb886c53a74fc8ca not found: ID does not exist" Mar 19 11:54:41.487347 master-0 kubenswrapper[7454]: I0319 11:54:41.481242 7454 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-version/cluster-version-operator-56d8475767-gjj5v"] Mar 19 11:54:41.509166 master-0 kubenswrapper[7454]: I0319 11:54:41.503624 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zjdkm" event={"ID":"f236a5ab-b400-46fc-94ee-1fff476d6458","Type":"ContainerStarted","Data":"386377d61dc0b4e7b2d3371edfd94c42b1f00094bcdd8af3274128d3a8d23207"} Mar 19 11:54:41.538939 master-0 kubenswrapper[7454]: I0319 11:54:41.538411 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg" 
event={"ID":"806a4c30-7b93-4430-86da-f9e1f4f2d206","Type":"ContainerStarted","Data":"32946350fbb40f17e1bf84fa3bef60ee89587d671dd1dca0cb3ac265a9a51704"} Mar 19 11:54:41.603994 master-0 kubenswrapper[7454]: I0319 11:54:41.600906 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"632bdf3b-0ba0-4874-a2ec-8396683c35c5","Type":"ContainerStarted","Data":"1c8244ac71cff666f8f31eda66e91f3ec8411550f1be8d391239277f0b7cf02b"} Mar 19 11:54:41.612429 master-0 kubenswrapper[7454]: I0319 11:54:41.612357 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl" event={"ID":"87a3f546-e1c1-42a1-b80e-d45b6d5c0a04","Type":"ContainerStarted","Data":"4c2cf229f9eff36fca31050b4fee39c4ac2bf5047870446bc6071d33aa3da396"} Mar 19 11:54:41.620151 master-0 kubenswrapper[7454]: I0319 11:54:41.614276 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl" Mar 19 11:54:41.639089 master-0 kubenswrapper[7454]: I0319 11:54:41.638214 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_26b64f77-181a-4129-a28a-3bfdf7eac7ae/installer/0.log" Mar 19 11:54:41.639089 master-0 kubenswrapper[7454]: I0319 11:54:41.638256 7454 generic.go:334] "Generic (PLEG): container finished" podID="26b64f77-181a-4129-a28a-3bfdf7eac7ae" containerID="e273c8440ea0683d116b4eeafcafc60dfd6a87717b0c4b4c9829c8626e0e905a" exitCode=1 Mar 19 11:54:41.639089 master-0 kubenswrapper[7454]: I0319 11:54:41.638342 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"26b64f77-181a-4129-a28a-3bfdf7eac7ae","Type":"ContainerDied","Data":"e273c8440ea0683d116b4eeafcafc60dfd6a87717b0c4b4c9829c8626e0e905a"} Mar 19 11:54:41.639089 master-0 kubenswrapper[7454]: I0319 11:54:41.638371 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"26b64f77-181a-4129-a28a-3bfdf7eac7ae","Type":"ContainerDied","Data":"e0872c5a2d5561d0225dfd392b85facd7a9b9a7df9e38158520ca6c2a2f1b1d9"} Mar 19 11:54:41.639089 master-0 kubenswrapper[7454]: I0319 11:54:41.638389 7454 scope.go:117] "RemoveContainer" containerID="e273c8440ea0683d116b4eeafcafc60dfd6a87717b0c4b4c9829c8626e0e905a" Mar 19 11:54:41.639089 master-0 kubenswrapper[7454]: I0319 11:54:41.638494 7454 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 19 11:54:41.679547 master-0 kubenswrapper[7454]: I0319 11:54:41.674362 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl" Mar 19 11:54:41.679547 master-0 kubenswrapper[7454]: I0319 11:54:41.676016 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7d58488df-czxxt"] Mar 19 11:54:41.679547 master-0 kubenswrapper[7454]: E0319 11:54:41.676283 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3b3768b-e1fc-4b91-9046-c1e43c6b8134" containerName="installer" Mar 19 11:54:41.679547 master-0 kubenswrapper[7454]: I0319 11:54:41.676297 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3b3768b-e1fc-4b91-9046-c1e43c6b8134" containerName="installer" Mar 19 11:54:41.679547 master-0 kubenswrapper[7454]: E0319 11:54:41.676312 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85912908-c447-4868-871b-82c5eadbfdbe" containerName="cluster-version-operator" Mar 19 11:54:41.679547 master-0 kubenswrapper[7454]: I0319 11:54:41.676320 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="85912908-c447-4868-871b-82c5eadbfdbe" containerName="cluster-version-operator" Mar 19 11:54:41.679547 master-0 kubenswrapper[7454]: E0319 11:54:41.676331 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26b64f77-181a-4129-a28a-3bfdf7eac7ae" containerName="installer" Mar 19 11:54:41.679547 master-0 kubenswrapper[7454]: I0319 11:54:41.676338 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="26b64f77-181a-4129-a28a-3bfdf7eac7ae" containerName="installer" Mar 19 11:54:41.679547 master-0 kubenswrapper[7454]: I0319 11:54:41.676453 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="85912908-c447-4868-871b-82c5eadbfdbe" containerName="cluster-version-operator" Mar 19 11:54:41.679547 master-0 kubenswrapper[7454]: I0319 11:54:41.676472 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="26b64f77-181a-4129-a28a-3bfdf7eac7ae" containerName="installer" Mar 19 11:54:41.679547 master-0 kubenswrapper[7454]: I0319 11:54:41.676480 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3b3768b-e1fc-4b91-9046-c1e43c6b8134" containerName="installer" Mar 19 11:54:41.679547 master-0 kubenswrapper[7454]: I0319 11:54:41.676927 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7d58488df-czxxt" Mar 19 11:54:41.683722 master-0 kubenswrapper[7454]: I0319 11:54:41.683250 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"f7fd0b13-489f-42b7-a52a-6194fdc9f665","Type":"ContainerStarted","Data":"d8308efe72c7c6664abd233543bc59b7b4013bcb4b0b94da4d2f18534b26e9f7"} Mar 19 11:54:41.712089 master-0 kubenswrapper[7454]: I0319 11:54:41.712018 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 19 11:54:41.712526 master-0 kubenswrapper[7454]: I0319 11:54:41.712500 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 19 11:54:41.712731 master-0 kubenswrapper[7454]: I0319 11:54:41.712707 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 19 11:54:41.726866 master-0 kubenswrapper[7454]: I0319 11:54:41.723902 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54b4cfc58b-pjsj5" event={"ID":"5586a731-0e66-4ed1-a49e-a7f2dfb4a805","Type":"ContainerStarted","Data":"3be9fa9281fbba557659d5db283ef5b1d8dce759a907385695b00c2055e9fe48"} Mar 19 11:54:41.726866 master-0 kubenswrapper[7454]: I0319 11:54:41.724200 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-54b4cfc58b-pjsj5" podUID="5586a731-0e66-4ed1-a49e-a7f2dfb4a805" containerName="controller-manager" containerID="cri-o://3be9fa9281fbba557659d5db283ef5b1d8dce759a907385695b00c2055e9fe48" gracePeriod=30 Mar 19 11:54:41.726866 master-0 kubenswrapper[7454]: I0319 11:54:41.724539 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-54b4cfc58b-pjsj5" Mar 19 11:54:41.744431 master-0 kubenswrapper[7454]: I0319 11:54:41.744297 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-54b4cfc58b-pjsj5" Mar 19 11:54:41.790829 master-0 kubenswrapper[7454]: I0319 11:54:41.786507 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" event={"ID":"b80027fd-7b39-477a-a337-ff9bb08e7eeb","Type":"ContainerStarted","Data":"85ef4c835912214d79ee0e2491e95c939671fab04307a1604919b04165567448"} Mar 19 11:54:41.794818 master-0 kubenswrapper[7454]: I0319 11:54:41.791947 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-1-master-0" podStartSLOduration=13.791919517 podStartE2EDuration="13.791919517s" podCreationTimestamp="2026-03-19 11:54:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 11:54:41.791154933 +0000 UTC m=+51.421620866" watchObservedRunningTime="2026-03-19 11:54:41.791919517 +0000 UTC m=+51.422385430" Mar 19 11:54:41.862313 master-0 kubenswrapper[7454]: I0319 11:54:41.861582 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3661faaa-2c9d-4fcd-a41f-71aa71a2e464-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-czxxt\" (UID: 
\"3661faaa-2c9d-4fcd-a41f-71aa71a2e464\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-czxxt" Mar 19 11:54:41.862313 master-0 kubenswrapper[7454]: I0319 11:54:41.861640 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3661faaa-2c9d-4fcd-a41f-71aa71a2e464-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-czxxt\" (UID: \"3661faaa-2c9d-4fcd-a41f-71aa71a2e464\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-czxxt" Mar 19 11:54:41.862313 master-0 kubenswrapper[7454]: I0319 11:54:41.861669 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3661faaa-2c9d-4fcd-a41f-71aa71a2e464-serving-cert\") pod \"cluster-version-operator-7d58488df-czxxt\" (UID: \"3661faaa-2c9d-4fcd-a41f-71aa71a2e464\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-czxxt" Mar 19 11:54:41.862313 master-0 kubenswrapper[7454]: I0319 11:54:41.861688 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3661faaa-2c9d-4fcd-a41f-71aa71a2e464-service-ca\") pod \"cluster-version-operator-7d58488df-czxxt\" (UID: \"3661faaa-2c9d-4fcd-a41f-71aa71a2e464\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-czxxt" Mar 19 11:54:41.862313 master-0 kubenswrapper[7454]: I0319 11:54:41.861707 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3661faaa-2c9d-4fcd-a41f-71aa71a2e464-kube-api-access\") pod \"cluster-version-operator-7d58488df-czxxt\" (UID: \"3661faaa-2c9d-4fcd-a41f-71aa71a2e464\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-czxxt" Mar 19 11:54:41.862663 master-0 kubenswrapper[7454]: I0319 11:54:41.862327 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" event={"ID":"b0f5939c-48b1-4d6c-9712-9128a78d603b","Type":"ContainerStarted","Data":"68ef893f247d25c990ee12be4a1311e23963264bd6e324255f2b26ed404f9f6a"} Mar 19 11:54:41.864241 master-0 kubenswrapper[7454]: I0319 11:54:41.863858 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 11:54:41.865761 master-0 kubenswrapper[7454]: I0319 11:54:41.865681 7454 patch_prober.go:28] interesting pod/marketplace-operator-89ccd998f-pr7gk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.21:8080/healthz\": dial tcp 10.128.0.21:8080: connect: connection refused" start-of-body= Mar 19 11:54:41.865862 master-0 kubenswrapper[7454]: I0319 11:54:41.865783 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" podUID="b0f5939c-48b1-4d6c-9712-9128a78d603b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.21:8080/healthz\": dial tcp 10.128.0.21:8080: connect: connection refused" Mar 19 11:54:41.868422 master-0 kubenswrapper[7454]: I0319 11:54:41.868379 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f9d586bf8-khff4" 
event={"ID":"06726494-b3aa-45f2-9b1f-5ee0ea45275e","Type":"ContainerStarted","Data":"3a1076e64cec1680225e189b2c6e90d9b594aa9efeca7c872f23b13cb6329486"} Mar 19 11:54:41.868744 master-0 kubenswrapper[7454]: I0319 11:54:41.868716 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7f9d586bf8-khff4" podUID="06726494-b3aa-45f2-9b1f-5ee0ea45275e" containerName="route-controller-manager" containerID="cri-o://3a1076e64cec1680225e189b2c6e90d9b594aa9efeca7c872f23b13cb6329486" gracePeriod=30 Mar 19 11:54:41.872258 master-0 kubenswrapper[7454]: I0319 11:54:41.871734 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7f9d586bf8-khff4" Mar 19 11:54:41.873342 master-0 kubenswrapper[7454]: I0319 11:54:41.873290 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-54b4cfc58b-pjsj5" podStartSLOduration=7.540250054 podStartE2EDuration="23.873278045s" podCreationTimestamp="2026-03-19 11:54:18 +0000 UTC" firstStartedPulling="2026-03-19 11:54:21.296357245 +0000 UTC m=+30.926823148" lastFinishedPulling="2026-03-19 11:54:37.629385226 +0000 UTC m=+47.259851139" observedRunningTime="2026-03-19 11:54:41.871602412 +0000 UTC m=+51.502068325" watchObservedRunningTime="2026-03-19 11:54:41.873278045 +0000 UTC m=+51.503743958" Mar 19 11:54:41.886535 master-0 kubenswrapper[7454]: I0319 11:54:41.884925 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7f9d586bf8-khff4" Mar 19 11:54:41.888674 master-0 kubenswrapper[7454]: I0319 11:54:41.888470 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj" event={"ID":"beb562de-402b-4d9f-b5ed-090b60847a95","Type":"ContainerStarted","Data":"73cf6a91cb51a6754ab2b247831cbefb5d0487d35910ee9fb82b702cb7bb210d"} Mar 19 11:54:41.889210 master-0 kubenswrapper[7454]: I0319 11:54:41.889174 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj" Mar 19 11:54:42.111661 master-0 kubenswrapper[7454]: I0319 11:54:41.904746 7454 generic.go:334] "Generic (PLEG): container finished" podID="979ba8cc-5a7b-4188-bf9e-c22d810888e9" containerID="05182f5833dcf5495367d45fa2481464014605bf23633fb02f16821c8ed341bf" exitCode=0 Mar 19 11:54:42.111661 master-0 kubenswrapper[7454]: I0319 11:54:41.904857 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" event={"ID":"979ba8cc-5a7b-4188-bf9e-c22d810888e9","Type":"ContainerDied","Data":"05182f5833dcf5495367d45fa2481464014605bf23633fb02f16821c8ed341bf"} Mar 19 11:54:42.111661 master-0 kubenswrapper[7454]: I0319 11:54:41.962623 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3661faaa-2c9d-4fcd-a41f-71aa71a2e464-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-czxxt\" (UID: \"3661faaa-2c9d-4fcd-a41f-71aa71a2e464\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-czxxt" Mar 19 11:54:42.111661 master-0 kubenswrapper[7454]: I0319 11:54:41.962689 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/3661faaa-2c9d-4fcd-a41f-71aa71a2e464-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-czxxt\" (UID: \"3661faaa-2c9d-4fcd-a41f-71aa71a2e464\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-czxxt" Mar 19 11:54:42.111661 master-0 kubenswrapper[7454]: I0319 11:54:41.962727 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3661faaa-2c9d-4fcd-a41f-71aa71a2e464-serving-cert\") pod \"cluster-version-operator-7d58488df-czxxt\" (UID: \"3661faaa-2c9d-4fcd-a41f-71aa71a2e464\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-czxxt" Mar 19 11:54:42.111661 master-0 kubenswrapper[7454]: I0319 11:54:41.962754 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3661faaa-2c9d-4fcd-a41f-71aa71a2e464-service-ca\") pod \"cluster-version-operator-7d58488df-czxxt\" (UID: \"3661faaa-2c9d-4fcd-a41f-71aa71a2e464\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-czxxt" Mar 19 11:54:42.111661 master-0 kubenswrapper[7454]: I0319 11:54:41.962782 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3661faaa-2c9d-4fcd-a41f-71aa71a2e464-kube-api-access\") pod \"cluster-version-operator-7d58488df-czxxt\" (UID: \"3661faaa-2c9d-4fcd-a41f-71aa71a2e464\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-czxxt" Mar 19 11:54:42.111661 master-0 kubenswrapper[7454]: I0319 11:54:41.964417 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3661faaa-2c9d-4fcd-a41f-71aa71a2e464-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-czxxt\" (UID: \"3661faaa-2c9d-4fcd-a41f-71aa71a2e464\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-czxxt" Mar 19 11:54:42.111661 master-0 kubenswrapper[7454]: I0319 11:54:41.964838 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3661faaa-2c9d-4fcd-a41f-71aa71a2e464-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-czxxt\" (UID: \"3661faaa-2c9d-4fcd-a41f-71aa71a2e464\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-czxxt" Mar 19 11:54:42.111661 master-0 kubenswrapper[7454]: I0319 11:54:41.967575 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3661faaa-2c9d-4fcd-a41f-71aa71a2e464-service-ca\") pod \"cluster-version-operator-7d58488df-czxxt\" (UID: \"3661faaa-2c9d-4fcd-a41f-71aa71a2e464\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-czxxt" Mar 19 11:54:42.111661 master-0 kubenswrapper[7454]: I0319 11:54:41.972432 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3661faaa-2c9d-4fcd-a41f-71aa71a2e464-serving-cert\") pod \"cluster-version-operator-7d58488df-czxxt\" (UID: \"3661faaa-2c9d-4fcd-a41f-71aa71a2e464\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-czxxt" Mar 19 11:54:42.111661 master-0 kubenswrapper[7454]: I0319 11:54:42.006766 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/3661faaa-2c9d-4fcd-a41f-71aa71a2e464-kube-api-access\") pod \"cluster-version-operator-7d58488df-czxxt\" (UID: \"3661faaa-2c9d-4fcd-a41f-71aa71a2e464\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-czxxt" Mar 19 11:54:42.191478 master-0 kubenswrapper[7454]: E0319 11:54:42.188448 7454 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod26b64f77_181a_4129_a28a_3bfdf7eac7ae.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod26b64f77_181a_4129_a28a_3bfdf7eac7ae.slice/crio-e0872c5a2d5561d0225dfd392b85facd7a9b9a7df9e38158520ca6c2a2f1b1d9\": RecentStats: unable to find data in memory cache]" Mar 19 11:54:42.258913 master-0 kubenswrapper[7454]: I0319 11:54:42.258307 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7f9d586bf8-khff4" podStartSLOduration=12.319553692 podStartE2EDuration="24.258288963s" podCreationTimestamp="2026-03-19 11:54:18 +0000 UTC" firstStartedPulling="2026-03-19 11:54:23.519586267 +0000 UTC m=+33.150052180" lastFinishedPulling="2026-03-19 11:54:35.458321538 +0000 UTC m=+45.088787451" observedRunningTime="2026-03-19 11:54:42.257278992 +0000 UTC m=+51.887744905" watchObservedRunningTime="2026-03-19 11:54:42.258288963 +0000 UTC m=+51.888754876" Mar 19 11:54:42.274872 master-0 kubenswrapper[7454]: I0319 11:54:42.267658 7454 scope.go:117] "RemoveContainer" containerID="e273c8440ea0683d116b4eeafcafc60dfd6a87717b0c4b4c9829c8626e0e905a" Mar 19 11:54:42.274872 master-0 kubenswrapper[7454]: E0319 11:54:42.268041 7454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e273c8440ea0683d116b4eeafcafc60dfd6a87717b0c4b4c9829c8626e0e905a\": container with ID starting with e273c8440ea0683d116b4eeafcafc60dfd6a87717b0c4b4c9829c8626e0e905a not found: ID does not exist" containerID="e273c8440ea0683d116b4eeafcafc60dfd6a87717b0c4b4c9829c8626e0e905a" Mar 19 11:54:42.274872 master-0 kubenswrapper[7454]: I0319 11:54:42.268084 7454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e273c8440ea0683d116b4eeafcafc60dfd6a87717b0c4b4c9829c8626e0e905a"} err="failed to get container status \"e273c8440ea0683d116b4eeafcafc60dfd6a87717b0c4b4c9829c8626e0e905a\": rpc error: code = NotFound desc = could not find container \"e273c8440ea0683d116b4eeafcafc60dfd6a87717b0c4b4c9829c8626e0e905a\": container with ID starting with e273c8440ea0683d116b4eeafcafc60dfd6a87717b0c4b4c9829c8626e0e905a not found: ID does not exist" Mar 19 11:54:42.283918 master-0 kubenswrapper[7454]: I0319 11:54:42.282968 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 19 11:54:42.291867 master-0 kubenswrapper[7454]: I0319 11:54:42.291751 7454 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 19 11:54:42.319847 master-0 kubenswrapper[7454]: I0319 11:54:42.313353 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7d58488df-czxxt" Mar 19 11:54:42.441102 master-0 kubenswrapper[7454]: I0319 11:54:42.441038 7454 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-54b4cfc58b-pjsj5" Mar 19 11:54:42.477094 master-0 kubenswrapper[7454]: I0319 11:54:42.477006 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kj4wv"] Mar 19 11:54:42.478017 master-0 kubenswrapper[7454]: E0319 11:54:42.477972 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5586a731-0e66-4ed1-a49e-a7f2dfb4a805" containerName="controller-manager" Mar 19 11:54:42.478083 master-0 kubenswrapper[7454]: I0319 11:54:42.478026 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="5586a731-0e66-4ed1-a49e-a7f2dfb4a805" containerName="controller-manager" Mar 19 11:54:42.479198 master-0 kubenswrapper[7454]: I0319 11:54:42.478203 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="5586a731-0e66-4ed1-a49e-a7f2dfb4a805" containerName="controller-manager" Mar 19 11:54:42.479198 master-0 kubenswrapper[7454]: I0319 11:54:42.478841 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kj4wv" Mar 19 11:54:42.493661 master-0 kubenswrapper[7454]: I0319 11:54:42.493423 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kj4wv"] Mar 19 11:54:42.542943 master-0 kubenswrapper[7454]: I0319 11:54:42.541394 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f9d586bf8-khff4" Mar 19 11:54:42.601065 master-0 kubenswrapper[7454]: I0319 11:54:42.601019 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5586a731-0e66-4ed1-a49e-a7f2dfb4a805-serving-cert\") pod \"5586a731-0e66-4ed1-a49e-a7f2dfb4a805\" (UID: \"5586a731-0e66-4ed1-a49e-a7f2dfb4a805\") " Mar 19 11:54:42.601065 master-0 kubenswrapper[7454]: I0319 11:54:42.601062 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qfdd\" (UniqueName: \"kubernetes.io/projected/5586a731-0e66-4ed1-a49e-a7f2dfb4a805-kube-api-access-2qfdd\") pod \"5586a731-0e66-4ed1-a49e-a7f2dfb4a805\" (UID: \"5586a731-0e66-4ed1-a49e-a7f2dfb4a805\") " Mar 19 11:54:42.601292 master-0 kubenswrapper[7454]: I0319 11:54:42.601090 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5586a731-0e66-4ed1-a49e-a7f2dfb4a805-config\") pod \"5586a731-0e66-4ed1-a49e-a7f2dfb4a805\" (UID: \"5586a731-0e66-4ed1-a49e-a7f2dfb4a805\") " Mar 19 11:54:42.601292 master-0 kubenswrapper[7454]: I0319 11:54:42.601120 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5586a731-0e66-4ed1-a49e-a7f2dfb4a805-proxy-ca-bundles\") pod \"5586a731-0e66-4ed1-a49e-a7f2dfb4a805\" (UID: \"5586a731-0e66-4ed1-a49e-a7f2dfb4a805\") " Mar 19 11:54:42.601292 master-0 kubenswrapper[7454]: I0319 11:54:42.601138 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5586a731-0e66-4ed1-a49e-a7f2dfb4a805-client-ca\") pod \"5586a731-0e66-4ed1-a49e-a7f2dfb4a805\" (UID: \"5586a731-0e66-4ed1-a49e-a7f2dfb4a805\") " Mar 19 11:54:42.601370 master-0 kubenswrapper[7454]: I0319 11:54:42.601308 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/903d114c-199f-46f9-b39b-afa52df71ea9-utilities\") pod \"redhat-operators-kj4wv\" (UID: \"903d114c-199f-46f9-b39b-afa52df71ea9\") " pod="openshift-marketplace/redhat-operators-kj4wv" Mar 19 11:54:42.601370 master-0 kubenswrapper[7454]: I0319 11:54:42.601329 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/903d114c-199f-46f9-b39b-afa52df71ea9-catalog-content\") pod \"redhat-operators-kj4wv\" (UID: \"903d114c-199f-46f9-b39b-afa52df71ea9\") " pod="openshift-marketplace/redhat-operators-kj4wv" Mar 19 11:54:42.601370 master-0 kubenswrapper[7454]: I0319 11:54:42.601358 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk5rc\" (UniqueName: \"kubernetes.io/projected/903d114c-199f-46f9-b39b-afa52df71ea9-kube-api-access-zk5rc\") pod \"redhat-operators-kj4wv\" (UID: \"903d114c-199f-46f9-b39b-afa52df71ea9\") " pod="openshift-marketplace/redhat-operators-kj4wv" Mar 19 11:54:42.604121 master-0 kubenswrapper[7454]: I0319 11:54:42.602189 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5586a731-0e66-4ed1-a49e-a7f2dfb4a805-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5586a731-0e66-4ed1-a49e-a7f2dfb4a805" (UID: "5586a731-0e66-4ed1-a49e-a7f2dfb4a805"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 11:54:42.604121 master-0 kubenswrapper[7454]: I0319 11:54:42.602271 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5586a731-0e66-4ed1-a49e-a7f2dfb4a805-client-ca" (OuterVolumeSpecName: "client-ca") pod "5586a731-0e66-4ed1-a49e-a7f2dfb4a805" (UID: "5586a731-0e66-4ed1-a49e-a7f2dfb4a805"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 11:54:42.605283 master-0 kubenswrapper[7454]: I0319 11:54:42.605251 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5586a731-0e66-4ed1-a49e-a7f2dfb4a805-config" (OuterVolumeSpecName: "config") pod "5586a731-0e66-4ed1-a49e-a7f2dfb4a805" (UID: "5586a731-0e66-4ed1-a49e-a7f2dfb4a805"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 11:54:42.605479 master-0 kubenswrapper[7454]: I0319 11:54:42.605448 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5586a731-0e66-4ed1-a49e-a7f2dfb4a805-kube-api-access-2qfdd" (OuterVolumeSpecName: "kube-api-access-2qfdd") pod "5586a731-0e66-4ed1-a49e-a7f2dfb4a805" (UID: "5586a731-0e66-4ed1-a49e-a7f2dfb4a805"). InnerVolumeSpecName "kube-api-access-2qfdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 11:54:42.608114 master-0 kubenswrapper[7454]: I0319 11:54:42.608077 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5586a731-0e66-4ed1-a49e-a7f2dfb4a805-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5586a731-0e66-4ed1-a49e-a7f2dfb4a805" (UID: "5586a731-0e66-4ed1-a49e-a7f2dfb4a805"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 11:54:42.649959 master-0 kubenswrapper[7454]: I0319 11:54:42.649859 7454 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26b64f77-181a-4129-a28a-3bfdf7eac7ae" path="/var/lib/kubelet/pods/26b64f77-181a-4129-a28a-3bfdf7eac7ae/volumes" Mar 19 11:54:42.651503 master-0 kubenswrapper[7454]: I0319 11:54:42.651481 7454 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85912908-c447-4868-871b-82c5eadbfdbe" path="/var/lib/kubelet/pods/85912908-c447-4868-871b-82c5eadbfdbe/volumes" Mar 19 11:54:42.702048 master-0 kubenswrapper[7454]: I0319 11:54:42.702009 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06726494-b3aa-45f2-9b1f-5ee0ea45275e-client-ca\") pod \"06726494-b3aa-45f2-9b1f-5ee0ea45275e\" (UID: \"06726494-b3aa-45f2-9b1f-5ee0ea45275e\") " Mar 19 11:54:42.702206 master-0 kubenswrapper[7454]: I0319 11:54:42.702067 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06726494-b3aa-45f2-9b1f-5ee0ea45275e-serving-cert\") pod \"06726494-b3aa-45f2-9b1f-5ee0ea45275e\" (UID: \"06726494-b3aa-45f2-9b1f-5ee0ea45275e\") " Mar 19 11:54:42.702706 master-0 kubenswrapper[7454]: I0319 11:54:42.702673 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06726494-b3aa-45f2-9b1f-5ee0ea45275e-client-ca" (OuterVolumeSpecName: "client-ca") pod "06726494-b3aa-45f2-9b1f-5ee0ea45275e" (UID: "06726494-b3aa-45f2-9b1f-5ee0ea45275e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 11:54:42.702765 master-0 kubenswrapper[7454]: I0319 11:54:42.702735 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8jrq\" (UniqueName: \"kubernetes.io/projected/06726494-b3aa-45f2-9b1f-5ee0ea45275e-kube-api-access-j8jrq\") pod \"06726494-b3aa-45f2-9b1f-5ee0ea45275e\" (UID: \"06726494-b3aa-45f2-9b1f-5ee0ea45275e\") " Mar 19 11:54:42.702896 master-0 kubenswrapper[7454]: I0319 11:54:42.702882 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06726494-b3aa-45f2-9b1f-5ee0ea45275e-config\") pod \"06726494-b3aa-45f2-9b1f-5ee0ea45275e\" (UID: \"06726494-b3aa-45f2-9b1f-5ee0ea45275e\") " Mar 19 11:54:42.703070 master-0 kubenswrapper[7454]: I0319 11:54:42.703042 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/903d114c-199f-46f9-b39b-afa52df71ea9-utilities\") pod \"redhat-operators-kj4wv\" (UID: \"903d114c-199f-46f9-b39b-afa52df71ea9\") " pod="openshift-marketplace/redhat-operators-kj4wv" Mar 19 11:54:42.703107 master-0 kubenswrapper[7454]: I0319 11:54:42.703082 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/903d114c-199f-46f9-b39b-afa52df71ea9-catalog-content\") pod \"redhat-operators-kj4wv\" (UID: \"903d114c-199f-46f9-b39b-afa52df71ea9\") " pod="openshift-marketplace/redhat-operators-kj4wv" Mar 19 11:54:42.703304 master-0 kubenswrapper[7454]: I0319 11:54:42.703261 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk5rc\" (UniqueName: \"kubernetes.io/projected/903d114c-199f-46f9-b39b-afa52df71ea9-kube-api-access-zk5rc\") pod 
\"redhat-operators-kj4wv\" (UID: \"903d114c-199f-46f9-b39b-afa52df71ea9\") " pod="openshift-marketplace/redhat-operators-kj4wv" Mar 19 11:54:42.703449 master-0 kubenswrapper[7454]: I0319 11:54:42.703420 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06726494-b3aa-45f2-9b1f-5ee0ea45275e-config" (OuterVolumeSpecName: "config") pod "06726494-b3aa-45f2-9b1f-5ee0ea45275e" (UID: "06726494-b3aa-45f2-9b1f-5ee0ea45275e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 11:54:42.703522 master-0 kubenswrapper[7454]: I0319 11:54:42.703503 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/903d114c-199f-46f9-b39b-afa52df71ea9-catalog-content\") pod \"redhat-operators-kj4wv\" (UID: \"903d114c-199f-46f9-b39b-afa52df71ea9\") " pod="openshift-marketplace/redhat-operators-kj4wv" Mar 19 11:54:42.703561 master-0 kubenswrapper[7454]: I0319 11:54:42.703530 7454 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5586a731-0e66-4ed1-a49e-a7f2dfb4a805-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:42.703561 master-0 kubenswrapper[7454]: I0319 11:54:42.703555 7454 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qfdd\" (UniqueName: \"kubernetes.io/projected/5586a731-0e66-4ed1-a49e-a7f2dfb4a805-kube-api-access-2qfdd\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:42.703615 master-0 kubenswrapper[7454]: I0319 11:54:42.703572 7454 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5586a731-0e66-4ed1-a49e-a7f2dfb4a805-config\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:42.703615 master-0 kubenswrapper[7454]: I0319 11:54:42.703585 7454 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06726494-b3aa-45f2-9b1f-5ee0ea45275e-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:42.703615 master-0 kubenswrapper[7454]: I0319 11:54:42.703597 7454 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5586a731-0e66-4ed1-a49e-a7f2dfb4a805-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:42.703615 master-0 kubenswrapper[7454]: I0319 11:54:42.703590 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/903d114c-199f-46f9-b39b-afa52df71ea9-utilities\") pod \"redhat-operators-kj4wv\" (UID: \"903d114c-199f-46f9-b39b-afa52df71ea9\") " pod="openshift-marketplace/redhat-operators-kj4wv" Mar 19 11:54:42.703724 master-0 kubenswrapper[7454]: I0319 11:54:42.703608 7454 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5586a731-0e66-4ed1-a49e-a7f2dfb4a805-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:42.706455 master-0 kubenswrapper[7454]: I0319 11:54:42.706416 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06726494-b3aa-45f2-9b1f-5ee0ea45275e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "06726494-b3aa-45f2-9b1f-5ee0ea45275e" (UID: "06726494-b3aa-45f2-9b1f-5ee0ea45275e"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 11:54:42.723238 master-0 kubenswrapper[7454]: I0319 11:54:42.723189 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk5rc\" (UniqueName: \"kubernetes.io/projected/903d114c-199f-46f9-b39b-afa52df71ea9-kube-api-access-zk5rc\") pod \"redhat-operators-kj4wv\" (UID: \"903d114c-199f-46f9-b39b-afa52df71ea9\") " pod="openshift-marketplace/redhat-operators-kj4wv" Mar 19 11:54:42.723599 master-0 kubenswrapper[7454]: I0319 11:54:42.723561 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06726494-b3aa-45f2-9b1f-5ee0ea45275e-kube-api-access-j8jrq" (OuterVolumeSpecName: "kube-api-access-j8jrq") pod "06726494-b3aa-45f2-9b1f-5ee0ea45275e" (UID: "06726494-b3aa-45f2-9b1f-5ee0ea45275e"). InnerVolumeSpecName "kube-api-access-j8jrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 11:54:42.804588 master-0 kubenswrapper[7454]: I0319 11:54:42.804535 7454 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8jrq\" (UniqueName: \"kubernetes.io/projected/06726494-b3aa-45f2-9b1f-5ee0ea45275e-kube-api-access-j8jrq\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:42.804588 master-0 kubenswrapper[7454]: I0319 11:54:42.804578 7454 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06726494-b3aa-45f2-9b1f-5ee0ea45275e-config\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:42.804588 master-0 kubenswrapper[7454]: I0319 11:54:42.804592 7454 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06726494-b3aa-45f2-9b1f-5ee0ea45275e-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 11:54:42.810264 master-0 kubenswrapper[7454]: I0319 11:54:42.810219 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p225c"] Mar 19 11:54:42.810412 master-0 kubenswrapper[7454]: E0319 11:54:42.810396 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06726494-b3aa-45f2-9b1f-5ee0ea45275e" containerName="route-controller-manager" Mar 19 11:54:42.810412 master-0 kubenswrapper[7454]: I0319 11:54:42.810407 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="06726494-b3aa-45f2-9b1f-5ee0ea45275e" containerName="route-controller-manager" Mar 19 11:54:42.810505 master-0 kubenswrapper[7454]: I0319 11:54:42.810486 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="06726494-b3aa-45f2-9b1f-5ee0ea45275e" containerName="route-controller-manager" Mar 19 11:54:42.811225 master-0 kubenswrapper[7454]: I0319 11:54:42.811182 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p225c" Mar 19 11:54:42.824505 master-0 kubenswrapper[7454]: I0319 11:54:42.824474 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p225c"] Mar 19 11:54:42.861952 master-0 kubenswrapper[7454]: I0319 11:54:42.861886 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kj4wv" Mar 19 11:54:42.907476 master-0 kubenswrapper[7454]: I0319 11:54:42.906411 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77497070-ffa8-45e5-935d-5281828d6962-utilities\") pod \"certified-operators-p225c\" (UID: \"77497070-ffa8-45e5-935d-5281828d6962\") " pod="openshift-marketplace/certified-operators-p225c" Mar 19 11:54:42.907476 master-0 kubenswrapper[7454]: I0319 11:54:42.906464 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77497070-ffa8-45e5-935d-5281828d6962-catalog-content\") pod \"certified-operators-p225c\" (UID: \"77497070-ffa8-45e5-935d-5281828d6962\") " pod="openshift-marketplace/certified-operators-p225c" Mar 19 11:54:42.907476 master-0 kubenswrapper[7454]: I0319 11:54:42.906515 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5rxc\" (UniqueName: \"kubernetes.io/projected/77497070-ffa8-45e5-935d-5281828d6962-kube-api-access-d5rxc\") pod \"certified-operators-p225c\" (UID: \"77497070-ffa8-45e5-935d-5281828d6962\") " pod="openshift-marketplace/certified-operators-p225c" Mar 19 11:54:42.916930 master-0 kubenswrapper[7454]: I0319 11:54:42.916884 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6t6sn" event={"ID":"398bcaca-1bea-4633-a78f-717e3d015ddd","Type":"ContainerStarted","Data":"cc582feedcdb4133c9c713c7d5a7d6d8071d6a4fbcdc944d5e285bb2c4a787e1"} Mar 19 11:54:42.916930 master-0 kubenswrapper[7454]: I0319 11:54:42.916931 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6t6sn" event={"ID":"398bcaca-1bea-4633-a78f-717e3d015ddd","Type":"ContainerStarted","Data":"f66d34c1f1962bb53da451423f6913be40372ebddaaf465e0413a999d0701802"} Mar 19 11:54:42.919501 master-0 kubenswrapper[7454]: I0319 11:54:42.919474 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" event={"ID":"979ba8cc-5a7b-4188-bf9e-c22d810888e9","Type":"ContainerStarted","Data":"537910930d42e84b8a324d52b43f4b7d640c508990109a837d2bdbc7c1c7b1af"} Mar 19 11:54:42.922812 master-0 kubenswrapper[7454]: I0319 11:54:42.922764 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ms2wn" event={"ID":"2b87f8c3-1898-46dd-bcac-e8f22f31e812","Type":"ContainerStarted","Data":"aa3992871b9affcd2f36678675ff1b30e2e547d3eecef5a52cdc674369bd2049"} Mar 19 11:54:42.924126 master-0 kubenswrapper[7454]: I0319 11:54:42.924089 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7d58488df-czxxt" event={"ID":"3661faaa-2c9d-4fcd-a41f-71aa71a2e464","Type":"ContainerStarted","Data":"7da17bb33a379d340800eed592826e40470a02d8d02148c7a5b5f440bb389db1"} Mar 19 11:54:42.924126 master-0 kubenswrapper[7454]: I0319 11:54:42.924120 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7d58488df-czxxt" event={"ID":"3661faaa-2c9d-4fcd-a41f-71aa71a2e464","Type":"ContainerStarted","Data":"43f216a933b60c080a956b5e1d05307037754c5207355d8b96b4c2f7227054f0"} Mar 19 11:54:42.925953 master-0 kubenswrapper[7454]: I0319 11:54:42.925924 7454 generic.go:334] "Generic (PLEG): container 
finished" podID="13503fef-09b2-4dbe-9537-a5b361e7b591" containerID="02cfc804dc670307f6eb25b2923269cce58d61ddff2ed2ded28891fde86083af" exitCode=0 Mar 19 11:54:42.926060 master-0 kubenswrapper[7454]: I0319 11:54:42.925983 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-897cc986b-vpg2l" event={"ID":"13503fef-09b2-4dbe-9537-a5b361e7b591","Type":"ContainerDied","Data":"02cfc804dc670307f6eb25b2923269cce58d61ddff2ed2ded28891fde86083af"} Mar 19 11:54:42.930570 master-0 kubenswrapper[7454]: I0319 11:54:42.930517 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zjdkm" event={"ID":"f236a5ab-b400-46fc-94ee-1fff476d6458","Type":"ContainerStarted","Data":"5489d426a99dbec17de28ef3500d55926ac7628d563588a1aae905d9dd352f93"} Mar 19 11:54:42.930748 master-0 kubenswrapper[7454]: I0319 11:54:42.930734 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-zjdkm" Mar 19 11:54:42.931675 master-0 kubenswrapper[7454]: I0319 11:54:42.931629 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"632bdf3b-0ba0-4874-a2ec-8396683c35c5","Type":"ContainerStarted","Data":"0db01150a16f0758697f4004ab15abe194def9a3c61ba179de9b9e1316f2ccf4"} Mar 19 11:54:42.952961 master-0 kubenswrapper[7454]: I0319 11:54:42.952771 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg" event={"ID":"806a4c30-7b93-4430-86da-f9e1f4f2d206","Type":"ContainerStarted","Data":"5af120083ccfa19775f3cfbcd29e655aebb641b4ecf435859e1f29291e7340f7"} Mar 19 11:54:42.955686 master-0 kubenswrapper[7454]: I0319 11:54:42.955652 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" event={"ID":"19de6601-10d4-4112-a21f-0398d2b160d1","Type":"ContainerStarted","Data":"68c205a7cfb2de9d6da9f6cfb9c833d6d4d07f368dbd0c842790192b58bcaa3f"} Mar 19 11:54:42.965958 master-0 kubenswrapper[7454]: I0319 11:54:42.965920 7454 generic.go:334] "Generic (PLEG): container finished" podID="06726494-b3aa-45f2-9b1f-5ee0ea45275e" containerID="3a1076e64cec1680225e189b2c6e90d9b594aa9efeca7c872f23b13cb6329486" exitCode=0 Mar 19 11:54:42.966175 master-0 kubenswrapper[7454]: I0319 11:54:42.965972 7454 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f9d586bf8-khff4" Mar 19 11:54:42.966674 master-0 kubenswrapper[7454]: I0319 11:54:42.965997 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f9d586bf8-khff4" event={"ID":"06726494-b3aa-45f2-9b1f-5ee0ea45275e","Type":"ContainerDied","Data":"3a1076e64cec1680225e189b2c6e90d9b594aa9efeca7c872f23b13cb6329486"} Mar 19 11:54:42.966737 master-0 kubenswrapper[7454]: I0319 11:54:42.966693 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f9d586bf8-khff4" event={"ID":"06726494-b3aa-45f2-9b1f-5ee0ea45275e","Type":"ContainerDied","Data":"966a9480718bf1964806fd74fc213f6acb41d1cb66534abba3f84706d8211a6a"} Mar 19 11:54:42.966737 master-0 kubenswrapper[7454]: I0319 11:54:42.966721 7454 scope.go:117] "RemoveContainer" containerID="3a1076e64cec1680225e189b2c6e90d9b594aa9efeca7c872f23b13cb6329486" Mar 19 11:54:42.986157 master-0 kubenswrapper[7454]: I0319 11:54:42.985427 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" podStartSLOduration=11.288565852 podStartE2EDuration="25.985403087s" podCreationTimestamp="2026-03-19 11:54:17 +0000 UTC" firstStartedPulling="2026-03-19 11:54:21.182356425 +0000 UTC m=+30.812822338" lastFinishedPulling="2026-03-19 11:54:35.87919364 +0000 UTC m=+45.509659573" observedRunningTime="2026-03-19 11:54:42.983706244 +0000 UTC m=+52.614172157" watchObservedRunningTime="2026-03-19 11:54:42.985403087 +0000 UTC m=+52.615869000" Mar 19 11:54:42.997549 master-0 kubenswrapper[7454]: I0319 11:54:42.997503 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"f7fd0b13-489f-42b7-a52a-6194fdc9f665","Type":"ContainerStarted","Data":"65da2f47f4c8263662f98db014676bd0876e60b79722705d3aa8abd4a7e835b8"} Mar 19 11:54:43.001159 master-0 kubenswrapper[7454]: I0319 11:54:43.001031 7454 scope.go:117] "RemoveContainer" containerID="3a1076e64cec1680225e189b2c6e90d9b594aa9efeca7c872f23b13cb6329486" Mar 19 11:54:43.002200 master-0 kubenswrapper[7454]: E0319 11:54:43.002116 7454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a1076e64cec1680225e189b2c6e90d9b594aa9efeca7c872f23b13cb6329486\": container with ID starting with 3a1076e64cec1680225e189b2c6e90d9b594aa9efeca7c872f23b13cb6329486 not found: ID does not exist" containerID="3a1076e64cec1680225e189b2c6e90d9b594aa9efeca7c872f23b13cb6329486" Mar 19 11:54:43.002200 master-0 kubenswrapper[7454]: I0319 11:54:43.002159 7454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a1076e64cec1680225e189b2c6e90d9b594aa9efeca7c872f23b13cb6329486"} err="failed to get container status \"3a1076e64cec1680225e189b2c6e90d9b594aa9efeca7c872f23b13cb6329486\": rpc error: code = NotFound desc = could not find container \"3a1076e64cec1680225e189b2c6e90d9b594aa9efeca7c872f23b13cb6329486\": container with ID starting with 3a1076e64cec1680225e189b2c6e90d9b594aa9efeca7c872f23b13cb6329486 not found: ID does not exist" Mar 19 11:54:43.002692 master-0 kubenswrapper[7454]: I0319 11:54:43.002662 7454 generic.go:334] "Generic (PLEG): container finished" podID="5586a731-0e66-4ed1-a49e-a7f2dfb4a805" containerID="3be9fa9281fbba557659d5db283ef5b1d8dce759a907385695b00c2055e9fe48" exitCode=0 Mar 
19 11:54:43.003019 master-0 kubenswrapper[7454]: I0319 11:54:43.003000 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54b4cfc58b-pjsj5" Mar 19 11:54:43.003130 master-0 kubenswrapper[7454]: I0319 11:54:43.003087 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54b4cfc58b-pjsj5" event={"ID":"5586a731-0e66-4ed1-a49e-a7f2dfb4a805","Type":"ContainerDied","Data":"3be9fa9281fbba557659d5db283ef5b1d8dce759a907385695b00c2055e9fe48"} Mar 19 11:54:43.003200 master-0 kubenswrapper[7454]: I0319 11:54:43.003145 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54b4cfc58b-pjsj5" event={"ID":"5586a731-0e66-4ed1-a49e-a7f2dfb4a805","Type":"ContainerDied","Data":"b54704a4cddbd896cf2a6a351c9a09473ff5d720f7719001acfafe762110baa6"} Mar 19 11:54:43.003200 master-0 kubenswrapper[7454]: I0319 11:54:43.003165 7454 scope.go:117] "RemoveContainer" containerID="3be9fa9281fbba557659d5db283ef5b1d8dce759a907385695b00c2055e9fe48" Mar 19 11:54:43.008206 master-0 kubenswrapper[7454]: I0319 11:54:43.007248 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77497070-ffa8-45e5-935d-5281828d6962-utilities\") pod \"certified-operators-p225c\" (UID: \"77497070-ffa8-45e5-935d-5281828d6962\") " pod="openshift-marketplace/certified-operators-p225c" Mar 19 11:54:43.008206 master-0 kubenswrapper[7454]: I0319 11:54:43.007302 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77497070-ffa8-45e5-935d-5281828d6962-catalog-content\") pod \"certified-operators-p225c\" (UID: \"77497070-ffa8-45e5-935d-5281828d6962\") " pod="openshift-marketplace/certified-operators-p225c" Mar 19 11:54:43.008206 master-0 kubenswrapper[7454]: I0319 11:54:43.007487 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5rxc\" (UniqueName: \"kubernetes.io/projected/77497070-ffa8-45e5-935d-5281828d6962-kube-api-access-d5rxc\") pod \"certified-operators-p225c\" (UID: \"77497070-ffa8-45e5-935d-5281828d6962\") " pod="openshift-marketplace/certified-operators-p225c" Mar 19 11:54:43.015553 master-0 kubenswrapper[7454]: I0319 11:54:43.015517 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77497070-ffa8-45e5-935d-5281828d6962-catalog-content\") pod \"certified-operators-p225c\" (UID: \"77497070-ffa8-45e5-935d-5281828d6962\") " pod="openshift-marketplace/certified-operators-p225c" Mar 19 11:54:43.016367 master-0 kubenswrapper[7454]: I0319 11:54:43.016332 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77497070-ffa8-45e5-935d-5281828d6962-utilities\") pod \"certified-operators-p225c\" (UID: \"77497070-ffa8-45e5-935d-5281828d6962\") " pod="openshift-marketplace/certified-operators-p225c" Mar 19 11:54:43.031505 master-0 kubenswrapper[7454]: I0319 11:54:43.026085 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" event={"ID":"b80027fd-7b39-477a-a337-ff9bb08e7eeb","Type":"ContainerStarted","Data":"200a06c899a0965df44a32ca37904722fe84e2d824b3e5f590ceb43f513d3c8b"} Mar 19 11:54:43.036150 master-0 kubenswrapper[7454]: I0319 11:54:43.034955 
7454 scope.go:117] "RemoveContainer" containerID="3be9fa9281fbba557659d5db283ef5b1d8dce759a907385695b00c2055e9fe48" Mar 19 11:54:43.042210 master-0 kubenswrapper[7454]: E0319 11:54:43.042153 7454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3be9fa9281fbba557659d5db283ef5b1d8dce759a907385695b00c2055e9fe48\": container with ID starting with 3be9fa9281fbba557659d5db283ef5b1d8dce759a907385695b00c2055e9fe48 not found: ID does not exist" containerID="3be9fa9281fbba557659d5db283ef5b1d8dce759a907385695b00c2055e9fe48" Mar 19 11:54:43.042210 master-0 kubenswrapper[7454]: I0319 11:54:43.042198 7454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3be9fa9281fbba557659d5db283ef5b1d8dce759a907385695b00c2055e9fe48"} err="failed to get container status \"3be9fa9281fbba557659d5db283ef5b1d8dce759a907385695b00c2055e9fe48\": rpc error: code = NotFound desc = could not find container \"3be9fa9281fbba557659d5db283ef5b1d8dce759a907385695b00c2055e9fe48\": container with ID starting with 3be9fa9281fbba557659d5db283ef5b1d8dce759a907385695b00c2055e9fe48 not found: ID does not exist" Mar 19 11:54:43.047990 master-0 kubenswrapper[7454]: I0319 11:54:43.045695 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5rxc\" (UniqueName: \"kubernetes.io/projected/77497070-ffa8-45e5-935d-5281828d6962-kube-api-access-d5rxc\") pod \"certified-operators-p225c\" (UID: \"77497070-ffa8-45e5-935d-5281828d6962\") " pod="openshift-marketplace/certified-operators-p225c" Mar 19 11:54:43.065218 master-0 kubenswrapper[7454]: I0319 11:54:43.064627 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"4b49f09f-2efa-4657-9f5a-fbddd42bee0d","Type":"ContainerStarted","Data":"1f0110e6404807316fe552282de736e25a5c73a98ca28c762d1ca02e35c0a306"} Mar 19 11:54:43.087985 master-0 kubenswrapper[7454]: I0319 11:54:43.083342 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-zjdkm" podStartSLOduration=8.65251155 podStartE2EDuration="21.083324634s" podCreationTimestamp="2026-03-19 11:54:22 +0000 UTC" firstStartedPulling="2026-03-19 11:54:23.445278279 +0000 UTC m=+33.075744192" lastFinishedPulling="2026-03-19 11:54:35.876091363 +0000 UTC m=+45.506557276" observedRunningTime="2026-03-19 11:54:43.032353788 +0000 UTC m=+52.662819701" watchObservedRunningTime="2026-03-19 11:54:43.083324634 +0000 UTC m=+52.713790547" Mar 19 11:54:43.087985 master-0 kubenswrapper[7454]: I0319 11:54:43.083443 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7d58488df-czxxt" podStartSLOduration=2.083438848 podStartE2EDuration="2.083438848s" podCreationTimestamp="2026-03-19 11:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 11:54:43.082051784 +0000 UTC m=+52.712517707" watchObservedRunningTime="2026-03-19 11:54:43.083438848 +0000 UTC m=+52.713904761" Mar 19 11:54:43.104025 master-0 kubenswrapper[7454]: I0319 11:54:43.100172 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 11:54:43.132844 master-0 kubenswrapper[7454]: I0319 11:54:43.128601 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 11:54:43.132844 master-0 kubenswrapper[7454]: I0319 11:54:43.128646 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 11:54:43.145289 master-0 kubenswrapper[7454]: I0319 11:54:43.139509 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p225c" Mar 19 11:54:43.182049 master-0 kubenswrapper[7454]: I0319 11:54:43.181976 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-ms2wn" podStartSLOduration=15.181958983 podStartE2EDuration="15.181958983s" podCreationTimestamp="2026-03-19 11:54:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 11:54:43.181298123 +0000 UTC m=+52.811764056" watchObservedRunningTime="2026-03-19 11:54:43.181958983 +0000 UTC m=+52.812424896" Mar 19 11:54:43.214338 master-0 kubenswrapper[7454]: I0319 11:54:43.208808 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-master-0" podStartSLOduration=10.208772963 podStartE2EDuration="10.208772963s" podCreationTimestamp="2026-03-19 11:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 11:54:43.207123401 +0000 UTC m=+52.837589324" watchObservedRunningTime="2026-03-19 11:54:43.208772963 +0000 UTC m=+52.839238876" Mar 19 11:54:43.279296 master-0 kubenswrapper[7454]: I0319 11:54:43.277945 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 11:54:43.298817 master-0 kubenswrapper[7454]: I0319 11:54:43.296703 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-3-master-0" podStartSLOduration=11.296681016 podStartE2EDuration="11.296681016s" podCreationTimestamp="2026-03-19 11:54:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 11:54:43.286149746 +0000 UTC m=+52.916615659" watchObservedRunningTime="2026-03-19 11:54:43.296681016 +0000 UTC m=+52.927146919" Mar 19 11:54:43.311257 master-0 kubenswrapper[7454]: I0319 11:54:43.311202 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-54b4cfc58b-pjsj5"] Mar 19 11:54:43.334815 master-0 kubenswrapper[7454]: I0319 11:54:43.331954 7454 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-54b4cfc58b-pjsj5"] Mar 19 11:54:43.392831 master-0 kubenswrapper[7454]: I0319 11:54:43.392595 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f9d586bf8-khff4"] Mar 19 11:54:43.410041 master-0 kubenswrapper[7454]: I0319 11:54:43.407778 7454 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f9d586bf8-khff4"] Mar 19 11:54:43.609206 master-0 kubenswrapper[7454]: I0319 11:54:43.609161 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p225c"] Mar 19 11:54:43.629211 master-0 kubenswrapper[7454]: I0319 11:54:43.627571 7454 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kj4wv"] Mar 19 11:54:43.654017 master-0 kubenswrapper[7454]: I0319 11:54:43.652518 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 19 11:54:43.696365 master-0 kubenswrapper[7454]: W0319 11:54:43.695364 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod903d114c_199f_46f9_b39b_afa52df71ea9.slice/crio-785658b3a5e114a93a0f8abff53d8f934cc7da626b174692818b21ff44c148b4 WatchSource:0}: Error finding container 785658b3a5e114a93a0f8abff53d8f934cc7da626b174692818b21ff44c148b4: Status 404 returned error can't find the container with id 785658b3a5e114a93a0f8abff53d8f934cc7da626b174692818b21ff44c148b4 Mar 19 11:54:44.010021 master-0 kubenswrapper[7454]: I0319 11:54:44.007732 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-flnbx"] Mar 19 11:54:44.010021 master-0 kubenswrapper[7454]: I0319 11:54:44.008592 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-flnbx" Mar 19 11:54:44.032821 master-0 kubenswrapper[7454]: I0319 11:54:44.029334 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-flnbx"] Mar 19 11:54:44.071017 master-0 kubenswrapper[7454]: I0319 11:54:44.070961 7454 generic.go:334] "Generic (PLEG): container finished" podID="77497070-ffa8-45e5-935d-5281828d6962" containerID="28ea8e21ba4afb486655483dd5710a876c83a2c01ce9d2707b582b55e7a1e992" exitCode=0 Mar 19 11:54:44.071237 master-0 kubenswrapper[7454]: I0319 11:54:44.071064 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p225c" event={"ID":"77497070-ffa8-45e5-935d-5281828d6962","Type":"ContainerDied","Data":"28ea8e21ba4afb486655483dd5710a876c83a2c01ce9d2707b582b55e7a1e992"} Mar 19 11:54:44.071237 master-0 kubenswrapper[7454]: I0319 11:54:44.071111 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p225c" event={"ID":"77497070-ffa8-45e5-935d-5281828d6962","Type":"ContainerStarted","Data":"f071d5c6e7e1f35bc260aa337d9b194fe82c1243aca8a2aec9d30be0bb3216e9"} Mar 19 11:54:44.082048 master-0 kubenswrapper[7454]: I0319 11:54:44.081214 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-897cc986b-vpg2l" event={"ID":"13503fef-09b2-4dbe-9537-a5b361e7b591","Type":"ContainerStarted","Data":"b90c60f1fb4b9dc04ebdc34bb4b27ed7b3e0e2945ef955f7bbed35d2ca9553fa"} Mar 19 11:54:44.082048 master-0 kubenswrapper[7454]: I0319 11:54:44.081294 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-897cc986b-vpg2l" event={"ID":"13503fef-09b2-4dbe-9537-a5b361e7b591","Type":"ContainerStarted","Data":"f8dc91cb7e539049bfed3d0268ef75199c4d1cbe7673ed5611ff317e3d20b1d1"} Mar 19 11:54:44.088009 master-0 kubenswrapper[7454]: I0319 11:54:44.086928 7454 generic.go:334] "Generic (PLEG): container finished" podID="903d114c-199f-46f9-b39b-afa52df71ea9" containerID="f0e553719f0fc9ee108ddf64a7ed8d6042b89d616fa26b1f17b8415d992c87a2" exitCode=0 Mar 19 11:54:44.088009 master-0 kubenswrapper[7454]: I0319 11:54:44.087032 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kj4wv" 
event={"ID":"903d114c-199f-46f9-b39b-afa52df71ea9","Type":"ContainerDied","Data":"f0e553719f0fc9ee108ddf64a7ed8d6042b89d616fa26b1f17b8415d992c87a2"} Mar 19 11:54:44.088009 master-0 kubenswrapper[7454]: I0319 11:54:44.087067 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kj4wv" event={"ID":"903d114c-199f-46f9-b39b-afa52df71ea9","Type":"ContainerStarted","Data":"785658b3a5e114a93a0f8abff53d8f934cc7da626b174692818b21ff44c148b4"} Mar 19 11:54:44.100981 master-0 kubenswrapper[7454]: I0319 11:54:44.100929 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 11:54:44.139772 master-0 kubenswrapper[7454]: I0319 11:54:44.139495 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1370cf76-52c4-4f19-8dfc-794f2901f8a6-utilities\") pod \"community-operators-flnbx\" (UID: \"1370cf76-52c4-4f19-8dfc-794f2901f8a6\") " pod="openshift-marketplace/community-operators-flnbx" Mar 19 11:54:44.139772 master-0 kubenswrapper[7454]: I0319 11:54:44.139574 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxbgx\" (UniqueName: \"kubernetes.io/projected/1370cf76-52c4-4f19-8dfc-794f2901f8a6-kube-api-access-qxbgx\") pod \"community-operators-flnbx\" (UID: \"1370cf76-52c4-4f19-8dfc-794f2901f8a6\") " pod="openshift-marketplace/community-operators-flnbx" Mar 19 11:54:44.139772 master-0 kubenswrapper[7454]: I0319 11:54:44.139658 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1370cf76-52c4-4f19-8dfc-794f2901f8a6-catalog-content\") pod \"community-operators-flnbx\" (UID: \"1370cf76-52c4-4f19-8dfc-794f2901f8a6\") " pod="openshift-marketplace/community-operators-flnbx" Mar 19 11:54:44.140745 master-0 kubenswrapper[7454]: I0319 11:54:44.140677 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-897cc986b-vpg2l" podStartSLOduration=9.221116308 podStartE2EDuration="26.14066523s" podCreationTimestamp="2026-03-19 11:54:18 +0000 UTC" firstStartedPulling="2026-03-19 11:54:23.302174907 +0000 UTC m=+32.932640810" lastFinishedPulling="2026-03-19 11:54:40.221723819 +0000 UTC m=+49.852189732" observedRunningTime="2026-03-19 11:54:44.138815683 +0000 UTC m=+53.769281606" watchObservedRunningTime="2026-03-19 11:54:44.14066523 +0000 UTC m=+53.771131143" Mar 19 11:54:44.241018 master-0 kubenswrapper[7454]: I0319 11:54:44.240941 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1370cf76-52c4-4f19-8dfc-794f2901f8a6-catalog-content\") pod \"community-operators-flnbx\" (UID: \"1370cf76-52c4-4f19-8dfc-794f2901f8a6\") " pod="openshift-marketplace/community-operators-flnbx" Mar 19 11:54:44.241344 master-0 kubenswrapper[7454]: I0319 11:54:44.241075 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1370cf76-52c4-4f19-8dfc-794f2901f8a6-utilities\") pod \"community-operators-flnbx\" (UID: \"1370cf76-52c4-4f19-8dfc-794f2901f8a6\") " pod="openshift-marketplace/community-operators-flnbx" Mar 19 11:54:44.241344 master-0 kubenswrapper[7454]: I0319 11:54:44.241203 7454 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-qxbgx\" (UniqueName: \"kubernetes.io/projected/1370cf76-52c4-4f19-8dfc-794f2901f8a6-kube-api-access-qxbgx\") pod \"community-operators-flnbx\" (UID: \"1370cf76-52c4-4f19-8dfc-794f2901f8a6\") " pod="openshift-marketplace/community-operators-flnbx" Mar 19 11:54:44.253833 master-0 kubenswrapper[7454]: I0319 11:54:44.249188 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1370cf76-52c4-4f19-8dfc-794f2901f8a6-utilities\") pod \"community-operators-flnbx\" (UID: \"1370cf76-52c4-4f19-8dfc-794f2901f8a6\") " pod="openshift-marketplace/community-operators-flnbx" Mar 19 11:54:44.253833 master-0 kubenswrapper[7454]: I0319 11:54:44.249391 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1370cf76-52c4-4f19-8dfc-794f2901f8a6-catalog-content\") pod \"community-operators-flnbx\" (UID: \"1370cf76-52c4-4f19-8dfc-794f2901f8a6\") " pod="openshift-marketplace/community-operators-flnbx" Mar 19 11:54:44.315011 master-0 kubenswrapper[7454]: I0319 11:54:44.302847 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb"] Mar 19 11:54:44.315011 master-0 kubenswrapper[7454]: I0319 11:54:44.303410 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb" Mar 19 11:54:44.315011 master-0 kubenswrapper[7454]: I0319 11:54:44.308687 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 19 11:54:44.315011 master-0 kubenswrapper[7454]: I0319 11:54:44.310435 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxbgx\" (UniqueName: \"kubernetes.io/projected/1370cf76-52c4-4f19-8dfc-794f2901f8a6-kube-api-access-qxbgx\") pod \"community-operators-flnbx\" (UID: \"1370cf76-52c4-4f19-8dfc-794f2901f8a6\") " pod="openshift-marketplace/community-operators-flnbx" Mar 19 11:54:44.339962 master-0 kubenswrapper[7454]: I0319 11:54:44.339906 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb"] Mar 19 11:54:44.342422 master-0 kubenswrapper[7454]: I0319 11:54:44.342021 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-flnbx" Mar 19 11:54:44.462077 master-0 kubenswrapper[7454]: I0319 11:54:44.462037 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/be4349fa-5c67-4135-80a7-b8a694553662-apiservice-cert\") pod \"packageserver-77d68bd5f8-w9hmb\" (UID: \"be4349fa-5c67-4135-80a7-b8a694553662\") " pod="openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb" Mar 19 11:54:44.462377 master-0 kubenswrapper[7454]: I0319 11:54:44.462092 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/be4349fa-5c67-4135-80a7-b8a694553662-tmpfs\") pod \"packageserver-77d68bd5f8-w9hmb\" (UID: \"be4349fa-5c67-4135-80a7-b8a694553662\") " pod="openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb" Mar 19 11:54:44.462377 master-0 kubenswrapper[7454]: I0319 11:54:44.462152 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbzj2\" (UniqueName: \"kubernetes.io/projected/be4349fa-5c67-4135-80a7-b8a694553662-kube-api-access-jbzj2\") pod \"packageserver-77d68bd5f8-w9hmb\" (UID: \"be4349fa-5c67-4135-80a7-b8a694553662\") " pod="openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb" Mar 19 11:54:44.462377 master-0 kubenswrapper[7454]: I0319 11:54:44.462185 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/be4349fa-5c67-4135-80a7-b8a694553662-webhook-cert\") pod \"packageserver-77d68bd5f8-w9hmb\" (UID: \"be4349fa-5c67-4135-80a7-b8a694553662\") " pod="openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb" Mar 19 11:54:44.564854 master-0 kubenswrapper[7454]: I0319 11:54:44.563557 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/be4349fa-5c67-4135-80a7-b8a694553662-webhook-cert\") pod \"packageserver-77d68bd5f8-w9hmb\" (UID: \"be4349fa-5c67-4135-80a7-b8a694553662\") " pod="openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb" Mar 19 11:54:44.564854 master-0 kubenswrapper[7454]: I0319 11:54:44.563624 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/be4349fa-5c67-4135-80a7-b8a694553662-apiservice-cert\") pod \"packageserver-77d68bd5f8-w9hmb\" (UID: \"be4349fa-5c67-4135-80a7-b8a694553662\") " pod="openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb" Mar 19 11:54:44.564854 master-0 kubenswrapper[7454]: I0319 11:54:44.563654 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/be4349fa-5c67-4135-80a7-b8a694553662-tmpfs\") pod \"packageserver-77d68bd5f8-w9hmb\" (UID: \"be4349fa-5c67-4135-80a7-b8a694553662\") " pod="openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb" Mar 19 11:54:44.564854 master-0 kubenswrapper[7454]: I0319 11:54:44.563685 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbzj2\" (UniqueName: \"kubernetes.io/projected/be4349fa-5c67-4135-80a7-b8a694553662-kube-api-access-jbzj2\") pod \"packageserver-77d68bd5f8-w9hmb\" (UID: \"be4349fa-5c67-4135-80a7-b8a694553662\") " 
pod="openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb" Mar 19 11:54:44.565700 master-0 kubenswrapper[7454]: I0319 11:54:44.565177 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/be4349fa-5c67-4135-80a7-b8a694553662-tmpfs\") pod \"packageserver-77d68bd5f8-w9hmb\" (UID: \"be4349fa-5c67-4135-80a7-b8a694553662\") " pod="openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb" Mar 19 11:54:44.567396 master-0 kubenswrapper[7454]: I0319 11:54:44.567073 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/be4349fa-5c67-4135-80a7-b8a694553662-apiservice-cert\") pod \"packageserver-77d68bd5f8-w9hmb\" (UID: \"be4349fa-5c67-4135-80a7-b8a694553662\") " pod="openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb" Mar 19 11:54:44.568134 master-0 kubenswrapper[7454]: I0319 11:54:44.568061 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/be4349fa-5c67-4135-80a7-b8a694553662-webhook-cert\") pod \"packageserver-77d68bd5f8-w9hmb\" (UID: \"be4349fa-5c67-4135-80a7-b8a694553662\") " pod="openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb" Mar 19 11:54:44.609215 master-0 kubenswrapper[7454]: I0319 11:54:44.609130 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbzj2\" (UniqueName: \"kubernetes.io/projected/be4349fa-5c67-4135-80a7-b8a694553662-kube-api-access-jbzj2\") pod \"packageserver-77d68bd5f8-w9hmb\" (UID: \"be4349fa-5c67-4135-80a7-b8a694553662\") " pod="openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb" Mar 19 11:54:44.654466 master-0 kubenswrapper[7454]: I0319 11:54:44.650140 7454 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06726494-b3aa-45f2-9b1f-5ee0ea45275e" path="/var/lib/kubelet/pods/06726494-b3aa-45f2-9b1f-5ee0ea45275e/volumes" Mar 19 11:54:44.654466 master-0 kubenswrapper[7454]: I0319 11:54:44.650738 7454 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5586a731-0e66-4ed1-a49e-a7f2dfb4a805" path="/var/lib/kubelet/pods/5586a731-0e66-4ed1-a49e-a7f2dfb4a805/volumes" Mar 19 11:54:44.806137 master-0 kubenswrapper[7454]: I0319 11:54:44.806061 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb" Mar 19 11:54:44.897287 master-0 kubenswrapper[7454]: I0319 11:54:44.888108 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9"] Mar 19 11:54:44.921082 master-0 kubenswrapper[7454]: I0319 11:54:44.897694 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7cdddc6cb-q222c"] Mar 19 11:54:44.921082 master-0 kubenswrapper[7454]: I0319 11:54:44.898370 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 11:54:44.921082 master-0 kubenswrapper[7454]: I0319 11:54:44.899193 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9" Mar 19 11:54:44.921082 master-0 kubenswrapper[7454]: I0319 11:54:44.908537 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 19 11:54:44.921082 master-0 kubenswrapper[7454]: I0319 11:54:44.912740 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 19 11:54:44.921082 master-0 kubenswrapper[7454]: I0319 11:54:44.912872 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 19 11:54:44.921082 master-0 kubenswrapper[7454]: I0319 11:54:44.912977 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 19 11:54:44.921082 master-0 kubenswrapper[7454]: I0319 11:54:44.913101 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 19 11:54:44.921082 master-0 kubenswrapper[7454]: I0319 11:54:44.913332 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 19 11:54:44.921082 master-0 kubenswrapper[7454]: I0319 11:54:44.913446 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 19 11:54:44.921082 master-0 kubenswrapper[7454]: I0319 11:54:44.913595 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 19 11:54:44.921082 master-0 kubenswrapper[7454]: I0319 11:54:44.913965 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 19 11:54:44.921082 master-0 kubenswrapper[7454]: I0319 11:54:44.914491 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 19 11:54:44.921575 master-0 kubenswrapper[7454]: I0319 11:54:44.921259 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 19 11:54:44.938819 master-0 kubenswrapper[7454]: I0319 11:54:44.932004 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9"] Mar 19 11:54:44.985334 master-0 kubenswrapper[7454]: I0319 11:54:44.969374 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cdddc6cb-q222c"] Mar 19 11:54:44.985334 master-0 kubenswrapper[7454]: I0319 11:54:44.982017 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-flnbx"] Mar 19 11:54:45.088131 master-0 kubenswrapper[7454]: W0319 11:54:45.081392 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1370cf76_52c4_4f19_8dfc_794f2901f8a6.slice/crio-673b063d313abd4fa88faf273eacc91a4214aa37217c17c5778c669aaa95fb83 WatchSource:0}: Error finding container 673b063d313abd4fa88faf273eacc91a4214aa37217c17c5778c669aaa95fb83: Status 404 returned error can't find the container with id 673b063d313abd4fa88faf273eacc91a4214aa37217c17c5778c669aaa95fb83 Mar 19 11:54:45.088131 master-0 kubenswrapper[7454]: I0319 11:54:45.082681 7454 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srbt4\" (UniqueName: \"kubernetes.io/projected/3a6b082a-649b-43f6-8e24-cf222873fe39-kube-api-access-srbt4\") pod \"controller-manager-7cdddc6cb-q222c\" (UID: \"3a6b082a-649b-43f6-8e24-cf222873fe39\") " pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 11:54:45.088131 master-0 kubenswrapper[7454]: I0319 11:54:45.082726 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3a6b082a-649b-43f6-8e24-cf222873fe39-proxy-ca-bundles\") pod \"controller-manager-7cdddc6cb-q222c\" (UID: \"3a6b082a-649b-43f6-8e24-cf222873fe39\") " pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 11:54:45.088131 master-0 kubenswrapper[7454]: I0319 11:54:45.082750 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da9becfb-a504-4ef7-92ed-cd2db439d5db-client-ca\") pod \"route-controller-manager-fdb67f9cf-vkmd9\" (UID: \"da9becfb-a504-4ef7-92ed-cd2db439d5db\") " pod="openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9" Mar 19 11:54:45.088131 master-0 kubenswrapper[7454]: I0319 11:54:45.082842 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvzcn\" (UniqueName: \"kubernetes.io/projected/da9becfb-a504-4ef7-92ed-cd2db439d5db-kube-api-access-lvzcn\") pod \"route-controller-manager-fdb67f9cf-vkmd9\" (UID: \"da9becfb-a504-4ef7-92ed-cd2db439d5db\") " pod="openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9" Mar 19 11:54:45.088131 master-0 kubenswrapper[7454]: I0319 11:54:45.082875 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a6b082a-649b-43f6-8e24-cf222873fe39-config\") pod \"controller-manager-7cdddc6cb-q222c\" (UID: \"3a6b082a-649b-43f6-8e24-cf222873fe39\") " pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 11:54:45.088131 master-0 kubenswrapper[7454]: I0319 11:54:45.082913 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da9becfb-a504-4ef7-92ed-cd2db439d5db-config\") pod \"route-controller-manager-fdb67f9cf-vkmd9\" (UID: \"da9becfb-a504-4ef7-92ed-cd2db439d5db\") " pod="openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9" Mar 19 11:54:45.088131 master-0 kubenswrapper[7454]: I0319 11:54:45.082969 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da9becfb-a504-4ef7-92ed-cd2db439d5db-serving-cert\") pod \"route-controller-manager-fdb67f9cf-vkmd9\" (UID: \"da9becfb-a504-4ef7-92ed-cd2db439d5db\") " pod="openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9" Mar 19 11:54:45.088131 master-0 kubenswrapper[7454]: I0319 11:54:45.082991 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a6b082a-649b-43f6-8e24-cf222873fe39-client-ca\") pod \"controller-manager-7cdddc6cb-q222c\" (UID: \"3a6b082a-649b-43f6-8e24-cf222873fe39\") " pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 
11:54:45.088131 master-0 kubenswrapper[7454]: I0319 11:54:45.083012 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a6b082a-649b-43f6-8e24-cf222873fe39-serving-cert\") pod \"controller-manager-7cdddc6cb-q222c\" (UID: \"3a6b082a-649b-43f6-8e24-cf222873fe39\") " pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 11:54:45.134329 master-0 kubenswrapper[7454]: I0319 11:54:45.134221 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-flnbx" event={"ID":"1370cf76-52c4-4f19-8dfc-794f2901f8a6","Type":"ContainerStarted","Data":"673b063d313abd4fa88faf273eacc91a4214aa37217c17c5778c669aaa95fb83"} Mar 19 11:54:45.135667 master-0 kubenswrapper[7454]: I0319 11:54:45.135049 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-1-master-0" podUID="f7fd0b13-489f-42b7-a52a-6194fdc9f665" containerName="installer" containerID="cri-o://65da2f47f4c8263662f98db014676bd0876e60b79722705d3aa8abd4a7e835b8" gracePeriod=30 Mar 19 11:54:45.184131 master-0 kubenswrapper[7454]: I0319 11:54:45.183973 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da9becfb-a504-4ef7-92ed-cd2db439d5db-serving-cert\") pod \"route-controller-manager-fdb67f9cf-vkmd9\" (UID: \"da9becfb-a504-4ef7-92ed-cd2db439d5db\") " pod="openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9" Mar 19 11:54:45.184131 master-0 kubenswrapper[7454]: I0319 11:54:45.184009 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a6b082a-649b-43f6-8e24-cf222873fe39-client-ca\") pod \"controller-manager-7cdddc6cb-q222c\" (UID: \"3a6b082a-649b-43f6-8e24-cf222873fe39\") " pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 11:54:45.184131 master-0 kubenswrapper[7454]: I0319 11:54:45.184026 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a6b082a-649b-43f6-8e24-cf222873fe39-serving-cert\") pod \"controller-manager-7cdddc6cb-q222c\" (UID: \"3a6b082a-649b-43f6-8e24-cf222873fe39\") " pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 11:54:45.184131 master-0 kubenswrapper[7454]: I0319 11:54:45.184056 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srbt4\" (UniqueName: \"kubernetes.io/projected/3a6b082a-649b-43f6-8e24-cf222873fe39-kube-api-access-srbt4\") pod \"controller-manager-7cdddc6cb-q222c\" (UID: \"3a6b082a-649b-43f6-8e24-cf222873fe39\") " pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 11:54:45.184131 master-0 kubenswrapper[7454]: I0319 11:54:45.184078 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3a6b082a-649b-43f6-8e24-cf222873fe39-proxy-ca-bundles\") pod \"controller-manager-7cdddc6cb-q222c\" (UID: \"3a6b082a-649b-43f6-8e24-cf222873fe39\") " pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 11:54:45.184131 master-0 kubenswrapper[7454]: I0319 11:54:45.184096 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/da9becfb-a504-4ef7-92ed-cd2db439d5db-client-ca\") pod \"route-controller-manager-fdb67f9cf-vkmd9\" (UID: \"da9becfb-a504-4ef7-92ed-cd2db439d5db\") " pod="openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9" Mar 19 11:54:45.185281 master-0 kubenswrapper[7454]: I0319 11:54:45.184125 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvzcn\" (UniqueName: \"kubernetes.io/projected/da9becfb-a504-4ef7-92ed-cd2db439d5db-kube-api-access-lvzcn\") pod \"route-controller-manager-fdb67f9cf-vkmd9\" (UID: \"da9becfb-a504-4ef7-92ed-cd2db439d5db\") " pod="openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9" Mar 19 11:54:45.185281 master-0 kubenswrapper[7454]: I0319 11:54:45.185018 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a6b082a-649b-43f6-8e24-cf222873fe39-config\") pod \"controller-manager-7cdddc6cb-q222c\" (UID: \"3a6b082a-649b-43f6-8e24-cf222873fe39\") " pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 11:54:45.185281 master-0 kubenswrapper[7454]: I0319 11:54:45.185072 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da9becfb-a504-4ef7-92ed-cd2db439d5db-config\") pod \"route-controller-manager-fdb67f9cf-vkmd9\" (UID: \"da9becfb-a504-4ef7-92ed-cd2db439d5db\") " pod="openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9" Mar 19 11:54:45.186081 master-0 kubenswrapper[7454]: I0319 11:54:45.186052 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da9becfb-a504-4ef7-92ed-cd2db439d5db-config\") pod \"route-controller-manager-fdb67f9cf-vkmd9\" (UID: \"da9becfb-a504-4ef7-92ed-cd2db439d5db\") " pod="openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9" Mar 19 11:54:45.187842 master-0 kubenswrapper[7454]: I0319 11:54:45.187770 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da9becfb-a504-4ef7-92ed-cd2db439d5db-client-ca\") pod \"route-controller-manager-fdb67f9cf-vkmd9\" (UID: \"da9becfb-a504-4ef7-92ed-cd2db439d5db\") " pod="openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9" Mar 19 11:54:45.188502 master-0 kubenswrapper[7454]: I0319 11:54:45.188308 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a6b082a-649b-43f6-8e24-cf222873fe39-config\") pod \"controller-manager-7cdddc6cb-q222c\" (UID: \"3a6b082a-649b-43f6-8e24-cf222873fe39\") " pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 11:54:45.190280 master-0 kubenswrapper[7454]: I0319 11:54:45.190256 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3a6b082a-649b-43f6-8e24-cf222873fe39-proxy-ca-bundles\") pod \"controller-manager-7cdddc6cb-q222c\" (UID: \"3a6b082a-649b-43f6-8e24-cf222873fe39\") " pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 11:54:45.205901 master-0 kubenswrapper[7454]: I0319 11:54:45.201293 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da9becfb-a504-4ef7-92ed-cd2db439d5db-serving-cert\") pod 
\"route-controller-manager-fdb67f9cf-vkmd9\" (UID: \"da9becfb-a504-4ef7-92ed-cd2db439d5db\") " pod="openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9" Mar 19 11:54:45.205901 master-0 kubenswrapper[7454]: I0319 11:54:45.203257 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a6b082a-649b-43f6-8e24-cf222873fe39-client-ca\") pod \"controller-manager-7cdddc6cb-q222c\" (UID: \"3a6b082a-649b-43f6-8e24-cf222873fe39\") " pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 11:54:45.207655 master-0 kubenswrapper[7454]: I0319 11:54:45.207625 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvzcn\" (UniqueName: \"kubernetes.io/projected/da9becfb-a504-4ef7-92ed-cd2db439d5db-kube-api-access-lvzcn\") pod \"route-controller-manager-fdb67f9cf-vkmd9\" (UID: \"da9becfb-a504-4ef7-92ed-cd2db439d5db\") " pod="openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9" Mar 19 11:54:45.208472 master-0 kubenswrapper[7454]: I0319 11:54:45.208416 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a6b082a-649b-43f6-8e24-cf222873fe39-serving-cert\") pod \"controller-manager-7cdddc6cb-q222c\" (UID: \"3a6b082a-649b-43f6-8e24-cf222873fe39\") " pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 11:54:45.217372 master-0 kubenswrapper[7454]: I0319 11:54:45.215394 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srbt4\" (UniqueName: \"kubernetes.io/projected/3a6b082a-649b-43f6-8e24-cf222873fe39-kube-api-access-srbt4\") pod \"controller-manager-7cdddc6cb-q222c\" (UID: \"3a6b082a-649b-43f6-8e24-cf222873fe39\") " pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 11:54:45.226876 master-0 kubenswrapper[7454]: I0319 11:54:45.225596 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bkx8c"] Mar 19 11:54:45.231546 master-0 kubenswrapper[7454]: I0319 11:54:45.231507 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bkx8c" Mar 19 11:54:45.240851 master-0 kubenswrapper[7454]: I0319 11:54:45.240789 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bkx8c"] Mar 19 11:54:45.247262 master-0 kubenswrapper[7454]: I0319 11:54:45.247225 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 11:54:45.254192 master-0 kubenswrapper[7454]: I0319 11:54:45.253884 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9" Mar 19 11:54:45.270965 master-0 kubenswrapper[7454]: I0319 11:54:45.270469 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb"] Mar 19 11:54:45.407169 master-0 kubenswrapper[7454]: I0319 11:54:45.390463 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db75b266-69c4-4790-82f1-43168b5bb6a0-catalog-content\") pod \"redhat-marketplace-bkx8c\" (UID: \"db75b266-69c4-4790-82f1-43168b5bb6a0\") " pod="openshift-marketplace/redhat-marketplace-bkx8c" Mar 19 11:54:45.407169 master-0 kubenswrapper[7454]: I0319 11:54:45.390548 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjz8g\" (UniqueName: \"kubernetes.io/projected/db75b266-69c4-4790-82f1-43168b5bb6a0-kube-api-access-pjz8g\") pod \"redhat-marketplace-bkx8c\" (UID: \"db75b266-69c4-4790-82f1-43168b5bb6a0\") " pod="openshift-marketplace/redhat-marketplace-bkx8c" Mar 19 11:54:45.407169 master-0 kubenswrapper[7454]: I0319 11:54:45.390567 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db75b266-69c4-4790-82f1-43168b5bb6a0-utilities\") pod \"redhat-marketplace-bkx8c\" (UID: \"db75b266-69c4-4790-82f1-43168b5bb6a0\") " pod="openshift-marketplace/redhat-marketplace-bkx8c" Mar 19 11:54:45.492209 master-0 kubenswrapper[7454]: I0319 11:54:45.492053 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjz8g\" (UniqueName: \"kubernetes.io/projected/db75b266-69c4-4790-82f1-43168b5bb6a0-kube-api-access-pjz8g\") pod \"redhat-marketplace-bkx8c\" (UID: \"db75b266-69c4-4790-82f1-43168b5bb6a0\") " pod="openshift-marketplace/redhat-marketplace-bkx8c" Mar 19 11:54:45.492209 master-0 kubenswrapper[7454]: I0319 11:54:45.492105 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db75b266-69c4-4790-82f1-43168b5bb6a0-utilities\") pod \"redhat-marketplace-bkx8c\" (UID: \"db75b266-69c4-4790-82f1-43168b5bb6a0\") " pod="openshift-marketplace/redhat-marketplace-bkx8c" Mar 19 11:54:45.492209 master-0 kubenswrapper[7454]: I0319 11:54:45.492194 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db75b266-69c4-4790-82f1-43168b5bb6a0-catalog-content\") pod \"redhat-marketplace-bkx8c\" (UID: \"db75b266-69c4-4790-82f1-43168b5bb6a0\") " pod="openshift-marketplace/redhat-marketplace-bkx8c" Mar 19 11:54:45.503948 master-0 kubenswrapper[7454]: I0319 11:54:45.497112 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db75b266-69c4-4790-82f1-43168b5bb6a0-catalog-content\") pod \"redhat-marketplace-bkx8c\" (UID: \"db75b266-69c4-4790-82f1-43168b5bb6a0\") " pod="openshift-marketplace/redhat-marketplace-bkx8c" Mar 19 11:54:45.503948 master-0 kubenswrapper[7454]: I0319 11:54:45.497186 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db75b266-69c4-4790-82f1-43168b5bb6a0-utilities\") pod \"redhat-marketplace-bkx8c\" (UID: \"db75b266-69c4-4790-82f1-43168b5bb6a0\") " 
pod="openshift-marketplace/redhat-marketplace-bkx8c" Mar 19 11:54:45.508495 master-0 kubenswrapper[7454]: I0319 11:54:45.508433 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjz8g\" (UniqueName: \"kubernetes.io/projected/db75b266-69c4-4790-82f1-43168b5bb6a0-kube-api-access-pjz8g\") pod \"redhat-marketplace-bkx8c\" (UID: \"db75b266-69c4-4790-82f1-43168b5bb6a0\") " pod="openshift-marketplace/redhat-marketplace-bkx8c" Mar 19 11:54:45.576514 master-0 kubenswrapper[7454]: I0319 11:54:45.576343 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bkx8c" Mar 19 11:54:45.721865 master-0 kubenswrapper[7454]: I0319 11:54:45.715733 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9"] Mar 19 11:54:45.808907 master-0 kubenswrapper[7454]: I0319 11:54:45.808859 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cdddc6cb-q222c"] Mar 19 11:54:45.922025 master-0 kubenswrapper[7454]: I0319 11:54:45.921978 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rdpvm"] Mar 19 11:54:45.922779 master-0 kubenswrapper[7454]: I0319 11:54:45.922763 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rdpvm" Mar 19 11:54:45.929650 master-0 kubenswrapper[7454]: I0319 11:54:45.929609 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 19 11:54:45.932779 master-0 kubenswrapper[7454]: I0319 11:54:45.932745 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rdpvm"] Mar 19 11:54:46.052322 master-0 kubenswrapper[7454]: I0319 11:54:46.052275 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bkx8c"] Mar 19 11:54:46.117453 master-0 kubenswrapper[7454]: I0319 11:54:46.117416 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/311b8bab-6cee-406d-8e0e-5b18a743d5fa-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-rdpvm\" (UID: \"311b8bab-6cee-406d-8e0e-5b18a743d5fa\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rdpvm" Mar 19 11:54:46.117645 master-0 kubenswrapper[7454]: I0319 11:54:46.117495 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/311b8bab-6cee-406d-8e0e-5b18a743d5fa-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-rdpvm\" (UID: \"311b8bab-6cee-406d-8e0e-5b18a743d5fa\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rdpvm" Mar 19 11:54:46.117645 master-0 kubenswrapper[7454]: I0319 11:54:46.117541 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjfpq\" (UniqueName: \"kubernetes.io/projected/311b8bab-6cee-406d-8e0e-5b18a743d5fa-kube-api-access-hjfpq\") pod \"machine-config-controller-b4f87c5b9-rdpvm\" (UID: \"311b8bab-6cee-406d-8e0e-5b18a743d5fa\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rdpvm" Mar 19 
11:54:46.145066 master-0 kubenswrapper[7454]: I0319 11:54:46.144486 7454 generic.go:334] "Generic (PLEG): container finished" podID="1370cf76-52c4-4f19-8dfc-794f2901f8a6" containerID="a96fd225dafcf97e7d05dad471494b2e0d85fd2d1d63677ffa5677e1fef31cd9" exitCode=0 Mar 19 11:54:46.145066 master-0 kubenswrapper[7454]: I0319 11:54:46.144560 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-flnbx" event={"ID":"1370cf76-52c4-4f19-8dfc-794f2901f8a6","Type":"ContainerDied","Data":"a96fd225dafcf97e7d05dad471494b2e0d85fd2d1d63677ffa5677e1fef31cd9"} Mar 19 11:54:46.166145 master-0 kubenswrapper[7454]: I0319 11:54:46.165306 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkx8c" event={"ID":"db75b266-69c4-4790-82f1-43168b5bb6a0","Type":"ContainerStarted","Data":"752facb6414da1569fad0463b07e934509c70b6b2be4eded4b6f87f247f658ac"} Mar 19 11:54:46.185925 master-0 kubenswrapper[7454]: I0319 11:54:46.182625 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" event={"ID":"3a6b082a-649b-43f6-8e24-cf222873fe39","Type":"ContainerStarted","Data":"1934bc0b600f1e74a406788cec8a674a8b6f1a56fe70fd8bd4ae9f2fb2ad6292"} Mar 19 11:54:46.185925 master-0 kubenswrapper[7454]: I0319 11:54:46.182684 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" event={"ID":"3a6b082a-649b-43f6-8e24-cf222873fe39","Type":"ContainerStarted","Data":"b31a84101a7e9f8571fe0abea4a9c0ac92d862991255d66df670219d8949bf71"} Mar 19 11:54:46.186318 master-0 kubenswrapper[7454]: I0319 11:54:46.186266 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 11:54:46.190329 master-0 kubenswrapper[7454]: I0319 11:54:46.190250 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 11:54:46.193231 master-0 kubenswrapper[7454]: I0319 11:54:46.193175 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb" event={"ID":"be4349fa-5c67-4135-80a7-b8a694553662","Type":"ContainerStarted","Data":"85ed4f11ce3c4c3b6e708f240f1147528c782e5b97dd8d5879f9092b80a4794e"} Mar 19 11:54:46.193231 master-0 kubenswrapper[7454]: I0319 11:54:46.193229 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb" event={"ID":"be4349fa-5c67-4135-80a7-b8a694553662","Type":"ContainerStarted","Data":"593c680a830380526e444778c9d64ee368aed54b01a56b5393d8626c11e75704"} Mar 19 11:54:46.193569 master-0 kubenswrapper[7454]: I0319 11:54:46.193539 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb" Mar 19 11:54:46.215743 master-0 kubenswrapper[7454]: I0319 11:54:46.215319 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 19 11:54:46.217653 master-0 kubenswrapper[7454]: I0319 11:54:46.216225 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9" event={"ID":"da9becfb-a504-4ef7-92ed-cd2db439d5db","Type":"ContainerStarted","Data":"2d813a15fdfae4a519455f4052abe2653657dc79015833917eccfbaa2776f015"} Mar 
19 11:54:46.217653 master-0 kubenswrapper[7454]: I0319 11:54:46.216260 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9" event={"ID":"da9becfb-a504-4ef7-92ed-cd2db439d5db","Type":"ContainerStarted","Data":"fa112877e7809f3added7e93999d2d52089456dfb6885e6498c6e53ce0c53ded"} Mar 19 11:54:46.217653 master-0 kubenswrapper[7454]: I0319 11:54:46.216345 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 19 11:54:46.217653 master-0 kubenswrapper[7454]: I0319 11:54:46.216637 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9" Mar 19 11:54:46.218332 master-0 kubenswrapper[7454]: I0319 11:54:46.218107 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/311b8bab-6cee-406d-8e0e-5b18a743d5fa-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-rdpvm\" (UID: \"311b8bab-6cee-406d-8e0e-5b18a743d5fa\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rdpvm" Mar 19 11:54:46.218332 master-0 kubenswrapper[7454]: I0319 11:54:46.218157 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjfpq\" (UniqueName: \"kubernetes.io/projected/311b8bab-6cee-406d-8e0e-5b18a743d5fa-kube-api-access-hjfpq\") pod \"machine-config-controller-b4f87c5b9-rdpvm\" (UID: \"311b8bab-6cee-406d-8e0e-5b18a743d5fa\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rdpvm" Mar 19 11:54:46.218332 master-0 kubenswrapper[7454]: I0319 11:54:46.218183 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/311b8bab-6cee-406d-8e0e-5b18a743d5fa-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-rdpvm\" (UID: \"311b8bab-6cee-406d-8e0e-5b18a743d5fa\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rdpvm" Mar 19 11:54:46.219654 master-0 kubenswrapper[7454]: I0319 11:54:46.219631 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/311b8bab-6cee-406d-8e0e-5b18a743d5fa-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-rdpvm\" (UID: \"311b8bab-6cee-406d-8e0e-5b18a743d5fa\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rdpvm" Mar 19 11:54:46.222774 master-0 kubenswrapper[7454]: I0319 11:54:46.222610 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/311b8bab-6cee-406d-8e0e-5b18a743d5fa-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-rdpvm\" (UID: \"311b8bab-6cee-406d-8e0e-5b18a743d5fa\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rdpvm" Mar 19 11:54:46.232211 master-0 kubenswrapper[7454]: I0319 11:54:46.228404 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" podStartSLOduration=16.228388648 podStartE2EDuration="16.228388648s" podCreationTimestamp="2026-03-19 11:54:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 11:54:46.224838107 +0000 
UTC m=+55.855304040" watchObservedRunningTime="2026-03-19 11:54:46.228388648 +0000 UTC m=+55.858854561" Mar 19 11:54:46.239190 master-0 kubenswrapper[7454]: I0319 11:54:46.239154 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 19 11:54:46.264038 master-0 kubenswrapper[7454]: I0319 11:54:46.263990 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb" Mar 19 11:54:46.265972 master-0 kubenswrapper[7454]: I0319 11:54:46.265920 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjfpq\" (UniqueName: \"kubernetes.io/projected/311b8bab-6cee-406d-8e0e-5b18a743d5fa-kube-api-access-hjfpq\") pod \"machine-config-controller-b4f87c5b9-rdpvm\" (UID: \"311b8bab-6cee-406d-8e0e-5b18a743d5fa\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rdpvm" Mar 19 11:54:46.270939 master-0 kubenswrapper[7454]: I0319 11:54:46.270789 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb" podStartSLOduration=2.270766445 podStartE2EDuration="2.270766445s" podCreationTimestamp="2026-03-19 11:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 11:54:46.270439325 +0000 UTC m=+55.900905238" watchObservedRunningTime="2026-03-19 11:54:46.270766445 +0000 UTC m=+55.901232358" Mar 19 11:54:46.281882 master-0 kubenswrapper[7454]: I0319 11:54:46.280727 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rdpvm" Mar 19 11:54:46.302929 master-0 kubenswrapper[7454]: I0319 11:54:46.300321 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9" podStartSLOduration=16.300301480999998 podStartE2EDuration="16.300301481s" podCreationTimestamp="2026-03-19 11:54:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 11:54:46.291875127 +0000 UTC m=+55.922341060" watchObservedRunningTime="2026-03-19 11:54:46.300301481 +0000 UTC m=+55.930767394" Mar 19 11:54:46.319158 master-0 kubenswrapper[7454]: I0319 11:54:46.319105 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2e4442dc-19e2-42a3-b5d9-7af7765b1939-var-lock\") pod \"installer-2-master-0\" (UID: \"2e4442dc-19e2-42a3-b5d9-7af7765b1939\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 19 11:54:46.319714 master-0 kubenswrapper[7454]: I0319 11:54:46.319678 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e4442dc-19e2-42a3-b5d9-7af7765b1939-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"2e4442dc-19e2-42a3-b5d9-7af7765b1939\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 19 11:54:46.319777 master-0 kubenswrapper[7454]: I0319 11:54:46.319739 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/2e4442dc-19e2-42a3-b5d9-7af7765b1939-kube-api-access\") pod \"installer-2-master-0\" (UID: \"2e4442dc-19e2-42a3-b5d9-7af7765b1939\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 19 11:54:46.393051 master-0 kubenswrapper[7454]: I0319 11:54:46.393008 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9" Mar 19 11:54:46.425887 master-0 kubenswrapper[7454]: I0319 11:54:46.420810 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2e4442dc-19e2-42a3-b5d9-7af7765b1939-var-lock\") pod \"installer-2-master-0\" (UID: \"2e4442dc-19e2-42a3-b5d9-7af7765b1939\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 19 11:54:46.425887 master-0 kubenswrapper[7454]: I0319 11:54:46.420858 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e4442dc-19e2-42a3-b5d9-7af7765b1939-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"2e4442dc-19e2-42a3-b5d9-7af7765b1939\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 19 11:54:46.425887 master-0 kubenswrapper[7454]: I0319 11:54:46.420876 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e4442dc-19e2-42a3-b5d9-7af7765b1939-kube-api-access\") pod \"installer-2-master-0\" (UID: \"2e4442dc-19e2-42a3-b5d9-7af7765b1939\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 19 11:54:46.425887 master-0 kubenswrapper[7454]: I0319 11:54:46.421159 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2e4442dc-19e2-42a3-b5d9-7af7765b1939-var-lock\") pod \"installer-2-master-0\" (UID: \"2e4442dc-19e2-42a3-b5d9-7af7765b1939\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 19 11:54:46.425887 master-0 kubenswrapper[7454]: I0319 11:54:46.421204 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e4442dc-19e2-42a3-b5d9-7af7765b1939-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"2e4442dc-19e2-42a3-b5d9-7af7765b1939\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 19 11:54:46.454667 master-0 kubenswrapper[7454]: I0319 11:54:46.454621 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e4442dc-19e2-42a3-b5d9-7af7765b1939-kube-api-access\") pod \"installer-2-master-0\" (UID: \"2e4442dc-19e2-42a3-b5d9-7af7765b1939\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 19 11:54:46.569677 master-0 kubenswrapper[7454]: I0319 11:54:46.569626 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 19 11:54:46.734074 master-0 kubenswrapper[7454]: I0319 11:54:46.727501 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rdpvm"] Mar 19 11:54:47.097668 master-0 kubenswrapper[7454]: I0319 11:54:47.097521 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 19 11:54:47.264897 master-0 kubenswrapper[7454]: I0319 11:54:47.263680 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-z8xf6"] Mar 19 11:54:47.268182 master-0 kubenswrapper[7454]: I0319 11:54:47.268084 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-z8xf6" Mar 19 11:54:47.270565 master-0 kubenswrapper[7454]: I0319 11:54:47.270532 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 19 11:54:47.270770 master-0 kubenswrapper[7454]: I0319 11:54:47.270735 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"2e4442dc-19e2-42a3-b5d9-7af7765b1939","Type":"ContainerStarted","Data":"cfaade6a812c1fae7dc2bc47f01477e66bb0563b115dfa8becda8b83dc0a10b7"} Mar 19 11:54:47.282191 master-0 kubenswrapper[7454]: I0319 11:54:47.277933 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-7dcf5569b5-lkpgl"] Mar 19 11:54:47.282191 master-0 kubenswrapper[7454]: I0319 11:54:47.278596 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 11:54:47.282459 master-0 kubenswrapper[7454]: I0319 11:54:47.282242 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 19 11:54:47.282459 master-0 kubenswrapper[7454]: I0319 11:54:47.282427 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 19 11:54:47.282551 master-0 kubenswrapper[7454]: I0319 11:54:47.282236 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 19 11:54:47.282608 master-0 kubenswrapper[7454]: I0319 11:54:47.282600 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 19 11:54:47.282679 master-0 kubenswrapper[7454]: I0319 11:54:47.282661 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 19 11:54:47.282725 master-0 kubenswrapper[7454]: I0319 11:54:47.282689 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 19 11:54:47.287150 master-0 kubenswrapper[7454]: I0319 11:54:47.286563 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rdpvm" event={"ID":"311b8bab-6cee-406d-8e0e-5b18a743d5fa","Type":"ContainerStarted","Data":"92146f5206ba4af3dcab747b3b3365816fae1c4fa84ae25f4e8444b11bfc04c8"} Mar 19 11:54:47.287150 master-0 kubenswrapper[7454]: I0319 11:54:47.286614 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rdpvm" event={"ID":"311b8bab-6cee-406d-8e0e-5b18a743d5fa","Type":"ContainerStarted","Data":"d18987c9d1d7090b8b91208e0a73136c08533b394e06d167d40679025e9ca39d"} Mar 19 11:54:47.287150 master-0 kubenswrapper[7454]: I0319 11:54:47.286625 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rdpvm" event={"ID":"311b8bab-6cee-406d-8e0e-5b18a743d5fa","Type":"ContainerStarted","Data":"ea807ec97b5b85d57bfd1e0adda9e020d25ab20667140eb00ae9510d72b84498"} Mar 19 11:54:47.288082 master-0 kubenswrapper[7454]: I0319 11:54:47.288044 7454 generic.go:334] "Generic (PLEG): container finished" podID="db75b266-69c4-4790-82f1-43168b5bb6a0" containerID="2c55cfbfdd95ef2dfe0541bb2247f3e0696ee475fccd5a1a3f51314f177793a7" exitCode=0 Mar 19 11:54:47.288973 master-0 kubenswrapper[7454]: I0319 11:54:47.288934 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkx8c" event={"ID":"db75b266-69c4-4790-82f1-43168b5bb6a0","Type":"ContainerDied","Data":"2c55cfbfdd95ef2dfe0541bb2247f3e0696ee475fccd5a1a3f51314f177793a7"} Mar 19 11:54:47.304133 master-0 kubenswrapper[7454]: I0319 11:54:47.304084 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-b4bf74f6-6dmt7"] Mar 19 11:54:47.304696 master-0 kubenswrapper[7454]: I0319 11:54:47.304665 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-z8xf6"] Mar 19 11:54:47.304952 master-0 kubenswrapper[7454]: I0319 11:54:47.304742 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-6dmt7" Mar 19 11:54:47.321330 master-0 kubenswrapper[7454]: I0319 11:54:47.317588 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-b4bf74f6-6dmt7"] Mar 19 11:54:47.357702 master-0 kubenswrapper[7454]: I0319 11:54:47.357566 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/91112ce6-4f9d-44c1-a4e7-fea126554bcf-metrics-certs\") pod \"router-default-7dcf5569b5-lkpgl\" (UID: \"91112ce6-4f9d-44c1-a4e7-fea126554bcf\") " pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 11:54:47.357702 master-0 kubenswrapper[7454]: I0319 11:54:47.357619 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91112ce6-4f9d-44c1-a4e7-fea126554bcf-service-ca-bundle\") pod \"router-default-7dcf5569b5-lkpgl\" (UID: \"91112ce6-4f9d-44c1-a4e7-fea126554bcf\") " pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 11:54:47.357702 master-0 kubenswrapper[7454]: I0319 11:54:47.357645 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hrkb\" (UniqueName: \"kubernetes.io/projected/91112ce6-4f9d-44c1-a4e7-fea126554bcf-kube-api-access-8hrkb\") pod \"router-default-7dcf5569b5-lkpgl\" (UID: \"91112ce6-4f9d-44c1-a4e7-fea126554bcf\") " pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 11:54:47.357702 master-0 kubenswrapper[7454]: I0319 11:54:47.357688 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/91112ce6-4f9d-44c1-a4e7-fea126554bcf-stats-auth\") pod \"router-default-7dcf5569b5-lkpgl\" (UID: \"91112ce6-4f9d-44c1-a4e7-fea126554bcf\") " pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 11:54:47.357998 master-0 kubenswrapper[7454]: I0319 11:54:47.357730 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/882fd952-1914-47be-96bf-cac6341ca877-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-z8xf6\" (UID: \"882fd952-1914-47be-96bf-cac6341ca877\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-z8xf6" Mar 19 11:54:47.357998 master-0 kubenswrapper[7454]: I0319 11:54:47.357868 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/91112ce6-4f9d-44c1-a4e7-fea126554bcf-default-certificate\") pod \"router-default-7dcf5569b5-lkpgl\" (UID: \"91112ce6-4f9d-44c1-a4e7-fea126554bcf\") " pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 11:54:47.391601 master-0 kubenswrapper[7454]: I0319 11:54:47.391062 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rdpvm" podStartSLOduration=2.391037802 podStartE2EDuration="2.391037802s" podCreationTimestamp="2026-03-19 11:54:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 11:54:47.388183003 +0000 UTC m=+57.018648936" watchObservedRunningTime="2026-03-19 11:54:47.391037802 
+0000 UTC m=+57.021503725" Mar 19 11:54:47.459328 master-0 kubenswrapper[7454]: I0319 11:54:47.458779 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/91112ce6-4f9d-44c1-a4e7-fea126554bcf-metrics-certs\") pod \"router-default-7dcf5569b5-lkpgl\" (UID: \"91112ce6-4f9d-44c1-a4e7-fea126554bcf\") " pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 11:54:47.459328 master-0 kubenswrapper[7454]: I0319 11:54:47.458857 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91112ce6-4f9d-44c1-a4e7-fea126554bcf-service-ca-bundle\") pod \"router-default-7dcf5569b5-lkpgl\" (UID: \"91112ce6-4f9d-44c1-a4e7-fea126554bcf\") " pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 11:54:47.459328 master-0 kubenswrapper[7454]: I0319 11:54:47.458874 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hrkb\" (UniqueName: \"kubernetes.io/projected/91112ce6-4f9d-44c1-a4e7-fea126554bcf-kube-api-access-8hrkb\") pod \"router-default-7dcf5569b5-lkpgl\" (UID: \"91112ce6-4f9d-44c1-a4e7-fea126554bcf\") " pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 11:54:47.459328 master-0 kubenswrapper[7454]: I0319 11:54:47.459133 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/91112ce6-4f9d-44c1-a4e7-fea126554bcf-stats-auth\") pod \"router-default-7dcf5569b5-lkpgl\" (UID: \"91112ce6-4f9d-44c1-a4e7-fea126554bcf\") " pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 11:54:47.459328 master-0 kubenswrapper[7454]: I0319 11:54:47.459217 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/882fd952-1914-47be-96bf-cac6341ca877-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-z8xf6\" (UID: \"882fd952-1914-47be-96bf-cac6341ca877\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-z8xf6" Mar 19 11:54:47.459328 master-0 kubenswrapper[7454]: I0319 11:54:47.459235 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/91112ce6-4f9d-44c1-a4e7-fea126554bcf-default-certificate\") pod \"router-default-7dcf5569b5-lkpgl\" (UID: \"91112ce6-4f9d-44c1-a4e7-fea126554bcf\") " pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 11:54:47.461929 master-0 kubenswrapper[7454]: I0319 11:54:47.461680 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvvk8\" (UniqueName: \"kubernetes.io/projected/0316c374-f812-4e0a-8645-727e8372f16e-kube-api-access-tvvk8\") pod \"network-check-source-b4bf74f6-6dmt7\" (UID: \"0316c374-f812-4e0a-8645-727e8372f16e\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-6dmt7" Mar 19 11:54:47.462637 master-0 kubenswrapper[7454]: I0319 11:54:47.462558 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91112ce6-4f9d-44c1-a4e7-fea126554bcf-service-ca-bundle\") pod \"router-default-7dcf5569b5-lkpgl\" (UID: \"91112ce6-4f9d-44c1-a4e7-fea126554bcf\") " pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 11:54:47.463707 master-0 kubenswrapper[7454]: I0319 11:54:47.463614 7454 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/91112ce6-4f9d-44c1-a4e7-fea126554bcf-metrics-certs\") pod \"router-default-7dcf5569b5-lkpgl\" (UID: \"91112ce6-4f9d-44c1-a4e7-fea126554bcf\") " pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 11:54:47.467351 master-0 kubenswrapper[7454]: I0319 11:54:47.467125 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/882fd952-1914-47be-96bf-cac6341ca877-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-z8xf6\" (UID: \"882fd952-1914-47be-96bf-cac6341ca877\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-z8xf6" Mar 19 11:54:47.471783 master-0 kubenswrapper[7454]: I0319 11:54:47.471750 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/91112ce6-4f9d-44c1-a4e7-fea126554bcf-default-certificate\") pod \"router-default-7dcf5569b5-lkpgl\" (UID: \"91112ce6-4f9d-44c1-a4e7-fea126554bcf\") " pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 11:54:47.484563 master-0 kubenswrapper[7454]: I0319 11:54:47.484357 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/91112ce6-4f9d-44c1-a4e7-fea126554bcf-stats-auth\") pod \"router-default-7dcf5569b5-lkpgl\" (UID: \"91112ce6-4f9d-44c1-a4e7-fea126554bcf\") " pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 11:54:47.488741 master-0 kubenswrapper[7454]: I0319 11:54:47.488707 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hrkb\" (UniqueName: \"kubernetes.io/projected/91112ce6-4f9d-44c1-a4e7-fea126554bcf-kube-api-access-8hrkb\") pod \"router-default-7dcf5569b5-lkpgl\" (UID: \"91112ce6-4f9d-44c1-a4e7-fea126554bcf\") " pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 11:54:47.547872 master-0 kubenswrapper[7454]: I0319 11:54:47.546860 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:47.551322 master-0 kubenswrapper[7454]: I0319 11:54:47.550498 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:47.563419 master-0 kubenswrapper[7454]: I0319 11:54:47.563348 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvvk8\" (UniqueName: \"kubernetes.io/projected/0316c374-f812-4e0a-8645-727e8372f16e-kube-api-access-tvvk8\") pod \"network-check-source-b4bf74f6-6dmt7\" (UID: \"0316c374-f812-4e0a-8645-727e8372f16e\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-6dmt7" Mar 19 11:54:47.565314 master-0 kubenswrapper[7454]: I0319 11:54:47.564257 7454 patch_prober.go:28] interesting pod/apiserver-897cc986b-vpg2l container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 19 11:54:47.565314 master-0 kubenswrapper[7454]: [+]log ok Mar 19 11:54:47.565314 master-0 kubenswrapper[7454]: [+]etcd ok Mar 19 11:54:47.565314 master-0 kubenswrapper[7454]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 19 11:54:47.565314 master-0 kubenswrapper[7454]: [+]poststarthook/generic-apiserver-start-informers ok Mar 19 11:54:47.565314 master-0 
kubenswrapper[7454]: [+]poststarthook/max-in-flight-filter ok Mar 19 11:54:47.565314 master-0 kubenswrapper[7454]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 19 11:54:47.565314 master-0 kubenswrapper[7454]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 19 11:54:47.565314 master-0 kubenswrapper[7454]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 19 11:54:47.565314 master-0 kubenswrapper[7454]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Mar 19 11:54:47.565314 master-0 kubenswrapper[7454]: [+]poststarthook/project.openshift.io-projectcache ok Mar 19 11:54:47.565314 master-0 kubenswrapper[7454]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 19 11:54:47.565314 master-0 kubenswrapper[7454]: [+]poststarthook/openshift.io-startinformers ok Mar 19 11:54:47.565314 master-0 kubenswrapper[7454]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 19 11:54:47.565314 master-0 kubenswrapper[7454]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 19 11:54:47.565314 master-0 kubenswrapper[7454]: livez check failed Mar 19 11:54:47.565314 master-0 kubenswrapper[7454]: I0319 11:54:47.564309 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-897cc986b-vpg2l" podUID="13503fef-09b2-4dbe-9537-a5b361e7b591" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:54:47.588440 master-0 kubenswrapper[7454]: I0319 11:54:47.587006 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvvk8\" (UniqueName: \"kubernetes.io/projected/0316c374-f812-4e0a-8645-727e8372f16e-kube-api-access-tvvk8\") pod \"network-check-source-b4bf74f6-6dmt7\" (UID: \"0316c374-f812-4e0a-8645-727e8372f16e\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-6dmt7" Mar 19 11:54:47.612175 master-0 kubenswrapper[7454]: I0319 11:54:47.611624 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-z8xf6" Mar 19 11:54:47.656373 master-0 kubenswrapper[7454]: I0319 11:54:47.655901 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 11:54:47.699384 master-0 kubenswrapper[7454]: I0319 11:54:47.698138 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-6dmt7" Mar 19 11:54:47.951018 master-0 kubenswrapper[7454]: I0319 11:54:47.949705 7454 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 19 11:54:48.069478 master-0 kubenswrapper[7454]: I0319 11:54:48.069432 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-z8xf6"] Mar 19 11:54:48.077198 master-0 kubenswrapper[7454]: W0319 11:54:48.077162 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod882fd952_1914_47be_96bf_cac6341ca877.slice/crio-3756314b5f9faad34dff96625b9ef78c27d73db523c30a3f82a5ea254d67fd72 WatchSource:0}: Error finding container 3756314b5f9faad34dff96625b9ef78c27d73db523c30a3f82a5ea254d67fd72: Status 404 returned error can't find the container with id 3756314b5f9faad34dff96625b9ef78c27d73db523c30a3f82a5ea254d67fd72 Mar 19 11:54:48.190571 master-0 kubenswrapper[7454]: I0319 11:54:48.190534 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-b4bf74f6-6dmt7"] Mar 19 11:54:48.199402 master-0 kubenswrapper[7454]: W0319 11:54:48.199363 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0316c374_f812_4e0a_8645_727e8372f16e.slice/crio-dc31fac048987256095251eb1c41dfbd7ba8f1030acd608588347d150bf4c3c7 WatchSource:0}: Error finding container dc31fac048987256095251eb1c41dfbd7ba8f1030acd608588347d150bf4c3c7: Status 404 returned error can't find the container with id dc31fac048987256095251eb1c41dfbd7ba8f1030acd608588347d150bf4c3c7 Mar 19 11:54:48.300566 master-0 kubenswrapper[7454]: I0319 11:54:48.300210 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"2e4442dc-19e2-42a3-b5d9-7af7765b1939","Type":"ContainerStarted","Data":"01fb0bb7c58b7c7fb9f4e6423408b3fdefa74b9c0303c15e18382b768dd8f028"} Mar 19 11:54:48.303159 master-0 kubenswrapper[7454]: I0319 11:54:48.303107 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" event={"ID":"91112ce6-4f9d-44c1-a4e7-fea126554bcf","Type":"ContainerStarted","Data":"27514f785ebf129e635b61742d2a50f4b4590a69d29ba2f3c58ee430e3465119"} Mar 19 11:54:48.305623 master-0 kubenswrapper[7454]: I0319 11:54:48.305571 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-6dmt7" event={"ID":"0316c374-f812-4e0a-8645-727e8372f16e","Type":"ContainerStarted","Data":"dc31fac048987256095251eb1c41dfbd7ba8f1030acd608588347d150bf4c3c7"} Mar 19 11:54:48.307843 master-0 kubenswrapper[7454]: I0319 11:54:48.307753 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-z8xf6" event={"ID":"882fd952-1914-47be-96bf-cac6341ca877","Type":"ContainerStarted","Data":"3756314b5f9faad34dff96625b9ef78c27d73db523c30a3f82a5ea254d67fd72"} Mar 19 11:54:48.405949 master-0 kubenswrapper[7454]: I0319 11:54:48.405863 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-master-0" podStartSLOduration=2.405837005 podStartE2EDuration="2.405837005s" podCreationTimestamp="2026-03-19 11:54:46 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 11:54:48.401412737 +0000 UTC m=+58.031878650" watchObservedRunningTime="2026-03-19 11:54:48.405837005 +0000 UTC m=+58.036302918" Mar 19 11:54:49.313905 master-0 kubenswrapper[7454]: I0319 11:54:49.313835 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-6dmt7" event={"ID":"0316c374-f812-4e0a-8645-727e8372f16e","Type":"ContainerStarted","Data":"29f83826782037d3aa62f54384313a17095f2cfbaba13ab4a86e3e3ac942e8dd"} Mar 19 11:54:49.519821 master-0 kubenswrapper[7454]: I0319 11:54:49.519642 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-6dmt7" podStartSLOduration=119.51962019 podStartE2EDuration="1m59.51962019s" podCreationTimestamp="2026-03-19 11:52:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 11:54:49.519097644 +0000 UTC m=+59.149563567" watchObservedRunningTime="2026-03-19 11:54:49.51962019 +0000 UTC m=+59.150086103" Mar 19 11:54:50.051480 master-0 kubenswrapper[7454]: I0319 11:54:50.051436 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-zjdkm" Mar 19 11:54:50.372932 master-0 kubenswrapper[7454]: I0319 11:54:50.372596 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-g7mqg"] Mar 19 11:54:50.382663 master-0 kubenswrapper[7454]: I0319 11:54:50.380499 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-g7mqg" Mar 19 11:54:50.385668 master-0 kubenswrapper[7454]: I0319 11:54:50.384753 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-h5t8s" Mar 19 11:54:50.385668 master-0 kubenswrapper[7454]: I0319 11:54:50.385055 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 19 11:54:50.385668 master-0 kubenswrapper[7454]: I0319 11:54:50.385517 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 19 11:54:50.446585 master-0 kubenswrapper[7454]: I0319 11:54:50.444949 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0e25d4ed-4ad0-4706-ad25-7822c9a1d07e-node-bootstrap-token\") pod \"machine-config-server-g7mqg\" (UID: \"0e25d4ed-4ad0-4706-ad25-7822c9a1d07e\") " pod="openshift-machine-config-operator/machine-config-server-g7mqg" Mar 19 11:54:50.446585 master-0 kubenswrapper[7454]: I0319 11:54:50.445044 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0e25d4ed-4ad0-4706-ad25-7822c9a1d07e-certs\") pod \"machine-config-server-g7mqg\" (UID: \"0e25d4ed-4ad0-4706-ad25-7822c9a1d07e\") " pod="openshift-machine-config-operator/machine-config-server-g7mqg" Mar 19 11:54:50.446585 master-0 kubenswrapper[7454]: I0319 11:54:50.445155 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9k5t\" (UniqueName: 
\"kubernetes.io/projected/0e25d4ed-4ad0-4706-ad25-7822c9a1d07e-kube-api-access-r9k5t\") pod \"machine-config-server-g7mqg\" (UID: \"0e25d4ed-4ad0-4706-ad25-7822c9a1d07e\") " pod="openshift-machine-config-operator/machine-config-server-g7mqg" Mar 19 11:54:50.547900 master-0 kubenswrapper[7454]: I0319 11:54:50.547380 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9k5t\" (UniqueName: \"kubernetes.io/projected/0e25d4ed-4ad0-4706-ad25-7822c9a1d07e-kube-api-access-r9k5t\") pod \"machine-config-server-g7mqg\" (UID: \"0e25d4ed-4ad0-4706-ad25-7822c9a1d07e\") " pod="openshift-machine-config-operator/machine-config-server-g7mqg" Mar 19 11:54:50.547900 master-0 kubenswrapper[7454]: I0319 11:54:50.547506 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0e25d4ed-4ad0-4706-ad25-7822c9a1d07e-node-bootstrap-token\") pod \"machine-config-server-g7mqg\" (UID: \"0e25d4ed-4ad0-4706-ad25-7822c9a1d07e\") " pod="openshift-machine-config-operator/machine-config-server-g7mqg" Mar 19 11:54:50.547900 master-0 kubenswrapper[7454]: I0319 11:54:50.547547 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0e25d4ed-4ad0-4706-ad25-7822c9a1d07e-certs\") pod \"machine-config-server-g7mqg\" (UID: \"0e25d4ed-4ad0-4706-ad25-7822c9a1d07e\") " pod="openshift-machine-config-operator/machine-config-server-g7mqg" Mar 19 11:54:50.553024 master-0 kubenswrapper[7454]: I0319 11:54:50.552985 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0e25d4ed-4ad0-4706-ad25-7822c9a1d07e-node-bootstrap-token\") pod \"machine-config-server-g7mqg\" (UID: \"0e25d4ed-4ad0-4706-ad25-7822c9a1d07e\") " pod="openshift-machine-config-operator/machine-config-server-g7mqg" Mar 19 11:54:50.556376 master-0 kubenswrapper[7454]: I0319 11:54:50.556319 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0e25d4ed-4ad0-4706-ad25-7822c9a1d07e-certs\") pod \"machine-config-server-g7mqg\" (UID: \"0e25d4ed-4ad0-4706-ad25-7822c9a1d07e\") " pod="openshift-machine-config-operator/machine-config-server-g7mqg" Mar 19 11:54:50.567563 master-0 kubenswrapper[7454]: I0319 11:54:50.566269 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9k5t\" (UniqueName: \"kubernetes.io/projected/0e25d4ed-4ad0-4706-ad25-7822c9a1d07e-kube-api-access-r9k5t\") pod \"machine-config-server-g7mqg\" (UID: \"0e25d4ed-4ad0-4706-ad25-7822c9a1d07e\") " pod="openshift-machine-config-operator/machine-config-server-g7mqg" Mar 19 11:54:50.737535 master-0 kubenswrapper[7454]: I0319 11:54:50.737486 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-h5t8s" Mar 19 11:54:50.742900 master-0 kubenswrapper[7454]: I0319 11:54:50.742860 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-g7mqg" Mar 19 11:54:52.550746 master-0 kubenswrapper[7454]: I0319 11:54:52.550699 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:52.555602 master-0 kubenswrapper[7454]: I0319 11:54:52.555559 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 11:54:54.040156 master-0 kubenswrapper[7454]: I0319 11:54:54.039984 7454 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 19 11:54:54.044998 master-0 kubenswrapper[7454]: I0319 11:54:54.044958 7454 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 19 11:54:54.045272 master-0 kubenswrapper[7454]: E0319 11:54:54.045237 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd" Mar 19 11:54:54.045272 master-0 kubenswrapper[7454]: I0319 11:54:54.045259 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd" Mar 19 11:54:54.045358 master-0 kubenswrapper[7454]: E0319 11:54:54.045276 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl" Mar 19 11:54:54.045358 master-0 kubenswrapper[7454]: I0319 11:54:54.045285 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl" Mar 19 11:54:54.045420 master-0 kubenswrapper[7454]: I0319 11:54:54.045405 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd" Mar 19 11:54:54.045420 master-0 kubenswrapper[7454]: I0319 11:54:54.045418 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl" Mar 19 11:54:54.047730 master-0 kubenswrapper[7454]: I0319 11:54:54.047705 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 19 11:54:54.052360 master-0 kubenswrapper[7454]: I0319 11:54:54.051864 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 11:54:54.052360 master-0 kubenswrapper[7454]: I0319 11:54:54.051909 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 11:54:54.052360 master-0 kubenswrapper[7454]: I0319 11:54:54.051937 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 11:54:54.052360 master-0 kubenswrapper[7454]: I0319 11:54:54.052196 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 11:54:54.052575 master-0 kubenswrapper[7454]: I0319 11:54:54.052432 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 11:54:54.052779 master-0 kubenswrapper[7454]: I0319 11:54:54.052727 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 11:54:54.155144 master-0 kubenswrapper[7454]: I0319 11:54:54.154712 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 11:54:54.155144 master-0 kubenswrapper[7454]: I0319 11:54:54.154777 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 11:54:54.155144 master-0 kubenswrapper[7454]: I0319 11:54:54.154820 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 11:54:54.155144 master-0 kubenswrapper[7454]: I0319 11:54:54.154862 7454 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 11:54:54.155144 master-0 kubenswrapper[7454]: I0319 11:54:54.154908 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 11:54:54.155144 master-0 kubenswrapper[7454]: I0319 11:54:54.154938 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 11:54:54.155144 master-0 kubenswrapper[7454]: I0319 11:54:54.155075 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 11:54:54.155144 master-0 kubenswrapper[7454]: I0319 11:54:54.155079 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 11:54:54.155634 master-0 kubenswrapper[7454]: I0319 11:54:54.155328 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 11:54:54.155685 master-0 kubenswrapper[7454]: I0319 11:54:54.155401 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 11:54:54.155685 master-0 kubenswrapper[7454]: I0319 11:54:54.155657 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 11:54:54.155762 master-0 kubenswrapper[7454]: I0319 11:54:54.155468 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 11:54:54.607949 master-0 kubenswrapper[7454]: I0319 11:54:54.607546 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 19 11:54:54.619842 master-0 kubenswrapper[7454]: I0319 11:54:54.616469 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 19 11:54:54.731830 master-0 kubenswrapper[7454]: I0319 11:54:54.731771 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bkx8c"] Mar 19 11:54:54.740872 master-0 kubenswrapper[7454]: W0319 11:54:54.740552 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24b4ed170d527099878cb5fdd508a2fb.slice/crio-b16dac4ee3a03979e5d36fc02466273719d2d9a5c4e08c3c4d9859fdb912a95d WatchSource:0}: Error finding container b16dac4ee3a03979e5d36fc02466273719d2d9a5c4e08c3c4d9859fdb912a95d: Status 404 returned error can't find the container with id b16dac4ee3a03979e5d36fc02466273719d2d9a5c4e08c3c4d9859fdb912a95d Mar 19 11:54:54.835879 master-0 kubenswrapper[7454]: I0319 11:54:54.833753 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cjgpg"] Mar 19 11:54:54.835879 master-0 kubenswrapper[7454]: I0319 11:54:54.835553 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cjgpg" Mar 19 11:54:54.852893 master-0 kubenswrapper[7454]: I0319 11:54:54.852168 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-djzws" Mar 19 11:54:55.072713 master-0 kubenswrapper[7454]: I0319 11:54:55.072117 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-g7mqg" event={"ID":"0e25d4ed-4ad0-4706-ad25-7822c9a1d07e","Type":"ContainerStarted","Data":"6d678386c9d8ee3ccaf97160a5d644fc4f5d17544c6fb3d29d199b1c5b6b5add"} Mar 19 11:54:55.078603 master-0 kubenswrapper[7454]: I0319 11:54:55.077952 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"b16dac4ee3a03979e5d36fc02466273719d2d9a5c4e08c3c4d9859fdb912a95d"} Mar 19 11:54:55.081019 master-0 kubenswrapper[7454]: I0319 11:54:55.080388 7454 generic.go:334] "Generic (PLEG): container finished" podID="11f83dfb-da04-483f-b281-ebdb39f3ab27" containerID="b09cf9e92d522e2b105a0b4a4e50ff7409083b9260caed07cdd2a78e778f9e16" exitCode=0 Mar 19 11:54:55.081019 master-0 kubenswrapper[7454]: I0319 11:54:55.080579 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl" containerID="cri-o://1f3d2affaef2ec02f4c8910d24c25e42512158f294dc5c7de4fd47923e7552fe" gracePeriod=30 Mar 19 11:54:55.081019 master-0 kubenswrapper[7454]: I0319 11:54:55.080890 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"11f83dfb-da04-483f-b281-ebdb39f3ab27","Type":"ContainerDied","Data":"b09cf9e92d522e2b105a0b4a4e50ff7409083b9260caed07cdd2a78e778f9e16"} Mar 19 11:54:55.081019 master-0 kubenswrapper[7454]: I0319 11:54:55.080887 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd" containerID="cri-o://5966cfc73c8dc098af4cd51014c727f29c95ad4372f9a6d75305e51f28be76da" gracePeriod=30 Mar 19 11:54:55.159464 master-0 
kubenswrapper[7454]: I0319 11:54:55.159087 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnp9l\" (UniqueName: \"kubernetes.io/projected/0ed7eded-1e67-49ad-9777-c2ed1e006ce3-kube-api-access-jnp9l\") pod \"redhat-marketplace-cjgpg\" (UID: \"0ed7eded-1e67-49ad-9777-c2ed1e006ce3\") " pod="openshift-marketplace/redhat-marketplace-cjgpg" Mar 19 11:54:55.159464 master-0 kubenswrapper[7454]: I0319 11:54:55.159169 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ed7eded-1e67-49ad-9777-c2ed1e006ce3-utilities\") pod \"redhat-marketplace-cjgpg\" (UID: \"0ed7eded-1e67-49ad-9777-c2ed1e006ce3\") " pod="openshift-marketplace/redhat-marketplace-cjgpg" Mar 19 11:54:55.159464 master-0 kubenswrapper[7454]: I0319 11:54:55.159200 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ed7eded-1e67-49ad-9777-c2ed1e006ce3-catalog-content\") pod \"redhat-marketplace-cjgpg\" (UID: \"0ed7eded-1e67-49ad-9777-c2ed1e006ce3\") " pod="openshift-marketplace/redhat-marketplace-cjgpg" Mar 19 11:54:55.260320 master-0 kubenswrapper[7454]: I0319 11:54:55.260253 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ed7eded-1e67-49ad-9777-c2ed1e006ce3-utilities\") pod \"redhat-marketplace-cjgpg\" (UID: \"0ed7eded-1e67-49ad-9777-c2ed1e006ce3\") " pod="openshift-marketplace/redhat-marketplace-cjgpg" Mar 19 11:54:55.260320 master-0 kubenswrapper[7454]: I0319 11:54:55.260322 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ed7eded-1e67-49ad-9777-c2ed1e006ce3-catalog-content\") pod \"redhat-marketplace-cjgpg\" (UID: \"0ed7eded-1e67-49ad-9777-c2ed1e006ce3\") " pod="openshift-marketplace/redhat-marketplace-cjgpg" Mar 19 11:54:55.260555 master-0 kubenswrapper[7454]: I0319 11:54:55.260389 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnp9l\" (UniqueName: \"kubernetes.io/projected/0ed7eded-1e67-49ad-9777-c2ed1e006ce3-kube-api-access-jnp9l\") pod \"redhat-marketplace-cjgpg\" (UID: \"0ed7eded-1e67-49ad-9777-c2ed1e006ce3\") " pod="openshift-marketplace/redhat-marketplace-cjgpg" Mar 19 11:54:55.261134 master-0 kubenswrapper[7454]: I0319 11:54:55.260897 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ed7eded-1e67-49ad-9777-c2ed1e006ce3-utilities\") pod \"redhat-marketplace-cjgpg\" (UID: \"0ed7eded-1e67-49ad-9777-c2ed1e006ce3\") " pod="openshift-marketplace/redhat-marketplace-cjgpg" Mar 19 11:54:55.261134 master-0 kubenswrapper[7454]: I0319 11:54:55.261013 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ed7eded-1e67-49ad-9777-c2ed1e006ce3-catalog-content\") pod \"redhat-marketplace-cjgpg\" (UID: \"0ed7eded-1e67-49ad-9777-c2ed1e006ce3\") " pod="openshift-marketplace/redhat-marketplace-cjgpg" Mar 19 11:54:56.086385 master-0 kubenswrapper[7454]: I0319 11:54:56.086255 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" 
event={"ID":"91112ce6-4f9d-44c1-a4e7-fea126554bcf","Type":"ContainerStarted","Data":"2f120a0d94fdbfa9eb3c076343f202eb79687478095e8ae9cb88dc10339e167a"} Mar 19 11:54:56.088267 master-0 kubenswrapper[7454]: I0319 11:54:56.088217 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-z8xf6" event={"ID":"882fd952-1914-47be-96bf-cac6341ca877","Type":"ContainerStarted","Data":"febb17ecbe7c98a535563bf35e76a4f8f883191a7467ea656f8782b20384067b"} Mar 19 11:54:56.088532 master-0 kubenswrapper[7454]: I0319 11:54:56.088472 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-z8xf6" Mar 19 11:54:56.090170 master-0 kubenswrapper[7454]: I0319 11:54:56.090084 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-g7mqg" event={"ID":"0e25d4ed-4ad0-4706-ad25-7822c9a1d07e","Type":"ContainerStarted","Data":"dcaa7d8304a0f560a648a62a9585618cb4268a8bf7b143a02dfc3ec440b73d05"} Mar 19 11:54:56.091362 master-0 kubenswrapper[7454]: I0319 11:54:56.091326 7454 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-69c6b55594-z8xf6 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.128.0.59:8443/healthz\": dial tcp 10.128.0.59:8443: connect: connection refused" start-of-body= Mar 19 11:54:56.091430 master-0 kubenswrapper[7454]: I0319 11:54:56.091369 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-z8xf6" podUID="882fd952-1914-47be-96bf-cac6341ca877" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.128.0.59:8443/healthz\": dial tcp 10.128.0.59:8443: connect: connection refused" Mar 19 11:54:56.092845 master-0 kubenswrapper[7454]: I0319 11:54:56.092788 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"fdd600f8cdf3f0f95b3056a22a1e42b087a6ae97aca51e424c6d9174012b4280"} Mar 19 11:54:56.656713 master-0 kubenswrapper[7454]: I0319 11:54:56.656664 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 11:54:56.657570 master-0 kubenswrapper[7454]: I0319 11:54:56.657535 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Mar 19 11:54:56.657638 master-0 kubenswrapper[7454]: I0319 11:54:56.657596 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Mar 19 11:54:57.101021 master-0 kubenswrapper[7454]: I0319 11:54:57.100898 7454 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-69c6b55594-z8xf6 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.128.0.59:8443/healthz\": dial tcp 10.128.0.59:8443: connect: 
connection refused" start-of-body= Mar 19 11:54:57.101021 master-0 kubenswrapper[7454]: I0319 11:54:57.100993 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-z8xf6" podUID="882fd952-1914-47be-96bf-cac6341ca877" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.128.0.59:8443/healthz\": dial tcp 10.128.0.59:8443: connect: connection refused" Mar 19 11:54:57.613598 master-0 kubenswrapper[7454]: I0319 11:54:57.613515 7454 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-69c6b55594-z8xf6 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.128.0.59:8443/healthz\": dial tcp 10.128.0.59:8443: connect: connection refused" start-of-body= Mar 19 11:54:57.613598 master-0 kubenswrapper[7454]: I0319 11:54:57.613583 7454 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-z8xf6" podUID="882fd952-1914-47be-96bf-cac6341ca877" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.128.0.59:8443/healthz\": dial tcp 10.128.0.59:8443: connect: connection refused" Mar 19 11:54:57.614674 master-0 kubenswrapper[7454]: I0319 11:54:57.614625 7454 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-69c6b55594-z8xf6 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.128.0.59:8443/healthz\": dial tcp 10.128.0.59:8443: connect: connection refused" start-of-body= Mar 19 11:54:57.614740 master-0 kubenswrapper[7454]: I0319 11:54:57.614705 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-z8xf6" podUID="882fd952-1914-47be-96bf-cac6341ca877" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.128.0.59:8443/healthz\": dial tcp 10.128.0.59:8443: connect: connection refused" Mar 19 11:54:57.657188 master-0 kubenswrapper[7454]: I0319 11:54:57.657162 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 11:54:57.657417 master-0 kubenswrapper[7454]: I0319 11:54:57.657398 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Mar 19 11:54:57.657529 master-0 kubenswrapper[7454]: I0319 11:54:57.657505 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Mar 19 11:54:58.658440 master-0 kubenswrapper[7454]: I0319 11:54:58.658332 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Mar 19 11:54:58.659366 master-0 kubenswrapper[7454]: I0319 11:54:58.658441 7454 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Mar 19 11:54:59.659358 master-0 kubenswrapper[7454]: I0319 11:54:59.659151 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Mar 19 11:54:59.659895 master-0 kubenswrapper[7454]: I0319 11:54:59.659372 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Mar 19 11:55:01.572054 master-0 kubenswrapper[7454]: I0319 11:55:01.562856 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Mar 19 11:55:01.572054 master-0 kubenswrapper[7454]: I0319 11:55:01.562905 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Mar 19 11:55:01.661185 master-0 kubenswrapper[7454]: I0319 11:55:01.661137 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Mar 19 11:55:01.662975 master-0 kubenswrapper[7454]: I0319 11:55:01.661200 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Mar 19 11:55:02.658852 master-0 kubenswrapper[7454]: I0319 11:55:02.658001 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Mar 19 11:55:02.658852 master-0 kubenswrapper[7454]: I0319 11:55:02.658089 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Mar 19 11:55:04.748085 master-0 kubenswrapper[7454]: I0319 11:55:04.746079 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Mar 19 11:55:04.748085 master-0 kubenswrapper[7454]: I0319 
11:55:04.746229 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Mar 19 11:55:05.658229 master-0 kubenswrapper[7454]: I0319 11:55:05.658089 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Mar 19 11:55:05.658229 master-0 kubenswrapper[7454]: I0319 11:55:05.658152 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Mar 19 11:55:06.658940 master-0 kubenswrapper[7454]: I0319 11:55:06.658845 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Mar 19 11:55:06.658940 master-0 kubenswrapper[7454]: I0319 11:55:06.658925 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Mar 19 11:55:07.614532 master-0 kubenswrapper[7454]: I0319 11:55:07.614430 7454 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-69c6b55594-z8xf6 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.128.0.59:8443/healthz\": dial tcp 10.128.0.59:8443: connect: connection refused" start-of-body= Mar 19 11:55:07.614532 master-0 kubenswrapper[7454]: I0319 11:55:07.614518 7454 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-z8xf6" podUID="882fd952-1914-47be-96bf-cac6341ca877" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.128.0.59:8443/healthz\": dial tcp 10.128.0.59:8443: connect: connection refused" Mar 19 11:55:07.614933 master-0 kubenswrapper[7454]: I0319 11:55:07.614582 7454 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-69c6b55594-z8xf6 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.128.0.59:8443/healthz\": dial tcp 10.128.0.59:8443: connect: connection refused" start-of-body= Mar 19 11:55:07.614933 master-0 kubenswrapper[7454]: I0319 11:55:07.614613 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-z8xf6" podUID="882fd952-1914-47be-96bf-cac6341ca877" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.128.0.59:8443/healthz\": dial tcp 10.128.0.59:8443: connect: connection refused" Mar 19 11:55:07.658975 master-0 kubenswrapper[7454]: I0319 11:55:07.658577 7454 patch_prober.go:28] 
interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Mar 19 11:55:07.658975 master-0 kubenswrapper[7454]: I0319 11:55:07.658681 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Mar 19 11:55:08.658430 master-0 kubenswrapper[7454]: I0319 11:55:08.658310 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Mar 19 11:55:08.658430 master-0 kubenswrapper[7454]: I0319 11:55:08.658403 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Mar 19 11:55:09.657595 master-0 kubenswrapper[7454]: I0319 11:55:09.657471 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Mar 19 11:55:09.658131 master-0 kubenswrapper[7454]: I0319 11:55:09.657597 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Mar 19 11:55:10.528024 master-0 kubenswrapper[7454]: I0319 11:55:10.527969 7454 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 19 11:55:10.660519 master-0 kubenswrapper[7454]: I0319 11:55:10.660375 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:55:10.660519 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:55:10.660519 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:55:10.660519 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:55:10.660519 master-0 kubenswrapper[7454]: I0319 11:55:10.660449 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:55:10.713287 master-0 kubenswrapper[7454]: I0319 11:55:10.713232 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11f83dfb-da04-483f-b281-ebdb39f3ab27-kubelet-dir\") pod \"11f83dfb-da04-483f-b281-ebdb39f3ab27\" (UID: \"11f83dfb-da04-483f-b281-ebdb39f3ab27\") " Mar 19 11:55:10.713669 master-0 kubenswrapper[7454]: I0319 11:55:10.713359 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11f83dfb-da04-483f-b281-ebdb39f3ab27-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "11f83dfb-da04-483f-b281-ebdb39f3ab27" (UID: "11f83dfb-da04-483f-b281-ebdb39f3ab27"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:55:10.713669 master-0 kubenswrapper[7454]: I0319 11:55:10.713529 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/11f83dfb-da04-483f-b281-ebdb39f3ab27-var-lock\") pod \"11f83dfb-da04-483f-b281-ebdb39f3ab27\" (UID: \"11f83dfb-da04-483f-b281-ebdb39f3ab27\") " Mar 19 11:55:10.713669 master-0 kubenswrapper[7454]: I0319 11:55:10.713610 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11f83dfb-da04-483f-b281-ebdb39f3ab27-kube-api-access\") pod \"11f83dfb-da04-483f-b281-ebdb39f3ab27\" (UID: \"11f83dfb-da04-483f-b281-ebdb39f3ab27\") " Mar 19 11:55:10.713669 master-0 kubenswrapper[7454]: I0319 11:55:10.713617 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11f83dfb-da04-483f-b281-ebdb39f3ab27-var-lock" (OuterVolumeSpecName: "var-lock") pod "11f83dfb-da04-483f-b281-ebdb39f3ab27" (UID: "11f83dfb-da04-483f-b281-ebdb39f3ab27"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:55:10.713927 master-0 kubenswrapper[7454]: I0319 11:55:10.713908 7454 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11f83dfb-da04-483f-b281-ebdb39f3ab27-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 11:55:10.713927 master-0 kubenswrapper[7454]: I0319 11:55:10.713925 7454 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/11f83dfb-da04-483f-b281-ebdb39f3ab27-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 19 11:55:10.718160 master-0 kubenswrapper[7454]: I0319 11:55:10.718110 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11f83dfb-da04-483f-b281-ebdb39f3ab27-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "11f83dfb-da04-483f-b281-ebdb39f3ab27" (UID: "11f83dfb-da04-483f-b281-ebdb39f3ab27"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 11:55:10.814874 master-0 kubenswrapper[7454]: I0319 11:55:10.814827 7454 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11f83dfb-da04-483f-b281-ebdb39f3ab27-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 19 11:55:10.820668 master-0 kubenswrapper[7454]: I0319 11:55:10.820626 7454 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="fdd600f8cdf3f0f95b3056a22a1e42b087a6ae97aca51e424c6d9174012b4280" exitCode=0 Mar 19 11:55:10.820857 master-0 kubenswrapper[7454]: I0319 11:55:10.820693 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerDied","Data":"fdd600f8cdf3f0f95b3056a22a1e42b087a6ae97aca51e424c6d9174012b4280"} Mar 19 11:55:10.822410 master-0 kubenswrapper[7454]: I0319 11:55:10.822375 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"11f83dfb-da04-483f-b281-ebdb39f3ab27","Type":"ContainerDied","Data":"8bc9b9c94d7c2fc35e88bdf943a6e373d9be7c1dc5c7edff2198406e6c44db25"} Mar 19 11:55:10.822470 master-0 kubenswrapper[7454]: I0319 11:55:10.822421 7454 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bc9b9c94d7c2fc35e88bdf943a6e373d9be7c1dc5c7edff2198406e6c44db25" Mar 19 11:55:10.822516 master-0 kubenswrapper[7454]: I0319 11:55:10.822484 7454 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 19 11:55:11.659958 master-0 kubenswrapper[7454]: I0319 11:55:11.659828 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:55:11.659958 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:55:11.659958 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:55:11.659958 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:55:11.659958 master-0 kubenswrapper[7454]: I0319 11:55:11.659904 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:55:12.650381 master-0 kubenswrapper[7454]: E0319 11:55:12.650081 7454 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T11:55:02Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T11:55:02Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T11:55:02Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T11:55:02Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7\\\"],\\\"sizeBytes\\\":1637455533},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36\\\"],\\\"sizeBytes\\\":1238100502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015\\\"],\\\"sizeBytes\\\":991832673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\\\"],\\\"sizeBytes\\\":943841779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a09f5a3ba4f60cce0145769509bab92553c8075d210af4ac058965d2ae11efa\\\"],\\\"sizeBytes\\\":876160834},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578\\\"],\\\"sizeBytes\\\":862657321},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a9e8da5c6114f062b814936d4db7a47a04d248e160d6bb28ad4e4a081496ee4\\\"],\\\"sizeBytes\\\":772943435},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e1faad2d9167d84e23585c1cea5962301845548043cf09578f943f79ca98016\\\"],\\\"sizeBytes\\\":687949580},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5e12e4dc52214d3ada5ba5106caebe079eac1d9292c2571a5fe83411ce8e900d\\\"],\\\"sizeBytes\\\":683195416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa5e782406f71c048b1ac3a4bf5d1227ff4be81111114083ad4c7a209c6bfb5a\\\"],\\\"sizeBytes\\\":677942383},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f989331
66f69ce579c2f35b8db03d21e4c34fc544553e4\\\"],\\\"sizeBytes\\\":621648710},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae50e496bd6ae2d27298d997470b7cb0a426eeb8b7e2e9c7187a34cb03993998\\\"],\\\"sizeBytes\\\":589386806},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55\\\"],\\\"sizeBytes\\\":582154903},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982\\\"],\\\"sizeBytes\\\":558211175},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7af9f5c5af9d529840233ef4b519120cc0e3f14c4fe28cc43b0823f2c11d8f89\\\"],\\\"sizeBytes\\\":548752816},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\\\"],\\\"sizeBytes\\\":529326739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b4c9cf268bb7abef7af187cd775d3f74d0bd33626250095428d53b705ee946\\\"],\\\"sizeBytes\\\":528956487},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a\\\"],\\\"sizeBytes\\\":518384969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1\\\"],\\\"sizeBytes\\\":517999161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\\\"],\\\"sizeBytes\\\":514984269},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30a2f97d7785ce8b0ea5115e67c4554b64adefbc7856bcf6f4fe6cc7e938a310\\\"],\\\"sizeBytes\\\":513582374},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfe394b58ec6195de8b8420e781b7630d85a412b9112d892fea903f92b783427\\\"],\\\"sizeBytes\\\":513221333},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e\\\"],\\\"sizeBytes\\\":512274055},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77fff570657d2fa0bfb709b2c8b6665bae0bf90a2be981d8dbca56c674715098\\\"],\\\"sizeBytes\\\":511227324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adb9f6f2fd701863c7caed747df43f83d3569ba9388cfa33ea7219ac6a606b11\\\"],\\\"sizeBytes\\\":511164375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458\\\"],\\\"sizeBytes\\\":508888171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263\\\"],\\\"sizeBytes\\\":508544745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71\\\"],\\\"sizeBytes\\\":507972093},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3\\\"],\\\"sizeBytes\\\":506480167},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302\\\"],\\\"sizeBytes\\\":506395599},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4a034950346bcd4e36e9e2f1343e0cf7a10cf544963f33d09c7eb2a1bfc634\\\"],\\\"sizeBytes\\\":505345991},{\\\"names\\\":[\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\\\"],\\\"sizeBytes\\\":505246690},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252\\\"],\\\"sizeBytes\\\":504625081},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:712d334b7752d95580571059aae2c50e111d879af4fd8ea7cc3dbaf1a8e7dc69\\\"],\\\"sizeBytes\\\":495994673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5ea1ef4e09b673a0c68c8848ca162ab11d9ac373a377daa52dea702ffa3023\\\"],\\\"sizeBytes\\\":495065340},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b\\\"],\\\"sizeBytes\\\":487096305},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a746a87b784ea1caa278fd0e012554f9df520b6fff665ea0bc4c83f487fed113\\\"],\\\"sizeBytes\\\":484450894},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c4d5a681595e428ff4b5083648c13615eed80be9084a3d3fc68a0295079cb12\\\"],\\\"sizeBytes\\\":484187929},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f933312f49083e8746fc41ab5e46a9a757b448374f14971e256ebcb36f11dd97\\\"],\\\"sizeBytes\\\":470826739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea5c8a93f30e0a4932da5697d22c0da7eda9a7035c0555eb006b6755e62bb2fc\\\"],\\\"sizeBytes\\\":468265024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\\\"],\\\"sizeBytes\\\":465090934},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9609c00207cc4db97f0fd6162eb429d7f81654137f020a677e30cba26a887a24\\\"],\\\"sizeBytes\\\":463705930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:632e80bba5077068ecca05fddb95aedebad4493af6f36152c01c6ae490975b62\\\"],\\\"sizeBytes\\\":458126937},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bcb08821551e9a5b9f82aa794bcea673279cefb93cb47492e19ccac5e2cf18fe\\\"],\\\"sizeBytes\\\":456576198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:759fb1d5353dbbadd443f38631d977ca3aed9787b873be05cc9660532a252739\\\"],\\\"sizeBytes\\\":448828620},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3062f6485aec4770e60852b535c69a42527b305161fe856499c8658ead6d1e85\\\"],\\\"sizeBytes\\\":448042136},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951ecfeba9b2da4b653034d09275f925396a79c2d8461b8a7c71c776fee67ba0\\\"],\\\"sizeBytes\\\":443272037},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:292560e2d80b460468bb19fe0ddf289767c655027b03a76ee6c40c91ffe4c483\\\"],\\\"sizeBytes\\\":438654374},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e66fd50be6f83ce321a566dfb76f3725b597374077d5af13813b928f6b1267e\\\"],\\\"sizeBytes\\\":411587146},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3a494212f1ba17f0f0980eef583218330eccb56eadf6b8cb0548c76d99b5014\\\"],\\\"sizeBytes\\\":407347125}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 11:55:12.659667 master-0 kubenswrapper[7454]: I0319 11:55:12.659623 7454 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:55:12.659667 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:55:12.659667 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:55:12.659667 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:55:12.659929 master-0 kubenswrapper[7454]: I0319 11:55:12.659673 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:55:13.054080 master-0 kubenswrapper[7454]: I0319 11:55:13.054025 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 19 11:55:13.350893 master-0 kubenswrapper[7454]: I0319 11:55:13.350671 7454 prober.go:107] "Probe failed" probeType="Liveness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 19 11:55:13.659593 master-0 kubenswrapper[7454]: I0319 11:55:13.659452 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:55:13.659593 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:55:13.659593 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:55:13.659593 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:55:13.659593 master-0 kubenswrapper[7454]: I0319 11:55:13.659525 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:55:13.713313 master-0 kubenswrapper[7454]: I0319 11:55:13.713254 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj" Mar 19 11:55:14.556545 master-0 kubenswrapper[7454]: I0319 11:55:14.556457 7454 patch_prober.go:28] interesting pod/authentication-operator-5885bfd7f4-pkgvq container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body= Mar 19 11:55:14.556834 master-0 kubenswrapper[7454]: I0319 11:55:14.556540 7454 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" podUID="d3017b5e-178e-49de-89d2-817a18398203" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection 
refused" Mar 19 11:55:14.659050 master-0 kubenswrapper[7454]: I0319 11:55:14.658986 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:55:14.659050 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:55:14.659050 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:55:14.659050 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:55:14.659443 master-0 kubenswrapper[7454]: I0319 11:55:14.659075 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:55:14.741652 master-0 kubenswrapper[7454]: E0319 11:55:14.741569 7454 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 11:55:14.864259 master-0 kubenswrapper[7454]: I0319 11:55:14.864107 7454 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="570446cbe4fe51c612e56ccc1c781b010d9f51a4701a23ab3e0e9c3afd18acfd" exitCode=1 Mar 19 11:55:14.864259 master-0 kubenswrapper[7454]: I0319 11:55:14.864184 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"570446cbe4fe51c612e56ccc1c781b010d9f51a4701a23ab3e0e9c3afd18acfd"} Mar 19 11:55:14.864259 master-0 kubenswrapper[7454]: I0319 11:55:14.864255 7454 scope.go:117] "RemoveContainer" containerID="f7123f20a535bea151420277445f140ddc0e3200c0d15a65bcdb6b9d86c90ca9" Mar 19 11:55:14.864662 master-0 kubenswrapper[7454]: I0319 11:55:14.864639 7454 scope.go:117] "RemoveContainer" containerID="570446cbe4fe51c612e56ccc1c781b010d9f51a4701a23ab3e0e9c3afd18acfd" Mar 19 11:55:14.866173 master-0 kubenswrapper[7454]: I0319 11:55:14.866154 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-gx4w8_9ed2dbd1-aec4-4009-917a-933533912ab5/openshift-controller-manager-operator/0.log" Mar 19 11:55:14.866233 master-0 kubenswrapper[7454]: I0319 11:55:14.866191 7454 generic.go:334] "Generic (PLEG): container finished" podID="9ed2dbd1-aec4-4009-917a-933533912ab5" containerID="fc5332ce9b6e52d47f6ebb8b58ad2c77aaab22f1f6505f1913fed9b59e6a2824" exitCode=1 Mar 19 11:55:14.866233 master-0 kubenswrapper[7454]: I0319 11:55:14.866238 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8" event={"ID":"9ed2dbd1-aec4-4009-917a-933533912ab5","Type":"ContainerDied","Data":"fc5332ce9b6e52d47f6ebb8b58ad2c77aaab22f1f6505f1913fed9b59e6a2824"} Mar 19 11:55:14.866456 master-0 kubenswrapper[7454]: I0319 11:55:14.866442 7454 scope.go:117] "RemoveContainer" containerID="fc5332ce9b6e52d47f6ebb8b58ad2c77aaab22f1f6505f1913fed9b59e6a2824" Mar 19 11:55:14.868164 master-0 kubenswrapper[7454]: I0319 11:55:14.868147 7454 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_f7fd0b13-489f-42b7-a52a-6194fdc9f665/installer/0.log" Mar 19 11:55:14.868223 master-0 kubenswrapper[7454]: I0319 11:55:14.868184 7454 generic.go:334] "Generic (PLEG): container finished" podID="f7fd0b13-489f-42b7-a52a-6194fdc9f665" containerID="65da2f47f4c8263662f98db014676bd0876e60b79722705d3aa8abd4a7e835b8" exitCode=1 Mar 19 11:55:14.868253 master-0 kubenswrapper[7454]: I0319 11:55:14.868233 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"f7fd0b13-489f-42b7-a52a-6194fdc9f665","Type":"ContainerDied","Data":"65da2f47f4c8263662f98db014676bd0876e60b79722705d3aa8abd4a7e835b8"} Mar 19 11:55:14.869534 master-0 kubenswrapper[7454]: I0319 11:55:14.869495 7454 generic.go:334] "Generic (PLEG): container finished" podID="c83737980b9ee109184b1d78e942cf36" containerID="6606dc49963e1cc0f10c3000efffd7cbb91c76beb712be6d1c6cb91c1b4a7c79" exitCode=1 Mar 19 11:55:14.869595 master-0 kubenswrapper[7454]: I0319 11:55:14.869533 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerDied","Data":"6606dc49963e1cc0f10c3000efffd7cbb91c76beb712be6d1c6cb91c1b4a7c79"} Mar 19 11:55:14.869870 master-0 kubenswrapper[7454]: I0319 11:55:14.869850 7454 scope.go:117] "RemoveContainer" containerID="6606dc49963e1cc0f10c3000efffd7cbb91c76beb712be6d1c6cb91c1b4a7c79" Mar 19 11:55:15.659155 master-0 kubenswrapper[7454]: I0319 11:55:15.659105 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:55:15.659155 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:55:15.659155 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:55:15.659155 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:55:15.659601 master-0 kubenswrapper[7454]: I0319 11:55:15.659173 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:55:16.658902 master-0 kubenswrapper[7454]: I0319 11:55:16.658850 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:55:16.658902 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:55:16.658902 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:55:16.658902 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:55:16.659609 master-0 kubenswrapper[7454]: I0319 11:55:16.658931 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:55:17.620339 master-0 kubenswrapper[7454]: I0319 11:55:17.620288 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-z8xf6" Mar 19 11:55:17.659139 
master-0 kubenswrapper[7454]: I0319 11:55:17.659092 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:55:17.659139 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:55:17.659139 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:55:17.659139 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:55:17.659851 master-0 kubenswrapper[7454]: I0319 11:55:17.659158 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:55:18.658825 master-0 kubenswrapper[7454]: I0319 11:55:18.658781 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:55:18.658825 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:55:18.658825 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:55:18.658825 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:55:18.659050 master-0 kubenswrapper[7454]: I0319 11:55:18.658841 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:55:19.659597 master-0 kubenswrapper[7454]: I0319 11:55:19.659530 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:55:19.659597 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:55:19.659597 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:55:19.659597 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:55:19.660197 master-0 kubenswrapper[7454]: I0319 11:55:19.659619 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:55:20.658975 master-0 kubenswrapper[7454]: I0319 11:55:20.658931 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:55:20.658975 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:55:20.658975 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:55:20.658975 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:55:20.659243 master-0 kubenswrapper[7454]: I0319 11:55:20.658989 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 
11:55:21.659148 master-0 kubenswrapper[7454]: I0319 11:55:21.659092 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:55:21.659148 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:55:21.659148 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:55:21.659148 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:55:21.659841 master-0 kubenswrapper[7454]: I0319 11:55:21.659150 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:55:22.342757 master-0 kubenswrapper[7454]: I0319 11:55:22.342677 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 11:55:22.650434 master-0 kubenswrapper[7454]: E0319 11:55:22.650380 7454 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded"
Mar 19 11:55:22.658347 master-0 kubenswrapper[7454]: I0319 11:55:22.658300 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:55:22.658347 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:55:22.658347 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:55:22.658347 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:55:22.658528 master-0 kubenswrapper[7454]: I0319 11:55:22.658381 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:55:23.040174 master-0 kubenswrapper[7454]: I0319 11:55:23.040131 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_f7fd0b13-489f-42b7-a52a-6194fdc9f665/installer/0.log"
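
The "Error updating node status, will retry" entry above, like the earlier "Failed to update lease" error, shows the kubelet timing out against https://api-int.sno.openstack.lab:6443 while the bootstrap control plane recycles. A sketch that surfaces both error families from a dump like this one; the pattern is escape-aware because the err strings contain literal \" escapes:

```python
#!/usr/bin/env python3
"""Surface kubelet -> API server connectivity errors (node-status patches
and node-lease updates) from a journal dump. Sketch; patterns follow the
kubelet_node_status.go and controller.go messages in this log."""
import re
import sys

# ((?:[^"\\]|\\.)*) consumes escaped characters so \" inside err= is kept.
PATTERNS = [
    ("node-status", re.compile(r'"Error updating node status, will retry" err="((?:[^"\\]|\\.)*)"')),
    ("node-lease", re.compile(r'"Failed to update lease" err="((?:[^"\\]|\\.)*)"')),
]

for line in open(sys.argv[1]):
    for kind, pat in PATTERNS:
        m = pat.search(line)
        if m:
            # Keep just the head of the (sometimes enormous) error string.
            print(f"{kind}: {m.group(1)[:120]}")
```
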
Mar 19 11:55:23.040665 master-0 kubenswrapper[7454]: I0319 11:55:23.040205 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 19 11:55:23.053440 master-0 kubenswrapper[7454]: I0319 11:55:23.053354 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 11:55:23.182592 master-0 kubenswrapper[7454]: I0319 11:55:23.182328 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f7fd0b13-489f-42b7-a52a-6194fdc9f665-kubelet-dir\") pod \"f7fd0b13-489f-42b7-a52a-6194fdc9f665\" (UID: \"f7fd0b13-489f-42b7-a52a-6194fdc9f665\") "
Mar 19 11:55:23.182592 master-0 kubenswrapper[7454]: I0319 11:55:23.182404 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7fd0b13-489f-42b7-a52a-6194fdc9f665-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f7fd0b13-489f-42b7-a52a-6194fdc9f665" (UID: "f7fd0b13-489f-42b7-a52a-6194fdc9f665"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 11:55:23.182592 master-0 kubenswrapper[7454]: I0319 11:55:23.182441 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7fd0b13-489f-42b7-a52a-6194fdc9f665-kube-api-access\") pod \"f7fd0b13-489f-42b7-a52a-6194fdc9f665\" (UID: \"f7fd0b13-489f-42b7-a52a-6194fdc9f665\") "
Mar 19 11:55:23.182592 master-0 kubenswrapper[7454]: I0319 11:55:23.182513 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7fd0b13-489f-42b7-a52a-6194fdc9f665-var-lock\") pod \"f7fd0b13-489f-42b7-a52a-6194fdc9f665\" (UID: \"f7fd0b13-489f-42b7-a52a-6194fdc9f665\") "
Mar 19 11:55:23.182877 master-0 kubenswrapper[7454]: I0319 11:55:23.182702 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7fd0b13-489f-42b7-a52a-6194fdc9f665-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7fd0b13-489f-42b7-a52a-6194fdc9f665" (UID: "f7fd0b13-489f-42b7-a52a-6194fdc9f665"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 11:55:23.182877 master-0 kubenswrapper[7454]: I0319 11:55:23.182871 7454 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7fd0b13-489f-42b7-a52a-6194fdc9f665-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 19 11:55:23.182943 master-0 kubenswrapper[7454]: I0319 11:55:23.182886 7454 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f7fd0b13-489f-42b7-a52a-6194fdc9f665-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 19 11:55:23.185295 master-0 kubenswrapper[7454]: I0319 11:55:23.185261 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7fd0b13-489f-42b7-a52a-6194fdc9f665-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f7fd0b13-489f-42b7-a52a-6194fdc9f665" (UID: "f7fd0b13-489f-42b7-a52a-6194fdc9f665"). InnerVolumeSpecName "kube-api-access".
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 11:55:23.283582 master-0 kubenswrapper[7454]: I0319 11:55:23.283542 7454 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7fd0b13-489f-42b7-a52a-6194fdc9f665-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 19 11:55:23.353143 master-0 kubenswrapper[7454]: I0319 11:55:23.353093 7454 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:55:23.660047 master-0 kubenswrapper[7454]: I0319 11:55:23.659987 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:55:23.660047 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:55:23.660047 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:55:23.660047 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:55:23.660443 master-0 kubenswrapper[7454]: I0319 11:55:23.660057 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:55:23.914024 master-0 kubenswrapper[7454]: I0319 11:55:23.913664 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_f7fd0b13-489f-42b7-a52a-6194fdc9f665/installer/0.log" Mar 19 11:55:23.914024 master-0 kubenswrapper[7454]: I0319 11:55:23.913820 7454 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 19 11:55:23.914485 master-0 kubenswrapper[7454]: I0319 11:55:23.914443 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"f7fd0b13-489f-42b7-a52a-6194fdc9f665","Type":"ContainerDied","Data":"d8308efe72c7c6664abd233543bc59b7b4013bcb4b0b94da4d2f18534b26e9f7"} Mar 19 11:55:23.914715 master-0 kubenswrapper[7454]: I0319 11:55:23.914681 7454 scope.go:117] "RemoveContainer" containerID="65da2f47f4c8263662f98db014676bd0876e60b79722705d3aa8abd4a7e835b8" Mar 19 11:55:23.917776 master-0 kubenswrapper[7454]: I0319 11:55:23.917733 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kj4wv" event={"ID":"903d114c-199f-46f9-b39b-afa52df71ea9","Type":"ContainerStarted","Data":"8caa550d637bf8158cfa690d480d13253effda2df95bdebb7484c3a287ec13ac"} Mar 19 11:55:23.920561 master-0 kubenswrapper[7454]: I0319 11:55:23.920511 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"5ae4788fe8a4fbccec56e9e4515eedb286ece7ed48749691d96f6fb8097bac2c"} Mar 19 11:55:23.925648 master-0 kubenswrapper[7454]: I0319 11:55:23.925616 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"84d3d820222fb615bbd44a77bcef4de4c96b78f545a4bc5490f5fa77f0e958e7"} Mar 19 11:55:23.930062 master-0 kubenswrapper[7454]: I0319 11:55:23.930015 7454 generic.go:334] "Generic (PLEG): container finished" podID="1370cf76-52c4-4f19-8dfc-794f2901f8a6" containerID="a73ef6ab87ac7f56538653c5c86cb08ccd6e50dfb5fbf1edd61c934ee0c0aadc" exitCode=0 Mar 19 11:55:23.930175 master-0 kubenswrapper[7454]: I0319 11:55:23.930077 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-flnbx" event={"ID":"1370cf76-52c4-4f19-8dfc-794f2901f8a6","Type":"ContainerDied","Data":"a73ef6ab87ac7f56538653c5c86cb08ccd6e50dfb5fbf1edd61c934ee0c0aadc"} Mar 19 11:55:23.942074 master-0 kubenswrapper[7454]: I0319 11:55:23.942019 7454 generic.go:334] "Generic (PLEG): container finished" podID="db75b266-69c4-4790-82f1-43168b5bb6a0" containerID="905cdb7e9c876a91b59fac6f4367cfcaa957cfa65f57c8c7566420ea635f6e6d" exitCode=0 Mar 19 11:55:23.942261 master-0 kubenswrapper[7454]: I0319 11:55:23.942098 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkx8c" event={"ID":"db75b266-69c4-4790-82f1-43168b5bb6a0","Type":"ContainerDied","Data":"905cdb7e9c876a91b59fac6f4367cfcaa957cfa65f57c8c7566420ea635f6e6d"} Mar 19 11:55:23.951917 master-0 kubenswrapper[7454]: I0319 11:55:23.951882 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-gx4w8_9ed2dbd1-aec4-4009-917a-933533912ab5/openshift-controller-manager-operator/0.log" Mar 19 11:55:23.952083 master-0 kubenswrapper[7454]: I0319 11:55:23.951959 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8" event={"ID":"9ed2dbd1-aec4-4009-917a-933533912ab5","Type":"ContainerStarted","Data":"24fd9caa7952430318d8f0070bff5d8f9a23ccd510c898e8d4b008fdb27da600"} Mar 19 11:55:23.956009 master-0 
kubenswrapper[7454]: I0319 11:55:23.955982 7454 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="1997d87abd59fc12165851e197aa04b956b4477ab2970792d896817a67fd51a4" exitCode=0 Mar 19 11:55:23.956130 master-0 kubenswrapper[7454]: I0319 11:55:23.956066 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerDied","Data":"1997d87abd59fc12165851e197aa04b956b4477ab2970792d896817a67fd51a4"} Mar 19 11:55:23.975883 master-0 kubenswrapper[7454]: I0319 11:55:23.960821 7454 generic.go:334] "Generic (PLEG): container finished" podID="77497070-ffa8-45e5-935d-5281828d6962" containerID="4d2f9c965608f2ecf77aeff13b4656c12f2fe4c17a81e488d5e58ef86cf4113b" exitCode=0 Mar 19 11:55:23.975883 master-0 kubenswrapper[7454]: I0319 11:55:23.960886 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p225c" event={"ID":"77497070-ffa8-45e5-935d-5281828d6962","Type":"ContainerDied","Data":"4d2f9c965608f2ecf77aeff13b4656c12f2fe4c17a81e488d5e58ef86cf4113b"} Mar 19 11:55:23.975883 master-0 kubenswrapper[7454]: I0319 11:55:23.964223 7454 generic.go:334] "Generic (PLEG): container finished" podID="d664a6d0d2a24360dee10612610f1b59" containerID="5966cfc73c8dc098af4cd51014c727f29c95ad4372f9a6d75305e51f28be76da" exitCode=0 Mar 19 11:55:23.986526 master-0 kubenswrapper[7454]: I0319 11:55:23.986388 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body= Mar 19 11:55:23.986526 master-0 kubenswrapper[7454]: I0319 11:55:23.986465 7454 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" Mar 19 11:55:24.204912 master-0 kubenswrapper[7454]: I0319 11:55:24.204853 7454 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bkx8c" Mar 19 11:55:24.395987 master-0 kubenswrapper[7454]: I0319 11:55:24.395871 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjz8g\" (UniqueName: \"kubernetes.io/projected/db75b266-69c4-4790-82f1-43168b5bb6a0-kube-api-access-pjz8g\") pod \"db75b266-69c4-4790-82f1-43168b5bb6a0\" (UID: \"db75b266-69c4-4790-82f1-43168b5bb6a0\") " Mar 19 11:55:24.396298 master-0 kubenswrapper[7454]: I0319 11:55:24.396112 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db75b266-69c4-4790-82f1-43168b5bb6a0-catalog-content\") pod \"db75b266-69c4-4790-82f1-43168b5bb6a0\" (UID: \"db75b266-69c4-4790-82f1-43168b5bb6a0\") " Mar 19 11:55:24.396298 master-0 kubenswrapper[7454]: I0319 11:55:24.396275 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db75b266-69c4-4790-82f1-43168b5bb6a0-utilities\") pod \"db75b266-69c4-4790-82f1-43168b5bb6a0\" (UID: \"db75b266-69c4-4790-82f1-43168b5bb6a0\") " Mar 19 11:55:24.397773 master-0 kubenswrapper[7454]: I0319 11:55:24.397698 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db75b266-69c4-4790-82f1-43168b5bb6a0-utilities" (OuterVolumeSpecName: "utilities") pod "db75b266-69c4-4790-82f1-43168b5bb6a0" (UID: "db75b266-69c4-4790-82f1-43168b5bb6a0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 11:55:24.399846 master-0 kubenswrapper[7454]: I0319 11:55:24.399730 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db75b266-69c4-4790-82f1-43168b5bb6a0-kube-api-access-pjz8g" (OuterVolumeSpecName: "kube-api-access-pjz8g") pod "db75b266-69c4-4790-82f1-43168b5bb6a0" (UID: "db75b266-69c4-4790-82f1-43168b5bb6a0"). InnerVolumeSpecName "kube-api-access-pjz8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 11:55:24.461918 master-0 kubenswrapper[7454]: I0319 11:55:24.461656 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db75b266-69c4-4790-82f1-43168b5bb6a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db75b266-69c4-4790-82f1-43168b5bb6a0" (UID: "db75b266-69c4-4790-82f1-43168b5bb6a0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 11:55:24.499095 master-0 kubenswrapper[7454]: I0319 11:55:24.499015 7454 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjz8g\" (UniqueName: \"kubernetes.io/projected/db75b266-69c4-4790-82f1-43168b5bb6a0-kube-api-access-pjz8g\") on node \"master-0\" DevicePath \"\"" Mar 19 11:55:24.499095 master-0 kubenswrapper[7454]: I0319 11:55:24.499079 7454 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db75b266-69c4-4790-82f1-43168b5bb6a0-catalog-content\") on node \"master-0\" DevicePath \"\"" Mar 19 11:55:24.499095 master-0 kubenswrapper[7454]: I0319 11:55:24.499096 7454 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db75b266-69c4-4790-82f1-43168b5bb6a0-utilities\") on node \"master-0\" DevicePath \"\"" Mar 19 11:55:24.556598 master-0 kubenswrapper[7454]: I0319 11:55:24.556540 7454 patch_prober.go:28] interesting pod/authentication-operator-5885bfd7f4-pkgvq container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body= Mar 19 11:55:24.556773 master-0 kubenswrapper[7454]: I0319 11:55:24.556610 7454 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" podUID="d3017b5e-178e-49de-89d2-817a18398203" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" Mar 19 11:55:24.660470 master-0 kubenswrapper[7454]: I0319 11:55:24.660394 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:55:24.660470 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:55:24.660470 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:55:24.660470 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:55:24.660895 master-0 kubenswrapper[7454]: I0319 11:55:24.660489 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:55:24.743135 master-0 kubenswrapper[7454]: E0319 11:55:24.743042 7454 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" Mar 19 11:55:24.972271 master-0 kubenswrapper[7454]: I0319 11:55:24.972157 7454 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="0d9f4d5c57a3e2693c6c9591c7e86b98f1d2ab85c4a622f907e544850edaa7ba" exitCode=0 Mar 19 11:55:24.972271 master-0 kubenswrapper[7454]: I0319 11:55:24.972236 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerDied","Data":"0d9f4d5c57a3e2693c6c9591c7e86b98f1d2ab85c4a622f907e544850edaa7ba"} Mar 19 11:55:24.974293 master-0 kubenswrapper[7454]: I0319 
11:55:24.974233 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkx8c" event={"ID":"db75b266-69c4-4790-82f1-43168b5bb6a0","Type":"ContainerDied","Data":"752facb6414da1569fad0463b07e934509c70b6b2be4eded4b6f87f247f658ac"} Mar 19 11:55:24.974391 master-0 kubenswrapper[7454]: I0319 11:55:24.974305 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bkx8c" Mar 19 11:55:24.974515 master-0 kubenswrapper[7454]: I0319 11:55:24.974311 7454 scope.go:117] "RemoveContainer" containerID="905cdb7e9c876a91b59fac6f4367cfcaa957cfa65f57c8c7566420ea635f6e6d" Mar 19 11:55:24.978844 master-0 kubenswrapper[7454]: I0319 11:55:24.978762 7454 generic.go:334] "Generic (PLEG): container finished" podID="903d114c-199f-46f9-b39b-afa52df71ea9" containerID="8caa550d637bf8158cfa690d480d13253effda2df95bdebb7484c3a287ec13ac" exitCode=0 Mar 19 11:55:24.978938 master-0 kubenswrapper[7454]: I0319 11:55:24.978859 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kj4wv" event={"ID":"903d114c-199f-46f9-b39b-afa52df71ea9","Type":"ContainerDied","Data":"8caa550d637bf8158cfa690d480d13253effda2df95bdebb7484c3a287ec13ac"} Mar 19 11:55:24.997636 master-0 kubenswrapper[7454]: I0319 11:55:24.997564 7454 scope.go:117] "RemoveContainer" containerID="2c55cfbfdd95ef2dfe0541bb2247f3e0696ee475fccd5a1a3f51314f177793a7" Mar 19 11:55:25.659775 master-0 kubenswrapper[7454]: I0319 11:55:25.659682 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:55:25.659775 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:55:25.659775 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:55:25.659775 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:55:25.659775 master-0 kubenswrapper[7454]: I0319 11:55:25.659762 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:55:25.832020 master-0 kubenswrapper[7454]: I0319 11:55:25.831979 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_d664a6d0d2a24360dee10612610f1b59/etcdctl/0.log" Mar 19 11:55:25.832256 master-0 kubenswrapper[7454]: I0319 11:55:25.832061 7454 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 19 11:55:25.862393 master-0 kubenswrapper[7454]: I0319 11:55:25.862336 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body= Mar 19 11:55:25.862568 master-0 kubenswrapper[7454]: I0319 11:55:25.862415 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" Mar 19 11:55:25.990540 master-0 kubenswrapper[7454]: I0319 11:55:25.990476 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-flnbx" event={"ID":"1370cf76-52c4-4f19-8dfc-794f2901f8a6","Type":"ContainerStarted","Data":"119bf6827db5b99fa9f0be9581be040a80edb534abd0a1348f3d58833768911f"} Mar 19 11:55:25.992994 master-0 kubenswrapper[7454]: I0319 11:55:25.992960 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_d664a6d0d2a24360dee10612610f1b59/etcdctl/0.log" Mar 19 11:55:25.993089 master-0 kubenswrapper[7454]: I0319 11:55:25.993039 7454 generic.go:334] "Generic (PLEG): container finished" podID="d664a6d0d2a24360dee10612610f1b59" containerID="1f3d2affaef2ec02f4c8910d24c25e42512158f294dc5c7de4fd47923e7552fe" exitCode=137 Mar 19 11:55:25.993156 master-0 kubenswrapper[7454]: I0319 11:55:25.993124 7454 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 19 11:55:25.993194 master-0 kubenswrapper[7454]: I0319 11:55:25.993164 7454 scope.go:117] "RemoveContainer" containerID="5966cfc73c8dc098af4cd51014c727f29c95ad4372f9a6d75305e51f28be76da" Mar 19 11:55:25.997770 master-0 kubenswrapper[7454]: I0319 11:55:25.997734 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"fc9e72422a0246db78ed7d7b829fa16f2e8eddf756aaf9341f686725870d6083"} Mar 19 11:55:26.011563 master-0 kubenswrapper[7454]: I0319 11:55:26.011526 7454 scope.go:117] "RemoveContainer" containerID="1f3d2affaef2ec02f4c8910d24c25e42512158f294dc5c7de4fd47923e7552fe" Mar 19 11:55:26.020656 master-0 kubenswrapper[7454]: I0319 11:55:26.020628 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"d664a6d0d2a24360dee10612610f1b59\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " Mar 19 11:55:26.020753 master-0 kubenswrapper[7454]: I0319 11:55:26.020713 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"d664a6d0d2a24360dee10612610f1b59\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " Mar 19 11:55:26.020815 master-0 kubenswrapper[7454]: I0319 11:55:26.020773 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs" (OuterVolumeSpecName: "certs") pod "d664a6d0d2a24360dee10612610f1b59" (UID: "d664a6d0d2a24360dee10612610f1b59"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:55:26.020868 master-0 kubenswrapper[7454]: I0319 11:55:26.020792 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir" (OuterVolumeSpecName: "data-dir") pod "d664a6d0d2a24360dee10612610f1b59" (UID: "d664a6d0d2a24360dee10612610f1b59"). InnerVolumeSpecName "data-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:55:26.020995 master-0 kubenswrapper[7454]: I0319 11:55:26.020977 7454 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") on node \"master-0\" DevicePath \"\"" Mar 19 11:55:26.021043 master-0 kubenswrapper[7454]: I0319 11:55:26.021000 7454 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 11:55:26.031497 master-0 kubenswrapper[7454]: I0319 11:55:26.031449 7454 scope.go:117] "RemoveContainer" containerID="5966cfc73c8dc098af4cd51014c727f29c95ad4372f9a6d75305e51f28be76da" Mar 19 11:55:26.033609 master-0 kubenswrapper[7454]: E0319 11:55:26.033540 7454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5966cfc73c8dc098af4cd51014c727f29c95ad4372f9a6d75305e51f28be76da\": container with ID starting with 5966cfc73c8dc098af4cd51014c727f29c95ad4372f9a6d75305e51f28be76da not found: ID does not exist" containerID="5966cfc73c8dc098af4cd51014c727f29c95ad4372f9a6d75305e51f28be76da" Mar 19 11:55:26.033703 master-0 kubenswrapper[7454]: I0319 11:55:26.033614 7454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5966cfc73c8dc098af4cd51014c727f29c95ad4372f9a6d75305e51f28be76da"} err="failed to get container status \"5966cfc73c8dc098af4cd51014c727f29c95ad4372f9a6d75305e51f28be76da\": rpc error: code = NotFound desc = could not find container \"5966cfc73c8dc098af4cd51014c727f29c95ad4372f9a6d75305e51f28be76da\": container with ID starting with 5966cfc73c8dc098af4cd51014c727f29c95ad4372f9a6d75305e51f28be76da not found: ID does not exist" Mar 19 11:55:26.033703 master-0 kubenswrapper[7454]: I0319 11:55:26.033650 7454 scope.go:117] "RemoveContainer" containerID="1f3d2affaef2ec02f4c8910d24c25e42512158f294dc5c7de4fd47923e7552fe" Mar 19 11:55:26.034848 master-0 kubenswrapper[7454]: E0319 11:55:26.034793 7454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f3d2affaef2ec02f4c8910d24c25e42512158f294dc5c7de4fd47923e7552fe\": container with ID starting with 1f3d2affaef2ec02f4c8910d24c25e42512158f294dc5c7de4fd47923e7552fe not found: ID does not exist" containerID="1f3d2affaef2ec02f4c8910d24c25e42512158f294dc5c7de4fd47923e7552fe" Mar 19 11:55:26.034920 master-0 kubenswrapper[7454]: I0319 11:55:26.034845 7454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f3d2affaef2ec02f4c8910d24c25e42512158f294dc5c7de4fd47923e7552fe"} err="failed to get container status \"1f3d2affaef2ec02f4c8910d24c25e42512158f294dc5c7de4fd47923e7552fe\": rpc error: code = NotFound desc = could not find container \"1f3d2affaef2ec02f4c8910d24c25e42512158f294dc5c7de4fd47923e7552fe\": container with ID starting with 1f3d2affaef2ec02f4c8910d24c25e42512158f294dc5c7de4fd47923e7552fe not found: ID does not exist" Mar 19 11:55:26.655529 master-0 kubenswrapper[7454]: I0319 11:55:26.655294 7454 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d664a6d0d2a24360dee10612610f1b59" path="/var/lib/kubelet/pods/d664a6d0d2a24360dee10612610f1b59/volumes" Mar 19 11:55:26.656036 master-0 kubenswrapper[7454]: I0319 11:55:26.655994 7454 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 19 11:55:26.660079 master-0 kubenswrapper[7454]: I0319 11:55:26.660009 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:55:26.660079 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:55:26.660079 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:55:26.660079 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:55:26.660617 master-0 kubenswrapper[7454]: I0319 11:55:26.660122 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:55:26.986899 master-0 kubenswrapper[7454]: I0319 11:55:26.986771 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body= Mar 19 11:55:26.987270 master-0 kubenswrapper[7454]: I0319 11:55:26.986929 7454 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" Mar 19 11:55:27.010660 master-0 kubenswrapper[7454]: I0319 11:55:27.010606 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_4b49f09f-2efa-4657-9f5a-fbddd42bee0d/installer/0.log" Mar 19 11:55:27.010660 master-0 kubenswrapper[7454]: I0319 11:55:27.010665 7454 generic.go:334] "Generic (PLEG): container finished" podID="4b49f09f-2efa-4657-9f5a-fbddd42bee0d" containerID="1f0110e6404807316fe552282de736e25a5c73a98ca28c762d1ca02e35c0a306" exitCode=1 Mar 19 11:55:27.013552 master-0 kubenswrapper[7454]: I0319 11:55:27.013513 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_632bdf3b-0ba0-4874-a2ec-8396683c35c5/installer/0.log" Mar 19 11:55:27.013831 master-0 kubenswrapper[7454]: I0319 11:55:27.013755 7454 generic.go:334] "Generic (PLEG): container finished" podID="632bdf3b-0ba0-4874-a2ec-8396683c35c5" containerID="0db01150a16f0758697f4004ab15abe194def9a3c61ba179de9b9e1316f2ccf4" exitCode=1 Mar 19 11:55:27.659456 master-0 kubenswrapper[7454]: I0319 11:55:27.659351 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:55:27.659456 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:55:27.659456 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:55:27.659456 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:55:27.660029 master-0 kubenswrapper[7454]: I0319 11:55:27.659453 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" 
podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:55:28.660166 master-0 kubenswrapper[7454]: I0319 11:55:28.660063 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:55:28.660166 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:55:28.660166 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:55:28.660166 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:55:28.661195 master-0 kubenswrapper[7454]: I0319 11:55:28.660177 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:55:28.862677 master-0 kubenswrapper[7454]: I0319 11:55:28.862595 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body= Mar 19 11:55:28.862933 master-0 kubenswrapper[7454]: I0319 11:55:28.862679 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" Mar 19 11:55:29.113290 master-0 kubenswrapper[7454]: E0319 11:55:29.113104 7454 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0-master-0.189e3c04eccbe186 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Killing,Message:Stopping container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:54:55.080874374 +0000 UTC m=+64.711340287,LastTimestamp:2026-03-19 11:54:55.080874374 +0000 UTC m=+64.711340287,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:55:29.113735 master-0 kubenswrapper[7454]: I0319 11:55:29.113388 7454 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-master-0" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb9699aa-8885-49ec-a3b3-8c199d95bbf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T11:54:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T11:54:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [setup etcd-ensure-env-vars 
etcd-resources-copy]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T11:54:54Z\\\",\\\"message\\\":\\\"containers with unready status: [etcdctl etcd etcd-metrics etcd-readyz etcd-rev]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T11:54:54Z\\\",\\\"message\\\":\\\"containers with unready status: [etcdctl etcd etcd-metrics etcd-readyz etcd-rev]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T11:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"etcdctl\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.32.10\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.32.10\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"podIP\\\":\\\"192.168.32.10\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.32.10\\\"}],\\\"startTime\\\":\\\"2026-03-19T11:54:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-master-0\": Timeout: request did not complete within requested timeout - context deadline exceeded" Mar 19 11:55:29.263690 master-0 kubenswrapper[7454]: E0319 11:55:29.263577 7454 projected.go:194] Error preparing data for projected volume kube-api-access-jnp9l for pod openshift-marketplace/redhat-marketplace-cjgpg: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 19 11:55:29.263690 master-0 kubenswrapper[7454]: E0319 11:55:29.263696 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0ed7eded-1e67-49ad-9777-c2ed1e006ce3-kube-api-access-jnp9l podName:0ed7eded-1e67-49ad-9777-c2ed1e006ce3 nodeName:}" failed. No retries permitted until 2026-03-19 11:55:29.763666193 +0000 UTC m=+99.394132136 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jnp9l" (UniqueName: "kubernetes.io/projected/0ed7eded-1e67-49ad-9777-c2ed1e006ce3-kube-api-access-jnp9l") pod "redhat-marketplace-cjgpg" (UID: "0ed7eded-1e67-49ad-9777-c2ed1e006ce3") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 19 11:55:29.659709 master-0 kubenswrapper[7454]: I0319 11:55:29.659624 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:55:29.659709 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:55:29.659709 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:55:29.659709 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:55:29.660028 master-0 kubenswrapper[7454]: I0319 11:55:29.659717 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:55:29.838064 master-0 kubenswrapper[7454]: I0319 11:55:29.837902 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnp9l\" (UniqueName: \"kubernetes.io/projected/0ed7eded-1e67-49ad-9777-c2ed1e006ce3-kube-api-access-jnp9l\") pod \"redhat-marketplace-cjgpg\" (UID: \"0ed7eded-1e67-49ad-9777-c2ed1e006ce3\") " pod="openshift-marketplace/redhat-marketplace-cjgpg" Mar 19 11:55:29.986162 master-0 kubenswrapper[7454]: I0319 11:55:29.986045 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body= Mar 19 11:55:29.986162 master-0 kubenswrapper[7454]: I0319 11:55:29.986133 7454 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" Mar 19 11:55:30.659776 master-0 kubenswrapper[7454]: I0319 11:55:30.659686 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:55:30.659776 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:55:30.659776 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:55:30.659776 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:55:30.660544 master-0 kubenswrapper[7454]: I0319 11:55:30.660461 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:55:31.660648 master-0 kubenswrapper[7454]: I0319 11:55:31.660610 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: 
Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:55:31.660648 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:55:31.660648 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:55:31.660648 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:55:31.661440 master-0 kubenswrapper[7454]: I0319 11:55:31.661408 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:55:31.862914 master-0 kubenswrapper[7454]: I0319 11:55:31.862848 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body= Mar 19 11:55:31.863152 master-0 kubenswrapper[7454]: I0319 11:55:31.862919 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" Mar 19 11:55:32.651324 master-0 kubenswrapper[7454]: E0319 11:55:32.651083 7454 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 11:55:32.661769 master-0 kubenswrapper[7454]: I0319 11:55:32.661725 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:55:32.661769 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:55:32.661769 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:55:32.661769 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:55:32.662287 master-0 kubenswrapper[7454]: I0319 11:55:32.661783 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:55:33.660869 master-0 kubenswrapper[7454]: I0319 11:55:33.660826 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:55:33.660869 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:55:33.660869 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:55:33.660869 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:55:33.661132 master-0 kubenswrapper[7454]: I0319 11:55:33.660885 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:55:34.555931 master-0 kubenswrapper[7454]: I0319 11:55:34.555811 7454 patch_prober.go:28] interesting pod/authentication-operator-5885bfd7f4-pkgvq container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body= Mar 19 11:55:34.555931 master-0 kubenswrapper[7454]: I0319 11:55:34.555881 7454 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" podUID="d3017b5e-178e-49de-89d2-817a18398203" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" Mar 19 11:55:34.660760 master-0 kubenswrapper[7454]: I0319 11:55:34.660670 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:55:34.660760 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:55:34.660760 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:55:34.660760 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:55:34.660760 master-0 kubenswrapper[7454]: I0319 11:55:34.660761 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:55:34.743820 master-0 kubenswrapper[7454]: E0319 11:55:34.743713 7454 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 11:55:34.862395 master-0 kubenswrapper[7454]: I0319 11:55:34.862246 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body= Mar 19 11:55:34.862395 master-0 kubenswrapper[7454]: I0319 11:55:34.862321 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" Mar 19 11:55:35.342049 master-0 kubenswrapper[7454]: I0319 11:55:35.341963 7454 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 11:55:35.659663 master-0 kubenswrapper[7454]: I0319 11:55:35.659525 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:55:35.659663 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:55:35.659663 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:55:35.659663 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:55:35.659663 master-0 kubenswrapper[7454]: I0319 11:55:35.659599 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:55:36.658915 master-0 kubenswrapper[7454]: I0319 11:55:36.658787 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:55:36.658915 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:55:36.658915 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:55:36.658915 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:55:36.659443 master-0 kubenswrapper[7454]: I0319 11:55:36.659404 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:55:37.660555 master-0 kubenswrapper[7454]: I0319 11:55:37.660438 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:55:37.660555 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:55:37.660555 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:55:37.660555 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:55:37.661761 master-0 kubenswrapper[7454]: I0319 11:55:37.660571 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:55:37.862140 master-0 kubenswrapper[7454]: I0319 11:55:37.862069 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body= Mar 19 11:55:37.862140 master-0 kubenswrapper[7454]: I0319 11:55:37.862142 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" Mar 19 11:55:38.660175 master-0 kubenswrapper[7454]: I0319 11:55:38.660095 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:55:38.660175 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:55:38.660175 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:55:38.660175 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:55:38.660175 master-0 kubenswrapper[7454]: I0319 11:55:38.660170 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:55:39.117246 master-0 kubenswrapper[7454]: I0319 11:55:39.117162 7454 generic.go:334] "Generic (PLEG): container finished" podID="0f97d998-530c-4d9d-a030-ca1d9d2d4490" containerID="fe8804b9f205d5f40aba452ae8167e7ca2d2057bbd5a93b9e42d8ec2d88c8b07" exitCode=0 Mar 19 11:55:39.658520 master-0 kubenswrapper[7454]: I0319 11:55:39.658448 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:55:39.658520 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:55:39.658520 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:55:39.658520 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:55:39.658887 master-0 kubenswrapper[7454]: I0319 11:55:39.658538 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:55:40.659357 master-0 kubenswrapper[7454]: I0319 11:55:40.659266 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:55:40.659357 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:55:40.659357 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:55:40.659357 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:55:40.660533 master-0 kubenswrapper[7454]: I0319 11:55:40.659374 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:55:40.864888 master-0 kubenswrapper[7454]: I0319 11:55:40.862463 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body= Mar 19 11:55:40.864888 master-0 kubenswrapper[7454]: I0319 11:55:40.862561 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" Mar 19 11:55:41.137216 master-0 kubenswrapper[7454]: 
I0319 11:55:41.137148 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-wd4nx_8414b6b0-ee16-47a5-982b-ee58b136cfcf/approver/0.log" Mar 19 11:55:41.137733 master-0 kubenswrapper[7454]: I0319 11:55:41.137678 7454 generic.go:334] "Generic (PLEG): container finished" podID="8414b6b0-ee16-47a5-982b-ee58b136cfcf" containerID="acd01abcc3b9701b51c684ecc460502246e3fa79a2f3e8b56cc2aec4e47bef9f" exitCode=1 Mar 19 11:55:41.659722 master-0 kubenswrapper[7454]: I0319 11:55:41.659573 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:55:41.659722 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:55:41.659722 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:55:41.659722 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:55:41.659722 master-0 kubenswrapper[7454]: I0319 11:55:41.659639 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:55:42.652027 master-0 kubenswrapper[7454]: E0319 11:55:42.651930 7454 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 11:55:42.659513 master-0 kubenswrapper[7454]: I0319 11:55:42.659455 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:55:42.659513 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:55:42.659513 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:55:42.659513 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:55:42.659769 master-0 kubenswrapper[7454]: I0319 11:55:42.659531 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:55:43.659660 master-0 kubenswrapper[7454]: I0319 11:55:43.659554 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:55:43.659660 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:55:43.659660 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:55:43.659660 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:55:43.659660 master-0 kubenswrapper[7454]: I0319 11:55:43.659657 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:55:43.863142 master-0 kubenswrapper[7454]: I0319 
11:55:43.863048 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body=
Mar 19 11:55:43.863142 master-0 kubenswrapper[7454]: I0319 11:55:43.863131 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused"
Mar 19 11:55:44.156942 master-0 kubenswrapper[7454]: I0319 11:55:44.156882 7454 generic.go:334] "Generic (PLEG): container finished" podID="d3017b5e-178e-49de-89d2-817a18398203" containerID="ec99e0001708bd8c36619c411325f2d4bdab0ecd7770deeae64fffd8bdf90881" exitCode=0
Mar 19 11:55:44.158454 master-0 kubenswrapper[7454]: I0319 11:55:44.158414 7454 generic.go:334] "Generic (PLEG): container finished" podID="2151eb84-177e-459c-be71-f48465323ac2" containerID="76df0534cc0fd6a5cc55f7565b57a91fd38d7e12169a76c5133f215b1479d2db" exitCode=0
Mar 19 11:55:44.659948 master-0 kubenswrapper[7454]: I0319 11:55:44.659868 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:55:44.659948 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:55:44.659948 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:55:44.659948 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:55:44.660966 master-0 kubenswrapper[7454]: I0319 11:55:44.659964 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:55:44.745087 master-0 kubenswrapper[7454]: E0319 11:55:44.744620 7454 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 19 11:55:45.343246 master-0 kubenswrapper[7454]: I0319 11:55:45.343184 7454 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 11:55:45.610569 master-0 kubenswrapper[7454]: I0319 11:55:45.610290 7454 patch_prober.go:28] interesting pod/etcd-master-0 container/etcd namespace/openshift-etcd: Startup probe status=failure output="Get \"https://192.168.32.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 11:55:45.610569 master-0 kubenswrapper[7454]: I0319 11:55:45.610466 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" probeResult="failure" output="Get \"https://192.168.32.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 19 11:55:45.659771 master-0 kubenswrapper[7454]: I0319 11:55:45.659679 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:55:45.659771 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:55:45.659771 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:55:45.659771 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:55:45.660251 master-0 kubenswrapper[7454]: I0319 11:55:45.659779 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:55:46.659636 master-0 kubenswrapper[7454]: I0319 11:55:46.659543 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:55:46.659636 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:55:46.659636 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:55:46.659636 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:55:46.660012 master-0 kubenswrapper[7454]: I0319 11:55:46.659657 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:55:46.862378 master-0 kubenswrapper[7454]: I0319 11:55:46.862274 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body=
Mar 19 11:55:46.862378 master-0 kubenswrapper[7454]: I0319 11:55:46.862349 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused"
Mar 19 11:55:47.660093 master-0 kubenswrapper[7454]: I0319 11:55:47.659995 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:55:47.660093 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:55:47.660093 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:55:47.660093 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:55:47.660093 master-0 kubenswrapper[7454]: I0319 11:55:47.660078 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:55:48.181648 master-0 kubenswrapper[7454]: I0319 11:55:48.181574 7454 generic.go:334] "Generic (PLEG): container finished" podID="f08c5930-44f0-48e4-80dd-2563f2733b2f" containerID="41d4637f09562b9b79d583fb65c9acfd7f81986cff143ad48c1c09b266f39b23" exitCode=0
Mar 19 11:55:48.659840 master-0 kubenswrapper[7454]: I0319 11:55:48.659726 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:55:48.659840 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:55:48.659840 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:55:48.659840 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:55:48.659840 master-0 kubenswrapper[7454]: I0319 11:55:48.659828 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:55:49.189116 master-0 kubenswrapper[7454]: I0319 11:55:49.189046 7454 generic.go:334] "Generic (PLEG): container finished" podID="9702fc8c-4fe0-413b-b2d4-db23021d42b8" containerID="6c3d43a01987e52cadf8e3819b9c184c46b6535cb510d14c96117eed3c48a981" exitCode=0
Mar 19 11:55:49.660554 master-0 kubenswrapper[7454]: I0319 11:55:49.660471 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:55:49.660554 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:55:49.660554 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:55:49.660554 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:55:49.661313 master-0 kubenswrapper[7454]: I0319 11:55:49.660552 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:55:49.862584 master-0 kubenswrapper[7454]: I0319 11:55:49.862482 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body=
Mar 19 11:55:49.862956 master-0 kubenswrapper[7454]: I0319 11:55:49.862576 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused"
Mar 19 11:55:50.659438 master-0 kubenswrapper[7454]: I0319 11:55:50.659350 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:55:50.659438 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:55:50.659438 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:55:50.659438 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:55:50.659438 master-0 kubenswrapper[7454]: I0319 11:55:50.659407 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:55:51.659359 master-0 kubenswrapper[7454]: I0319 11:55:51.659288 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:55:51.659359 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:55:51.659359 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:55:51.659359 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:55:51.660529 master-0 kubenswrapper[7454]: I0319 11:55:51.659384 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:55:52.652843 master-0 kubenswrapper[7454]: E0319 11:55:52.652713 7454 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 19 11:55:52.652843 master-0 kubenswrapper[7454]: E0319 11:55:52.652769 7454 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 19 11:55:52.659287 master-0 kubenswrapper[7454]: I0319 11:55:52.659234 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:55:52.659287 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:55:52.659287 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:55:52.659287 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:55:52.659435 master-0 kubenswrapper[7454]: I0319 11:55:52.659308 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:55:52.862938 master-0 kubenswrapper[7454]: I0319 11:55:52.862872 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body=
Mar 19 11:55:52.862938 master-0 kubenswrapper[7454]: I0319 11:55:52.862938 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused"
Mar 19 11:55:54.199634 master-0 kubenswrapper[7454]: I0319 11:55:54.199580 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:55:54.199634 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:55:54.199634 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:55:54.199634 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:55:54.200387 master-0 kubenswrapper[7454]: I0319 11:55:54.199649 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:55:54.312953 master-0 kubenswrapper[7454]: E0319 11:55:54.312896 7454 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod284768b8_9d70_4cf7_bace_8adc6b587186.slice/crio-conmon-4a5b36532ee146a92740f77707f5b0a6a8c33bb89c0054e1d9177bfea2033a2d.scope\": RecentStats: unable to find data in memory cache]"
Mar 19 11:55:54.370192 master-0 kubenswrapper[7454]: I0319 11:55:54.370109 7454 generic.go:334] "Generic (PLEG): container finished" podID="d9ab6ec4-eec9-4d27-8b43-2aaf954f098f" containerID="9dbaaa2ce519ab256717766bb8d971f864766afcc411753d09c087dd190cf903" exitCode=0
Mar 19 11:55:54.371907 master-0 kubenswrapper[7454]: I0319 11:55:54.371879 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-nb8bk_284768b8-9d70-4cf7-bace-8adc6b587186/network-operator/0.log"
Mar 19 11:55:54.371987 master-0 kubenswrapper[7454]: I0319 11:55:54.371915 7454 generic.go:334] "Generic (PLEG): container finished" podID="284768b8-9d70-4cf7-bace-8adc6b587186" containerID="4a5b36532ee146a92740f77707f5b0a6a8c33bb89c0054e1d9177bfea2033a2d" exitCode=255
Mar 19 11:55:54.373954 master-0 kubenswrapper[7454]: I0319 11:55:54.373927 7454 generic.go:334] "Generic (PLEG): container finished" podID="1089ea24-add9-482e-9276-e6ded12052d7" containerID="a04e94059c93f3fb95feb69e0b122c65aebac1f390cdd0cf514b18a508325ef8" exitCode=0
Mar 19 11:55:54.375391 master-0 kubenswrapper[7454]: I0319 11:55:54.375359 7454 generic.go:334] "Generic (PLEG): container finished" podID="06df1b1b-154e-46f9-aee0-79a137c6c928" containerID="136228bc884d9d84e6c34125e85b6f53a4eb9c869542bab1b85def5ce8ff08ff" exitCode=0
Mar 19 11:55:54.659876 master-0 kubenswrapper[7454]: I0319 11:55:54.659769 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:55:54.659876 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:55:54.659876 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:55:54.659876 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:55:54.659876 master-0 kubenswrapper[7454]: I0319 11:55:54.659867 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:55:54.746466 master-0 kubenswrapper[7454]: E0319 11:55:54.745384 7454 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 19 11:55:54.746466 master-0 kubenswrapper[7454]: I0319 11:55:54.745435 7454 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Mar 19 11:55:55.343000 master-0 kubenswrapper[7454]: I0319 11:55:55.342898 7454 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 11:55:55.610564 master-0 kubenswrapper[7454]: I0319 11:55:55.610184 7454 patch_prober.go:28] interesting pod/etcd-master-0 container/etcd namespace/openshift-etcd: Startup probe status=failure output="Get \"https://192.168.32.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 11:55:55.610564 master-0 kubenswrapper[7454]: I0319 11:55:55.610292 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" probeResult="failure" output="Get \"https://192.168.32.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 19 11:55:55.660374 master-0 kubenswrapper[7454]: I0319 11:55:55.660324 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:55:55.660374 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:55:55.660374 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:55:55.660374 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:55:55.660821 master-0 kubenswrapper[7454]: I0319 11:55:55.660388 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:55:55.862177 master-0 kubenswrapper[7454]: I0319 11:55:55.862049 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body=
Mar 19 11:55:55.862177 master-0 kubenswrapper[7454]: I0319 11:55:55.862132 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused"
Mar 19 11:55:56.660245 master-0 kubenswrapper[7454]: I0319 11:55:56.660162 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:55:56.660245 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:55:56.660245 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:55:56.660245 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:55:56.660820 master-0 kubenswrapper[7454]: I0319 11:55:56.660325 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:55:57.660699 master-0 kubenswrapper[7454]: I0319 11:55:57.660607 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:55:57.660699 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:55:57.660699 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:55:57.660699 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:55:57.661692 master-0 kubenswrapper[7454]: I0319 11:55:57.660699 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:55:58.400331 master-0 kubenswrapper[7454]: I0319 11:55:58.400173 7454 generic.go:334] "Generic (PLEG): container finished" podID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerID="1dd2940995583a19410f74ab256d2834a4c83d4ba579f4590af5fea605682788" exitCode=0
Mar 19 11:55:58.404176 master-0 kubenswrapper[7454]: I0319 11:55:58.404107 7454 generic.go:334] "Generic (PLEG): container finished" podID="c2dbd8b3-0e02-4747-a166-80aa6a94b060" containerID="697b28a330e52c45053a0bb858d1df6049dfd854ab75b1f95587cbc7874588cd" exitCode=0
Mar 19 11:55:58.660787 master-0 kubenswrapper[7454]: I0319 11:55:58.660559 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:55:58.660787 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:55:58.660787 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:55:58.660787 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:55:58.660787 master-0 kubenswrapper[7454]: I0319 11:55:58.660692 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:55:58.862741 master-0 kubenswrapper[7454]: I0319 11:55:58.862656 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body=
Mar 19 11:55:58.862741 master-0 kubenswrapper[7454]: I0319 11:55:58.862726 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused"
Mar 19 11:55:59.659894 master-0 kubenswrapper[7454]: I0319 11:55:59.659784 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:55:59.659894 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:55:59.659894 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:55:59.659894 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:55:59.660247 master-0 kubenswrapper[7454]: I0319 11:55:59.659924 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:56:00.659304 master-0 kubenswrapper[7454]: I0319 11:56:00.659227 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:56:00.659304 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:56:00.659304 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:56:00.659304 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:56:00.659304 master-0 kubenswrapper[7454]: I0319 11:56:00.659291 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:56:00.660672 master-0 kubenswrapper[7454]: E0319 11:56:00.659600 7454 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0"
Mar 19 11:56:00.660672 master-0 kubenswrapper[7454]: E0319 11:56:00.659754 7454 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.027s"
Mar 19 11:56:00.660672 master-0 kubenswrapper[7454]: I0319 11:56:00.659773 7454 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4"
Mar 19 11:56:00.660672 master-0 kubenswrapper[7454]: I0319 11:56:00.659909 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4"
Mar 19 11:56:00.664312 master-0 kubenswrapper[7454]: I0319 11:56:00.662235 7454 scope.go:117] "RemoveContainer" containerID="1dd2940995583a19410f74ab256d2834a4c83d4ba579f4590af5fea605682788"
Mar 19 11:56:00.664312 master-0 kubenswrapper[7454]: I0319 11:56:00.663399 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 11:56:00.664312 master-0 kubenswrapper[7454]: I0319 11:56:00.663435 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p225c"
Mar 19 11:56:00.664312 master-0 kubenswrapper[7454]: I0319 11:56:00.664092 7454 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"84d3d820222fb615bbd44a77bcef4de4c96b78f545a4bc5490f5fa77f0e958e7"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Mar 19 11:56:00.664312 master-0 kubenswrapper[7454]: I0319 11:56:00.664191 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" containerID="cri-o://84d3d820222fb615bbd44a77bcef4de4c96b78f545a4bc5490f5fa77f0e958e7" gracePeriod=30
Mar 19 11:56:00.681599 master-0 kubenswrapper[7454]: I0319 11:56:00.681558 7454 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Mar 19 11:56:01.427694 master-0 kubenswrapper[7454]: I0319 11:56:01.427615 7454 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="84d3d820222fb615bbd44a77bcef4de4c96b78f545a4bc5490f5fa77f0e958e7" exitCode=2
Mar 19 11:56:01.662940 master-0 kubenswrapper[7454]: I0319 11:56:01.659149 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:56:01.662940 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:56:01.662940 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:56:01.662940 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:56:01.662940 master-0 kubenswrapper[7454]: I0319 11:56:01.659200 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:56:01.741376 master-0 kubenswrapper[7454]: I0319 11:56:01.741318 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_632bdf3b-0ba0-4874-a2ec-8396683c35c5/installer/0.log"
Mar 19 11:56:01.741548 master-0 kubenswrapper[7454]: I0319 11:56:01.741397 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Mar 19 11:56:01.777156 master-0 kubenswrapper[7454]: I0319 11:56:01.777088 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_4b49f09f-2efa-4657-9f5a-fbddd42bee0d/installer/0.log"
Mar 19 11:56:01.777156 master-0 kubenswrapper[7454]: I0319 11:56:01.777171 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 19 11:56:01.801362 master-0 kubenswrapper[7454]: I0319 11:56:01.801289 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/632bdf3b-0ba0-4874-a2ec-8396683c35c5-var-lock\") pod \"632bdf3b-0ba0-4874-a2ec-8396683c35c5\" (UID: \"632bdf3b-0ba0-4874-a2ec-8396683c35c5\") "
Mar 19 11:56:01.801362 master-0 kubenswrapper[7454]: I0319 11:56:01.801358 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/632bdf3b-0ba0-4874-a2ec-8396683c35c5-kube-api-access\") pod \"632bdf3b-0ba0-4874-a2ec-8396683c35c5\" (UID: \"632bdf3b-0ba0-4874-a2ec-8396683c35c5\") "
Mar 19 11:56:01.801606 master-0 kubenswrapper[7454]: I0319 11:56:01.801386 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/632bdf3b-0ba0-4874-a2ec-8396683c35c5-kubelet-dir\") pod \"632bdf3b-0ba0-4874-a2ec-8396683c35c5\" (UID: \"632bdf3b-0ba0-4874-a2ec-8396683c35c5\") "
Mar 19 11:56:01.801606 master-0 kubenswrapper[7454]: I0319 11:56:01.801406 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4b49f09f-2efa-4657-9f5a-fbddd42bee0d-var-lock\") pod \"4b49f09f-2efa-4657-9f5a-fbddd42bee0d\" (UID: \"4b49f09f-2efa-4657-9f5a-fbddd42bee0d\") "
Mar 19 11:56:01.801606 master-0 kubenswrapper[7454]: I0319 11:56:01.801430 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b49f09f-2efa-4657-9f5a-fbddd42bee0d-kube-api-access\") pod \"4b49f09f-2efa-4657-9f5a-fbddd42bee0d\" (UID: \"4b49f09f-2efa-4657-9f5a-fbddd42bee0d\") "
Mar 19 11:56:01.801606 master-0 kubenswrapper[7454]: I0319 11:56:01.801456 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b49f09f-2efa-4657-9f5a-fbddd42bee0d-kubelet-dir\") pod \"4b49f09f-2efa-4657-9f5a-fbddd42bee0d\" (UID: \"4b49f09f-2efa-4657-9f5a-fbddd42bee0d\") "
Mar 19 11:56:01.801722 master-0 kubenswrapper[7454]: I0319 11:56:01.801644 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b49f09f-2efa-4657-9f5a-fbddd42bee0d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4b49f09f-2efa-4657-9f5a-fbddd42bee0d" (UID: "4b49f09f-2efa-4657-9f5a-fbddd42bee0d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 11:56:01.801722 master-0 kubenswrapper[7454]: I0319 11:56:01.801679 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/632bdf3b-0ba0-4874-a2ec-8396683c35c5-var-lock" (OuterVolumeSpecName: "var-lock") pod "632bdf3b-0ba0-4874-a2ec-8396683c35c5" (UID: "632bdf3b-0ba0-4874-a2ec-8396683c35c5"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 11:56:01.802054 master-0 kubenswrapper[7454]: I0319 11:56:01.802030 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b49f09f-2efa-4657-9f5a-fbddd42bee0d-var-lock" (OuterVolumeSpecName: "var-lock") pod "4b49f09f-2efa-4657-9f5a-fbddd42bee0d" (UID: "4b49f09f-2efa-4657-9f5a-fbddd42bee0d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 11:56:01.802094 master-0 kubenswrapper[7454]: I0319 11:56:01.802056 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/632bdf3b-0ba0-4874-a2ec-8396683c35c5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "632bdf3b-0ba0-4874-a2ec-8396683c35c5" (UID: "632bdf3b-0ba0-4874-a2ec-8396683c35c5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 11:56:01.804788 master-0 kubenswrapper[7454]: I0319 11:56:01.804756 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/632bdf3b-0ba0-4874-a2ec-8396683c35c5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "632bdf3b-0ba0-4874-a2ec-8396683c35c5" (UID: "632bdf3b-0ba0-4874-a2ec-8396683c35c5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 11:56:01.804907 master-0 kubenswrapper[7454]: I0319 11:56:01.804895 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b49f09f-2efa-4657-9f5a-fbddd42bee0d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4b49f09f-2efa-4657-9f5a-fbddd42bee0d" (UID: "4b49f09f-2efa-4657-9f5a-fbddd42bee0d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 11:56:01.902783 master-0 kubenswrapper[7454]: I0319 11:56:01.902684 7454 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b49f09f-2efa-4657-9f5a-fbddd42bee0d-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 19 11:56:01.903056 master-0 kubenswrapper[7454]: I0319 11:56:01.902775 7454 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/632bdf3b-0ba0-4874-a2ec-8396683c35c5-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 19 11:56:01.903056 master-0 kubenswrapper[7454]: I0319 11:56:01.902858 7454 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/632bdf3b-0ba0-4874-a2ec-8396683c35c5-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 19 11:56:01.903056 master-0 kubenswrapper[7454]: I0319 11:56:01.902878 7454 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/632bdf3b-0ba0-4874-a2ec-8396683c35c5-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 19 11:56:01.903056 master-0 kubenswrapper[7454]: I0319 11:56:01.902895 7454 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4b49f09f-2efa-4657-9f5a-fbddd42bee0d-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 19 11:56:01.903056 master-0 kubenswrapper[7454]: I0319 11:56:01.902941 7454 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b49f09f-2efa-4657-9f5a-fbddd42bee0d-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 19 11:56:02.441561 master-0 kubenswrapper[7454]: I0319 11:56:02.441503 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_632bdf3b-0ba0-4874-a2ec-8396683c35c5/installer/0.log"
Mar 19 11:56:02.441784 master-0 kubenswrapper[7454]: I0319 11:56:02.441714 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Mar 19 11:56:02.443603 master-0 kubenswrapper[7454]: I0319 11:56:02.443560 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_4b49f09f-2efa-4657-9f5a-fbddd42bee0d/installer/0.log"
Mar 19 11:56:02.443719 master-0 kubenswrapper[7454]: I0319 11:56:02.443696 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 19 11:56:02.660365 master-0 kubenswrapper[7454]: I0319 11:56:02.660274 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:56:02.660365 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:56:02.660365 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:56:02.660365 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:56:02.660365 master-0 kubenswrapper[7454]: I0319 11:56:02.660374 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:56:03.116173 master-0 kubenswrapper[7454]: E0319 11:56:03.115915 7454 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{machine-config-server-g7mqg.189e3c04fa816801 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:machine-config-server-g7mqg,UID:0e25d4ed-4ad0-4706-ad25-7822c9a1d07e,APIVersion:v1,ResourceVersion:8882,FieldPath:spec.containers{machine-config-server},},Reason:Created,Message:Created container: machine-config-server,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:54:55.310874625 +0000 UTC m=+64.941340538,LastTimestamp:2026-03-19 11:54:55.310874625 +0000 UTC m=+64.941340538,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 19 11:56:03.454435 master-0 kubenswrapper[7454]: I0319 11:56:03.454231 7454 generic.go:334] "Generic (PLEG): container finished" podID="661b8957-a890-4032-9e57-45e2e0b35249" containerID="48511943c8e0f8f2cb56a0dbe005be6b65b3cfab069bdef05e341ca254849587" exitCode=0
Mar 19 11:56:03.659916 master-0 kubenswrapper[7454]: I0319 11:56:03.659773 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:56:03.659916 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:56:03.659916 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:56:03.659916 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:56:03.659916 master-0 kubenswrapper[7454]: I0319 11:56:03.659902 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:56:03.841018 master-0 kubenswrapper[7454]: E0319 11:56:03.840869 7454 projected.go:194] Error preparing data for projected volume kube-api-access-jnp9l for pod openshift-marketplace/redhat-marketplace-cjgpg: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 19 11:56:03.841018 master-0 kubenswrapper[7454]: E0319 11:56:03.840998 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0ed7eded-1e67-49ad-9777-c2ed1e006ce3-kube-api-access-jnp9l podName:0ed7eded-1e67-49ad-9777-c2ed1e006ce3 nodeName:}" failed. No retries permitted until 2026-03-19 11:56:04.840968569 +0000 UTC m=+134.471434522 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-jnp9l" (UniqueName: "kubernetes.io/projected/0ed7eded-1e67-49ad-9777-c2ed1e006ce3-kube-api-access-jnp9l") pod "redhat-marketplace-cjgpg" (UID: "0ed7eded-1e67-49ad-9777-c2ed1e006ce3") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 19 11:56:04.660474 master-0 kubenswrapper[7454]: I0319 11:56:04.660382 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:56:04.660474 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:56:04.660474 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:56:04.660474 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:56:04.660474 master-0 kubenswrapper[7454]: I0319 11:56:04.660474 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:56:04.746568 master-0 kubenswrapper[7454]: E0319 11:56:04.746437 7454 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms"
Mar 19 11:56:04.849541 master-0 kubenswrapper[7454]: I0319 11:56:04.849477 7454 patch_prober.go:28] interesting pod/etcd-operator-8544cbcf9c-sc4kz container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body=
Mar 19 11:56:04.849541 master-0 kubenswrapper[7454]: I0319 11:56:04.849535 7454 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" podUID="9702fc8c-4fe0-413b-b2d4-db23021d42b8" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused"
Mar 19 11:56:04.862722 master-0 kubenswrapper[7454]: I0319 11:56:04.862628 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body=
Mar 19 11:56:04.862722 master-0 kubenswrapper[7454]: I0319 11:56:04.862696 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused"
Mar 19 11:56:04.937979 master-0 kubenswrapper[7454]: I0319 11:56:04.937816 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnp9l\" (UniqueName: \"kubernetes.io/projected/0ed7eded-1e67-49ad-9777-c2ed1e006ce3-kube-api-access-jnp9l\") pod \"redhat-marketplace-cjgpg\" (UID: \"0ed7eded-1e67-49ad-9777-c2ed1e006ce3\") " pod="openshift-marketplace/redhat-marketplace-cjgpg"
Mar 19 11:56:05.609813 master-0 kubenswrapper[7454]: I0319 11:56:05.609743 7454 patch_prober.go:28] interesting pod/etcd-master-0 container/etcd namespace/openshift-etcd: Startup probe status=failure output="Get \"https://192.168.32.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 11:56:05.610043 master-0 kubenswrapper[7454]: I0319 11:56:05.609870 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" probeResult="failure" output="Get \"https://192.168.32.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 19 11:56:05.659238 master-0 kubenswrapper[7454]: I0319 11:56:05.659148 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:56:05.659238 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:56:05.659238 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:56:05.659238 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:56:05.659550 master-0 kubenswrapper[7454]: I0319 11:56:05.659254 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:56:05.986947 master-0 kubenswrapper[7454]: I0319 11:56:05.986769 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body=
Mar 19 11:56:05.987675 master-0 kubenswrapper[7454]: I0319 11:56:05.986964 7454 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused"
Mar 19 11:56:06.475986 master-0 kubenswrapper[7454]: I0319 11:56:06.475933 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_2e4442dc-19e2-42a3-b5d9-7af7765b1939/installer/0.log"
Mar 19 11:56:06.476347 master-0 kubenswrapper[7454]: I0319 11:56:06.476309 7454 generic.go:334] "Generic (PLEG): container finished" podID="2e4442dc-19e2-42a3-b5d9-7af7765b1939" containerID="01fb0bb7c58b7c7fb9f4e6423408b3fdefa74b9c0303c15e18382b768dd8f028" exitCode=1
Mar 19 11:56:06.659908 master-0 kubenswrapper[7454]: I0319 11:56:06.659737 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:56:06.659908 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:56:06.659908 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:56:06.659908 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:56:06.660438 master-0 kubenswrapper[7454]: I0319 11:56:06.660390 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:56:07.660306 master-0 kubenswrapper[7454]: I0319 11:56:07.660206 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:56:07.660306 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:56:07.660306 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:56:07.660306 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:56:07.660306 master-0 kubenswrapper[7454]: I0319 11:56:07.660293 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:56:07.862483 master-0 kubenswrapper[7454]: I0319 11:56:07.862397 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body=
Mar 19 11:56:07.862483 master-0 kubenswrapper[7454]: I0319 11:56:07.862477 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused"
Mar 19 11:56:08.659346 master-0 kubenswrapper[7454]: I0319 11:56:08.659295 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:56:08.659346 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:56:08.659346 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:56:08.659346 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:56:08.659676 master-0 kubenswrapper[7454]: I0319 11:56:08.659356 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:56:08.986309 master-0 kubenswrapper[7454]: I0319 11:56:08.986226 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body=
Mar 19 11:56:08.986309 master-0 kubenswrapper[7454]: I0319 11:56:08.986284 7454 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused"
Mar 19 11:56:09.659677 master-0 kubenswrapper[7454]: I0319 11:56:09.659585 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:56:09.659677 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:56:09.659677 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:56:09.659677 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:56:09.659677 master-0 kubenswrapper[7454]: I0319 11:56:09.659660 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:56:10.660011 master-0 kubenswrapper[7454]: I0319 11:56:10.659844 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:56:10.660011 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:56:10.660011 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:56:10.660011 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:56:10.660011 master-0 kubenswrapper[7454]: I0319 11:56:10.660009 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:56:10.862167 master-0 kubenswrapper[7454]: I0319 11:56:10.862084 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body=
Mar 19 11:56:10.862167 master-0 kubenswrapper[7454]: I0319 11:56:10.862160 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused"
Mar 19 11:56:11.659101 master-0 kubenswrapper[7454]: I0319 11:56:11.659006 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:56:11.659101 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:56:11.659101 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:56:11.659101 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:56:11.659101 master-0 kubenswrapper[7454]: I0319 11:56:11.659077 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:56:11.986588 master-0 kubenswrapper[7454]: I0319 11:56:11.986503 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body=
Mar 19 11:56:11.987234 master-0 kubenswrapper[7454]: I0319 11:56:11.986584 7454 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused"
Mar 19 11:56:12.660007 master-0 kubenswrapper[7454]: I0319 11:56:12.659900 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:56:12.660007 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:56:12.660007 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:56:12.660007 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:56:12.660504 master-0 kubenswrapper[7454]: I0319 11:56:12.660019 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:56:13.054077 master-0 kubenswrapper[7454]: E0319 11:56:13.053729 7454 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T11:56:03Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T11:56:03Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T11:56:03Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T11:56:03Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:3ea089ab116e164d89b46dc077f87d9af22f525bc2d69403214f77ee3fd30161\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:d9cbffb5a2fd538c8f19b7174d2906286acdb37a574b9dce3f9da302074591ff\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1746416849},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7\\\"],\\\"sizeBytes\\\":1637455533},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:c9f7bbe4799eaacbfbb60eb906000d7a813a580d6a9740def7da774cbc4cf859\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:cde1da53dadc54c24c10cab8fd3e67839ce68c33ec3b556c255a79167881966a\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1252053726},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36\\\"],\\\"sizeBytes\\\":1238100502},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:aefc421cf2f5dba925f7c149d56ce14e910fbd969a4e22b5917fc912ca33a5b2\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:da1ee8c9ae2cb275833f329b3d793a9109915be16d938f208ec917b50d9dd66a\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1223644894},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015\\\"],\\\"sizeBytes\\\":991832673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\\\"],\\\"sizeBytes\\\":943841779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1\\\"],\\\"sizeBytes\\\":918289953},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a09f5a3ba4f60cce0145769509bab92553c8075d210af4ac058965d2ae11efa\\\"],\\\"sizeBytes\\\":876160834},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578\\\"],\\\"sizeBytes\\\":862657321},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a9e8da5c6114f062b814936d4db7a47a04d248e160d6bb28ad4e4a081496ee4\\\"],\\\"sizeBytes\\\":772943435},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e1faad2d9167d84e23585c1cea5962301845548043cf09578f943f79ca98016\\\"],\\\"sizeBytes\\\":687949580},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5e12e4dc52214d3ada5ba5106caebe079eac1d9292c2571a5fe83411ce8e900d\\\"],\\\"sizeBytes\\\":683195416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa5e782406f71c048b1ac3a4bf5d1227ff4be81111114083ad4c7a209c6bfb5a\\\"],\\\"sizeBytes\\\":677942383},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f98933166f69ce579c2f35b8db03d21e4c34fc544553e4\\\"],\\\"sizeBytes\\\":621648710},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae50e496bd6ae2d27298d997470b7cb0a426eeb8b7e2e9c7187a34cb03993998\\\"],\\\"sizeBytes\\\":589386806},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55\\\"],\\\"sizeBytes\\\":582154903},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982\\\"],\\\"sizeBytes\\\":558211175},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7af9f5c5af9d529840233ef4b519120cc0e3f14c4fe28cc43b0823f2c11d8f89\\\"],\\\"sizeBytes\\\":548752816},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\\\"],\\\"sizeBytes\\\":529326739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b4c9cf268bb7abef7af187cd775d3f74d0bd33626250095428d53b705ee946\\\"],\\\"sizeBytes\\\":528956487},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a\\\"],\\\"sizeBytes\\\":518384969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1\\\"],\\\"sizeBytes\\\":517999161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\\\"],\\\"sizeBytes\\\":514984269},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30a2f97d7785ce8b0ea5115e67c4554b64adefbc7856bcf6f4fe6cc7e938a310\\\"],\\\"sizeBytes\\\":513582374},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfe394b58ec6195de8b8420e781b7630d85a412b9112d892fea903f92b783427\\\"],\\\"sizeBytes\\\":513221333},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e\\\"],\\\"sizeBytes\\\":512274055},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77fff570657d2fa0bfb709b2c8b6665bae0bf90a2be981d8dbca56c674715098\\\"],\\\"sizeBytes\\\":511227324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adb9f6f2fd701863c7caed747df43f83d3569ba9388cfa33ea7219ac6a606b11\\\"],\\\"sizeBytes\\\":511164375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458\\\"],\\\"sizeBytes\\\":508888171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263\\\"],\\\"sizeBytes\\\":508544745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71\\\"],\\\"sizeBytes\\\":507972093},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3\\\"],\\\"sizeBytes\\\":506480167},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302\\\"],\\\"sizeBytes\\\":506395599},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4a034950346bcd4e36e9e2f1343e0cf7a10cf544963f33d09c7eb2a1bfc634\\\"],\\\"sizeBytes\\\":505345991},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\\\"],\\\"sizeBytes\\\":505246690},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252\\\"],\\\"sizeBytes\\\":504625081},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:712d334b7752d95580571059aae2c50e111d879af4fd8ea7cc3dbaf1a8e7dc69\\\"],\\\"sizeBytes\\\":495994673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5ea1ef4e09b673a0c68c8848ca162ab11d9ac373a377daa52dea702ffa3023\\\"],\\\"sizeBytes\\\":495065340},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:002dfb86e17ad8f5cc232a7d2dce183b23335c8ecb7e7d31dcf3e4446b390777\\\"],\\\"sizeBytes\\\":487159945},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b\\\"],\\\"sizeBytes\\\":487096305},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a746a87b784ea1caa278fd0e012554f9df520b6fff665ea0bc4c83f487fed113\\\"],\\\"sizeBytes\\\":484450894},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c4d5a681595e428ff4b5083648c13615eed80be9084a3d3fc68a0295079cb12\\\"],\\\"sizeBytes\\\":484187929},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f933312f49083e8746fc41ab5e46a9a757b448374f14971e256ebcb36f11dd97\\\"],\\\"sizeBytes\\\":470826739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea5c8a93f30e0a4932da5697d22c0da7eda9a7035c0555eb006b6755e62bb2fc\\\"],\\\"sizeBytes\\\":468265024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\\\"],\\\"sizeBytes\\\":465090934},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9609c00207cc4db97f0fd6162eb429d7f81654137f020a677e30cba26a887a24\\\"],\\\"sizeBytes\\\":463705930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:632e80bba5077068ecca05fddb95aedebad4493af6f36152c01c6ae490975b62\\\"],\\\"sizeBytes\\\":458126937},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bcb08821551e9a5b9f82aa794bcea673279cefb93cb47492e19ccac5e2cf18fe\\\"],\\\"sizeBytes\\\":456576198}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 11:56:13.659914 master-0 kubenswrapper[7454]: I0319 11:56:13.659777 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:13.659914 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 
11:56:13.659914 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:13.659914 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:13.659914 master-0 kubenswrapper[7454]: I0319 11:56:13.659896 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:13.863038 master-0 kubenswrapper[7454]: I0319 11:56:13.862934 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body= Mar 19 11:56:13.863038 master-0 kubenswrapper[7454]: I0319 11:56:13.863028 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" Mar 19 11:56:14.660400 master-0 kubenswrapper[7454]: I0319 11:56:14.660303 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:14.660400 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:14.660400 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:14.660400 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:14.661472 master-0 kubenswrapper[7454]: I0319 11:56:14.660402 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:14.947622 master-0 kubenswrapper[7454]: E0319 11:56:14.947438 7454 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Mar 19 11:56:15.609674 master-0 kubenswrapper[7454]: I0319 11:56:15.609577 7454 patch_prober.go:28] interesting pod/etcd-master-0 container/etcd namespace/openshift-etcd: Startup probe status=failure output="Get \"https://192.168.32.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 11:56:15.610204 master-0 kubenswrapper[7454]: I0319 11:56:15.610159 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" probeResult="failure" output="Get \"https://192.168.32.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 11:56:15.660164 master-0 kubenswrapper[7454]: I0319 11:56:15.660002 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Mar 19 11:56:15.660164 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:15.660164 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:15.660164 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:15.660164 master-0 kubenswrapper[7454]: I0319 11:56:15.660074 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:16.660533 master-0 kubenswrapper[7454]: I0319 11:56:16.660368 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:16.660533 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:16.660533 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:16.660533 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:16.660533 master-0 kubenswrapper[7454]: I0319 11:56:16.660470 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:16.862603 master-0 kubenswrapper[7454]: I0319 11:56:16.862496 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body= Mar 19 11:56:16.862970 master-0 kubenswrapper[7454]: I0319 11:56:16.862593 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" Mar 19 11:56:17.660232 master-0 kubenswrapper[7454]: I0319 11:56:17.660134 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:17.660232 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:17.660232 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:17.660232 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:17.661298 master-0 kubenswrapper[7454]: I0319 11:56:17.660230 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:18.660085 master-0 kubenswrapper[7454]: I0319 11:56:18.659946 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:18.660085 master-0 kubenswrapper[7454]: 
[-]has-synced failed: reason withheld Mar 19 11:56:18.660085 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:18.660085 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:18.660493 master-0 kubenswrapper[7454]: I0319 11:56:18.660094 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:19.659777 master-0 kubenswrapper[7454]: I0319 11:56:19.659655 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:19.659777 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:19.659777 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:19.659777 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:19.659777 master-0 kubenswrapper[7454]: I0319 11:56:19.659731 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:19.862596 master-0 kubenswrapper[7454]: I0319 11:56:19.862514 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body= Mar 19 11:56:19.862596 master-0 kubenswrapper[7454]: I0319 11:56:19.862589 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" Mar 19 11:56:20.660048 master-0 kubenswrapper[7454]: I0319 11:56:20.659982 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:20.660048 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:20.660048 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:20.660048 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:20.660048 master-0 kubenswrapper[7454]: I0319 11:56:20.660052 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:21.658965 master-0 kubenswrapper[7454]: I0319 11:56:21.658919 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:21.658965 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:21.658965 master-0 
kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:21.658965 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:21.659291 master-0 kubenswrapper[7454]: I0319 11:56:21.658988 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:22.660060 master-0 kubenswrapper[7454]: I0319 11:56:22.659757 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:22.660060 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:22.660060 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:22.660060 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:22.660060 master-0 kubenswrapper[7454]: I0319 11:56:22.660056 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:22.861883 master-0 kubenswrapper[7454]: I0319 11:56:22.861815 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body= Mar 19 11:56:22.862111 master-0 kubenswrapper[7454]: I0319 11:56:22.861884 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" Mar 19 11:56:23.055013 master-0 kubenswrapper[7454]: E0319 11:56:23.054955 7454 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 11:56:23.659535 master-0 kubenswrapper[7454]: I0319 11:56:23.659487 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:23.659535 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:23.659535 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:23.659535 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:23.659988 master-0 kubenswrapper[7454]: I0319 11:56:23.659545 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:24.659382 master-0 kubenswrapper[7454]: I0319 11:56:24.659314 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:24.659382 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:24.659382 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:24.659382 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:24.659973 master-0 kubenswrapper[7454]: I0319 11:56:24.659400 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:25.349485 master-0 kubenswrapper[7454]: E0319 11:56:25.349292 7454 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Mar 19 11:56:25.608756 master-0 kubenswrapper[7454]: I0319 11:56:25.608595 7454 patch_prober.go:28] interesting pod/etcd-master-0 container/etcd namespace/openshift-etcd: Startup probe status=failure output="Get \"https://192.168.32.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 11:56:25.608756 master-0 kubenswrapper[7454]: I0319 11:56:25.608702 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" probeResult="failure" output="Get \"https://192.168.32.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 11:56:25.660142 master-0 kubenswrapper[7454]: I0319 11:56:25.660042 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:25.660142 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:25.660142 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:25.660142 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:25.661364 master-0 kubenswrapper[7454]: I0319 11:56:25.660136 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:25.862757 master-0 kubenswrapper[7454]: I0319 11:56:25.862610 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body= Mar 19 11:56:25.862757 master-0 kubenswrapper[7454]: I0319 11:56:25.862683 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" Mar 19 11:56:26.659541 master-0 
kubenswrapper[7454]: I0319 11:56:26.659496 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:26.659541 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:26.659541 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:26.659541 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:26.659996 master-0 kubenswrapper[7454]: I0319 11:56:26.659961 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:27.659294 master-0 kubenswrapper[7454]: I0319 11:56:27.659224 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:27.659294 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:27.659294 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:27.659294 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:27.659294 master-0 kubenswrapper[7454]: I0319 11:56:27.659294 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:28.659276 master-0 kubenswrapper[7454]: I0319 11:56:28.659191 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:28.659276 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:28.659276 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:28.659276 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:28.660002 master-0 kubenswrapper[7454]: I0319 11:56:28.659307 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:28.862367 master-0 kubenswrapper[7454]: I0319 11:56:28.862279 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body= Mar 19 11:56:28.862609 master-0 kubenswrapper[7454]: I0319 11:56:28.862364 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" Mar 19 11:56:29.115742 master-0 kubenswrapper[7454]: I0319 11:56:29.115606 7454 status_manager.go:851] "Failed to 
get status for pod" podUID="46f265536aba6292ead501bc9b49f327" pod="kube-system/bootstrap-kube-controller-manager-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods bootstrap-kube-controller-manager-master-0)" Mar 19 11:56:29.660390 master-0 kubenswrapper[7454]: I0319 11:56:29.660318 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:29.660390 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:29.660390 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:29.660390 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:29.661617 master-0 kubenswrapper[7454]: I0319 11:56:29.660425 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:30.660084 master-0 kubenswrapper[7454]: I0319 11:56:30.659990 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:30.660084 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:30.660084 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:30.660084 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:30.660084 master-0 kubenswrapper[7454]: I0319 11:56:30.660063 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:31.658663 master-0 kubenswrapper[7454]: I0319 11:56:31.658612 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:31.658663 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:31.658663 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:31.658663 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:31.659438 master-0 kubenswrapper[7454]: I0319 11:56:31.658677 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:31.862640 master-0 kubenswrapper[7454]: I0319 11:56:31.862573 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body= Mar 19 11:56:31.862910 master-0 kubenswrapper[7454]: I0319 11:56:31.862642 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" 
podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" Mar 19 11:56:32.659689 master-0 kubenswrapper[7454]: I0319 11:56:32.659617 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:32.659689 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:32.659689 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:32.659689 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:32.659689 master-0 kubenswrapper[7454]: I0319 11:56:32.659690 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:33.055533 master-0 kubenswrapper[7454]: E0319 11:56:33.055467 7454 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 11:56:33.659942 master-0 kubenswrapper[7454]: I0319 11:56:33.659839 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:33.659942 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:33.659942 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:33.659942 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:33.660835 master-0 kubenswrapper[7454]: I0319 11:56:33.659995 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:34.659905 master-0 kubenswrapper[7454]: I0319 11:56:34.659856 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:34.659905 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:34.659905 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:34.659905 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:34.660852 master-0 kubenswrapper[7454]: I0319 11:56:34.660819 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:34.686219 master-0 kubenswrapper[7454]: E0319 11:56:34.686149 7454 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 19 11:56:34.686452 master-0 kubenswrapper[7454]: 
E0319 11:56:34.686365 7454 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.023s" Mar 19 11:56:34.695169 master-0 kubenswrapper[7454]: I0319 11:56:34.695088 7454 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 19 11:56:34.862408 master-0 kubenswrapper[7454]: I0319 11:56:34.862339 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body= Mar 19 11:56:34.862636 master-0 kubenswrapper[7454]: I0319 11:56:34.862424 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" Mar 19 11:56:35.660481 master-0 kubenswrapper[7454]: I0319 11:56:35.660393 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:35.660481 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:35.660481 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:35.660481 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:35.660481 master-0 kubenswrapper[7454]: I0319 11:56:35.660483 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:36.150746 master-0 kubenswrapper[7454]: E0319 11:56:36.150604 7454 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Mar 19 11:56:36.659158 master-0 kubenswrapper[7454]: I0319 11:56:36.659083 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:36.659158 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:36.659158 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:36.659158 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:36.659158 master-0 kubenswrapper[7454]: I0319 11:56:36.659146 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:37.119484 master-0 kubenswrapper[7454]: E0319 11:56:37.119281 7454 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" 
event="&Event{ObjectMeta:{etcd-master-0.189e3c04fe42a144 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:24b4ed170d527099878cb5fdd508a2fb,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:54:55.37386938 +0000 UTC m=+65.004335293,LastTimestamp:2026-03-19 11:54:55.37386938 +0000 UTC m=+65.004335293,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 11:56:37.661254 master-0 kubenswrapper[7454]: I0319 11:56:37.661123 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:37.661254 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:37.661254 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:37.661254 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:37.661920 master-0 kubenswrapper[7454]: I0319 11:56:37.661870 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:37.862095 master-0 kubenswrapper[7454]: I0319 11:56:37.861993 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body= Mar 19 11:56:37.862095 master-0 kubenswrapper[7454]: I0319 11:56:37.862082 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" Mar 19 11:56:38.659169 master-0 kubenswrapper[7454]: I0319 11:56:38.659111 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:38.659169 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:38.659169 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:38.659169 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:38.660212 master-0 kubenswrapper[7454]: I0319 11:56:38.659934 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:38.941130 master-0 kubenswrapper[7454]: E0319 11:56:38.941003 7454 projected.go:194] Error preparing data for projected volume kube-api-access-jnp9l for pod openshift-marketplace/redhat-marketplace-cjgpg: failed to fetch token: Timeout: request did not 
complete within requested timeout - context deadline exceeded Mar 19 11:56:38.941130 master-0 kubenswrapper[7454]: E0319 11:56:38.941088 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0ed7eded-1e67-49ad-9777-c2ed1e006ce3-kube-api-access-jnp9l podName:0ed7eded-1e67-49ad-9777-c2ed1e006ce3 nodeName:}" failed. No retries permitted until 2026-03-19 11:56:40.941065634 +0000 UTC m=+170.571531547 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-jnp9l" (UniqueName: "kubernetes.io/projected/0ed7eded-1e67-49ad-9777-c2ed1e006ce3-kube-api-access-jnp9l") pod "redhat-marketplace-cjgpg" (UID: "0ed7eded-1e67-49ad-9777-c2ed1e006ce3") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 19 11:56:39.659742 master-0 kubenswrapper[7454]: I0319 11:56:39.659698 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:39.659742 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:39.659742 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:39.659742 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:39.660671 master-0 kubenswrapper[7454]: I0319 11:56:39.660634 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:40.660069 master-0 kubenswrapper[7454]: I0319 11:56:40.659959 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:40.660069 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:40.660069 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:40.660069 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:40.660069 master-0 kubenswrapper[7454]: I0319 11:56:40.660031 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:40.862244 master-0 kubenswrapper[7454]: I0319 11:56:40.862174 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body= Mar 19 11:56:40.862616 master-0 kubenswrapper[7454]: I0319 11:56:40.862252 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" Mar 19 11:56:41.001067 master-0 kubenswrapper[7454]: I0319 11:56:41.000849 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-jnp9l\" (UniqueName: \"kubernetes.io/projected/0ed7eded-1e67-49ad-9777-c2ed1e006ce3-kube-api-access-jnp9l\") pod \"redhat-marketplace-cjgpg\" (UID: \"0ed7eded-1e67-49ad-9777-c2ed1e006ce3\") " pod="openshift-marketplace/redhat-marketplace-cjgpg" Mar 19 11:56:41.658921 master-0 kubenswrapper[7454]: I0319 11:56:41.658857 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:41.658921 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:41.658921 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:41.658921 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:41.659324 master-0 kubenswrapper[7454]: I0319 11:56:41.658931 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:42.658976 master-0 kubenswrapper[7454]: I0319 11:56:42.658859 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:42.658976 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:42.658976 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:42.658976 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:42.658976 master-0 kubenswrapper[7454]: I0319 11:56:42.658926 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:42.673628 master-0 kubenswrapper[7454]: I0319 11:56:42.673587 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-btppx_b80027fd-7b39-477a-a337-ff9bb08e7eeb/ingress-operator/0.log" Mar 19 11:56:42.673814 master-0 kubenswrapper[7454]: I0319 11:56:42.673645 7454 generic.go:334] "Generic (PLEG): container finished" podID="b80027fd-7b39-477a-a337-ff9bb08e7eeb" containerID="85ef4c835912214d79ee0e2491e95c939671fab04307a1604919b04165567448" exitCode=1 Mar 19 11:56:43.056535 master-0 kubenswrapper[7454]: E0319 11:56:43.056440 7454 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 11:56:43.659951 master-0 kubenswrapper[7454]: I0319 11:56:43.659782 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:43.659951 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:43.659951 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:43.659951 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:43.659951 master-0 kubenswrapper[7454]: I0319 
11:56:43.659892 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:43.863098 master-0 kubenswrapper[7454]: I0319 11:56:43.862994 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body= Mar 19 11:56:43.863098 master-0 kubenswrapper[7454]: I0319 11:56:43.863086 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" Mar 19 11:56:44.660436 master-0 kubenswrapper[7454]: I0319 11:56:44.660383 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:44.660436 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:44.660436 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:44.660436 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:44.661639 master-0 kubenswrapper[7454]: I0319 11:56:44.661589 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:45.658493 master-0 kubenswrapper[7454]: I0319 11:56:45.658410 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:45.658493 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:45.658493 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:45.658493 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:45.659143 master-0 kubenswrapper[7454]: I0319 11:56:45.658529 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:46.659588 master-0 kubenswrapper[7454]: I0319 11:56:46.659460 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:46.659588 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:46.659588 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:46.659588 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:46.659588 master-0 kubenswrapper[7454]: I0319 11:56:46.659549 7454 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:46.697513 master-0 kubenswrapper[7454]: I0319 11:56:46.697469 7454 generic.go:334] "Generic (PLEG): container finished" podID="b0f5939c-48b1-4d6c-9712-9128a78d603b" containerID="68ef893f247d25c990ee12be4a1311e23963264bd6e324255f2b26ed404f9f6a" exitCode=0 Mar 19 11:56:46.862077 master-0 kubenswrapper[7454]: I0319 11:56:46.861966 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body= Mar 19 11:56:46.862077 master-0 kubenswrapper[7454]: I0319 11:56:46.862070 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" Mar 19 11:56:47.611074 master-0 kubenswrapper[7454]: I0319 11:56:47.610998 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnp9l\" (UniqueName: \"kubernetes.io/projected/0ed7eded-1e67-49ad-9777-c2ed1e006ce3-kube-api-access-jnp9l\") pod \"redhat-marketplace-cjgpg\" (UID: \"0ed7eded-1e67-49ad-9777-c2ed1e006ce3\") " pod="openshift-marketplace/redhat-marketplace-cjgpg" Mar 19 11:56:47.620064 master-0 kubenswrapper[7454]: E0319 11:56:47.619463 7454 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="12.933s" Mar 19 11:56:47.620064 master-0 kubenswrapper[7454]: I0319 11:56:47.619502 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"ae877f625bae80d8605f3f0a14837fe860251e1f110b4f53ede269b520516c48"} Mar 19 11:56:47.620064 master-0 kubenswrapper[7454]: I0319 11:56:47.619530 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"4b49f09f-2efa-4657-9f5a-fbddd42bee0d","Type":"ContainerDied","Data":"1f0110e6404807316fe552282de736e25a5c73a98ca28c762d1ca02e35c0a306"} Mar 19 11:56:47.620064 master-0 kubenswrapper[7454]: I0319 11:56:47.619547 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p225c" Mar 19 11:56:47.620064 master-0 kubenswrapper[7454]: I0319 11:56:47.619560 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cjgpg"] Mar 19 11:56:47.620064 master-0 kubenswrapper[7454]: I0319 11:56:47.619575 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-flnbx" Mar 19 11:56:47.632738 master-0 kubenswrapper[7454]: I0319 11:56:47.631725 7454 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 19 11:56:47.634417 master-0 kubenswrapper[7454]: I0319 11:56:47.634392 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-flnbx"] Mar 19 11:56:47.634515 master-0 kubenswrapper[7454]: I0319 11:56:47.634503 
7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:56:47.634681 master-0 kubenswrapper[7454]: I0319 11:56:47.634670 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-flnbx" Mar 19 11:56:47.634828 master-0 kubenswrapper[7454]: I0319 11:56:47.634816 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p225c" Mar 19 11:56:47.634916 master-0 kubenswrapper[7454]: I0319 11:56:47.634898 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"632bdf3b-0ba0-4874-a2ec-8396683c35c5","Type":"ContainerDied","Data":"0db01150a16f0758697f4004ab15abe194def9a3c61ba179de9b9e1316f2ccf4"} Mar 19 11:56:47.634988 master-0 kubenswrapper[7454]: I0319 11:56:47.634976 7454 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" Mar 19 11:56:47.635420 master-0 kubenswrapper[7454]: I0319 11:56:47.635373 7454 scope.go:117] "RemoveContainer" containerID="6c3d43a01987e52cadf8e3819b9c184c46b6535cb510d14c96117eed3c48a981" Mar 19 11:56:47.636216 master-0 kubenswrapper[7454]: I0319 11:56:47.636158 7454 scope.go:117] "RemoveContainer" containerID="fe8804b9f205d5f40aba452ae8167e7ca2d2057bbd5a93b9e42d8ec2d88c8b07" Mar 19 11:56:47.636358 master-0 kubenswrapper[7454]: I0319 11:56:47.636332 7454 scope.go:117] "RemoveContainer" containerID="a04e94059c93f3fb95feb69e0b122c65aebac1f390cdd0cf514b18a508325ef8" Mar 19 11:56:47.636436 master-0 kubenswrapper[7454]: I0319 11:56:47.636421 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Mar 19 11:56:47.636505 master-0 kubenswrapper[7454]: I0319 11:56:47.636491 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p225c" event={"ID":"77497070-ffa8-45e5-935d-5281828d6962","Type":"ContainerStarted","Data":"190d503cfa5b3468f9c6f03dd24e2c652db25c8933a5abbed215bb786c5ba261"} Mar 19 11:56:47.636570 master-0 kubenswrapper[7454]: I0319 11:56:47.636555 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"b1644591703fae93237fc31fd150f1f15c8f0859003326d27d1f2dc973286631"} Mar 19 11:56:47.636636 master-0 kubenswrapper[7454]: I0319 11:56:47.636623 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kj4wv" event={"ID":"903d114c-199f-46f9-b39b-afa52df71ea9","Type":"ContainerStarted","Data":"01c6345f7bbfbca980ae704e3ee9f222c4d7648651263304ba6c17e7dd5b469b"} Mar 19 11:56:47.636702 master-0 kubenswrapper[7454]: I0319 11:56:47.636691 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-flnbx" Mar 19 11:56:47.636770 master-0 kubenswrapper[7454]: I0319 11:56:47.636758 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"a973f0ee8875b2d0a945786f9dfa74332d931ec7b77d7601fa9f321c2f8b22ac"} Mar 19 11:56:47.636853 master-0 kubenswrapper[7454]: I0319 11:56:47.636840 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 19 11:56:47.636918 master-0 
kubenswrapper[7454]: I0319 11:56:47.636905 7454 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="e157bd8e-6227-49e1-8f5c-c9dfc382218b" Mar 19 11:56:47.636982 master-0 kubenswrapper[7454]: I0319 11:56:47.636968 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"f1e75c3306e850702c2dc6476b3f22a646b8072b1c422645e39adf1879a4acf8"} Mar 19 11:56:47.637041 master-0 kubenswrapper[7454]: I0319 11:56:47.636697 7454 scope.go:117] "RemoveContainer" containerID="136228bc884d9d84e6c34125e85b6f53a4eb9c869542bab1b85def5ce8ff08ff" Mar 19 11:56:47.637972 master-0 kubenswrapper[7454]: I0319 11:56:47.637944 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 19 11:56:47.637972 master-0 kubenswrapper[7454]: I0319 11:56:47.637966 7454 scope.go:117] "RemoveContainer" containerID="ec99e0001708bd8c36619c411325f2d4bdab0ecd7770deeae64fffd8bdf90881" Mar 19 11:56:47.638039 master-0 kubenswrapper[7454]: I0319 11:56:47.637988 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kj4wv"] Mar 19 11:56:47.638039 master-0 kubenswrapper[7454]: I0319 11:56:47.638019 7454 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 19 11:56:47.638039 master-0 kubenswrapper[7454]: I0319 11:56:47.638032 7454 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="e157bd8e-6227-49e1-8f5c-c9dfc382218b" Mar 19 11:56:47.638126 master-0 kubenswrapper[7454]: I0319 11:56:47.638049 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-6wzws" event={"ID":"0f97d998-530c-4d9d-a030-ca1d9d2d4490","Type":"ContainerDied","Data":"fe8804b9f205d5f40aba452ae8167e7ca2d2057bbd5a93b9e42d8ec2d88c8b07"} Mar 19 11:56:47.638126 master-0 kubenswrapper[7454]: I0319 11:56:47.638074 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-wd4nx" event={"ID":"8414b6b0-ee16-47a5-982b-ee58b136cfcf","Type":"ContainerDied","Data":"acd01abcc3b9701b51c684ecc460502246e3fa79a2f3e8b56cc2aec4e47bef9f"} Mar 19 11:56:47.638126 master-0 kubenswrapper[7454]: I0319 11:56:47.638096 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" event={"ID":"d3017b5e-178e-49de-89d2-817a18398203","Type":"ContainerDied","Data":"ec99e0001708bd8c36619c411325f2d4bdab0ecd7770deeae64fffd8bdf90881"} Mar 19 11:56:47.638126 master-0 kubenswrapper[7454]: I0319 11:56:47.638109 7454 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" Mar 19 11:56:47.638271 master-0 kubenswrapper[7454]: I0319 11:56:47.638176 7454 scope.go:117] "RemoveContainer" containerID="4a5b36532ee146a92740f77707f5b0a6a8c33bb89c0054e1d9177bfea2033a2d" Mar 19 11:56:47.638376 master-0 kubenswrapper[7454]: I0319 11:56:47.638360 7454 scope.go:117] "RemoveContainer" containerID="9dbaaa2ce519ab256717766bb8d971f864766afcc411753d09c087dd190cf903" Mar 19 11:56:47.638593 master-0 kubenswrapper[7454]: I0319 11:56:47.638557 7454 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" 
containerStatusID={"Type":"cri-o","ID":"2e74e767e3ac9aff0d456d8d8b27b05725691d9b35635b73f0381a2cb7166772"} pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Mar 19 11:56:47.638645 master-0 kubenswrapper[7454]: I0319 11:56:47.638603 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" containerID="cri-o://2e74e767e3ac9aff0d456d8d8b27b05725691d9b35635b73f0381a2cb7166772" gracePeriod=30 Mar 19 11:56:47.638833 master-0 kubenswrapper[7454]: I0319 11:56:47.638185 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kj4wv" Mar 19 11:56:47.638833 master-0 kubenswrapper[7454]: I0319 11:56:47.638721 7454 scope.go:117] "RemoveContainer" containerID="41d4637f09562b9b79d583fb65c9acfd7f81986cff143ad48c1c09b266f39b23" Mar 19 11:56:47.638913 master-0 kubenswrapper[7454]: I0319 11:56:47.638888 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kj4wv" Mar 19 11:56:47.638944 master-0 kubenswrapper[7454]: I0319 11:56:47.638920 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fjn4b" event={"ID":"2151eb84-177e-459c-be71-f48465323ac2","Type":"ContainerDied","Data":"76df0534cc0fd6a5cc55f7565b57a91fd38d7e12169a76c5133f215b1479d2db"} Mar 19 11:56:47.638944 master-0 kubenswrapper[7454]: I0319 11:56:47.638939 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x" event={"ID":"f08c5930-44f0-48e4-80dd-2563f2733b2f","Type":"ContainerDied","Data":"41d4637f09562b9b79d583fb65c9acfd7f81986cff143ad48c1c09b266f39b23"} Mar 19 11:56:47.639033 master-0 kubenswrapper[7454]: I0319 11:56:47.638960 7454 status_manager.go:379] "Container startup changed for unknown container" pod="kube-system/bootstrap-kube-controller-manager-master-0" containerID="cri-o://84d3d820222fb615bbd44a77bcef4de4c96b78f545a4bc5490f5fa77f0e958e7" Mar 19 11:56:47.639033 master-0 kubenswrapper[7454]: I0319 11:56:47.638971 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 11:56:47.639033 master-0 kubenswrapper[7454]: I0319 11:56:47.638982 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kj4wv" Mar 19 11:56:47.639033 master-0 kubenswrapper[7454]: I0319 11:56:47.638991 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" event={"ID":"9702fc8c-4fe0-413b-b2d4-db23021d42b8","Type":"ContainerDied","Data":"6c3d43a01987e52cadf8e3819b9c184c46b6535cb510d14c96117eed3c48a981"} Mar 19 11:56:47.639149 master-0 kubenswrapper[7454]: I0319 11:56:47.639060 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Mar 19 11:56:47.639149 master-0 kubenswrapper[7454]: I0319 11:56:47.639088 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p225c" Mar 19 11:56:47.639149 master-0 kubenswrapper[7454]: I0319 11:56:47.639104 7454 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" Mar 19 11:56:47.639149 master-0 kubenswrapper[7454]: I0319 11:56:47.639113 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk" event={"ID":"d9ab6ec4-eec9-4d27-8b43-2aaf954f098f","Type":"ContainerDied","Data":"9dbaaa2ce519ab256717766bb8d971f864766afcc411753d09c087dd190cf903"} Mar 19 11:56:47.639149 master-0 kubenswrapper[7454]: I0319 11:56:47.639125 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-nb8bk" event={"ID":"284768b8-9d70-4cf7-bace-8adc6b587186","Type":"ContainerDied","Data":"4a5b36532ee146a92740f77707f5b0a6a8c33bb89c0054e1d9177bfea2033a2d"} Mar 19 11:56:47.639149 master-0 kubenswrapper[7454]: I0319 11:56:47.639139 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg" event={"ID":"1089ea24-add9-482e-9276-e6ded12052d7","Type":"ContainerDied","Data":"a04e94059c93f3fb95feb69e0b122c65aebac1f390cdd0cf514b18a508325ef8"} Mar 19 11:56:47.639149 master-0 kubenswrapper[7454]: I0319 11:56:47.639150 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-ptqdh" event={"ID":"06df1b1b-154e-46f9-aee0-79a137c6c928","Type":"ContainerDied","Data":"136228bc884d9d84e6c34125e85b6f53a4eb9c869542bab1b85def5ce8ff08ff"} Mar 19 11:56:47.639331 master-0 kubenswrapper[7454]: I0319 11:56:47.639144 7454 scope.go:117] "RemoveContainer" containerID="48511943c8e0f8f2cb56a0dbe005be6b65b3cfab069bdef05e341ca254849587" Mar 19 11:56:47.639460 master-0 kubenswrapper[7454]: I0319 11:56:47.639166 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" event={"ID":"aef8e03f-0363-4e13-b7ca-4fa871d77c62","Type":"ContainerDied","Data":"1dd2940995583a19410f74ab256d2834a4c83d4ba579f4590af5fea605682788"} Mar 19 11:56:47.639507 master-0 kubenswrapper[7454]: I0319 11:56:47.639469 7454 scope.go:117] "RemoveContainer" containerID="697b28a330e52c45053a0bb858d1df6049dfd854ab75b1f95587cbc7874588cd" Mar 19 11:56:47.639507 master-0 kubenswrapper[7454]: I0319 11:56:47.639486 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cg9pq" event={"ID":"c2dbd8b3-0e02-4747-a166-80aa6a94b060","Type":"ContainerDied","Data":"697b28a330e52c45053a0bb858d1df6049dfd854ab75b1f95587cbc7874588cd"} Mar 19 11:56:47.639569 master-0 kubenswrapper[7454]: I0319 11:56:47.639517 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" event={"ID":"aef8e03f-0363-4e13-b7ca-4fa871d77c62","Type":"ContainerStarted","Data":"2e74e767e3ac9aff0d456d8d8b27b05725691d9b35635b73f0381a2cb7166772"} Mar 19 11:56:47.639569 master-0 kubenswrapper[7454]: I0319 11:56:47.639535 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"84d3d820222fb615bbd44a77bcef4de4c96b78f545a4bc5490f5fa77f0e958e7"} Mar 19 11:56:47.639569 master-0 kubenswrapper[7454]: I0319 11:56:47.639558 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" 
event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"d27eb2ebaac292762ef9b813ca8ecd2562af748c7508755cdddd461f45526f66"} Mar 19 11:56:47.639652 master-0 kubenswrapper[7454]: I0319 11:56:47.639573 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"632bdf3b-0ba0-4874-a2ec-8396683c35c5","Type":"ContainerDied","Data":"1c8244ac71cff666f8f31eda66e91f3ec8411550f1be8d391239277f0b7cf02b"} Mar 19 11:56:47.639652 master-0 kubenswrapper[7454]: I0319 11:56:47.639593 7454 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c8244ac71cff666f8f31eda66e91f3ec8411550f1be8d391239277f0b7cf02b" Mar 19 11:56:47.639652 master-0 kubenswrapper[7454]: I0319 11:56:47.639606 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"4b49f09f-2efa-4657-9f5a-fbddd42bee0d","Type":"ContainerDied","Data":"df06fa6144150d2fd73d9f262bf2cf21b2895ff0830d1e0b601df841982f89d6"} Mar 19 11:56:47.639652 master-0 kubenswrapper[7454]: I0319 11:56:47.639551 7454 scope.go:117] "RemoveContainer" containerID="acd01abcc3b9701b51c684ecc460502246e3fa79a2f3e8b56cc2aec4e47bef9f" Mar 19 11:56:47.639652 master-0 kubenswrapper[7454]: I0319 11:56:47.639624 7454 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df06fa6144150d2fd73d9f262bf2cf21b2895ff0830d1e0b601df841982f89d6" Mar 19 11:56:47.639652 master-0 kubenswrapper[7454]: I0319 11:56:47.639636 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt" event={"ID":"661b8957-a890-4032-9e57-45e2e0b35249","Type":"ContainerDied","Data":"48511943c8e0f8f2cb56a0dbe005be6b65b3cfab069bdef05e341ca254849587"} Mar 19 11:56:47.639652 master-0 kubenswrapper[7454]: I0319 11:56:47.639650 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"2e4442dc-19e2-42a3-b5d9-7af7765b1939","Type":"ContainerDied","Data":"01fb0bb7c58b7c7fb9f4e6423408b3fdefa74b9c0303c15e18382b768dd8f028"} Mar 19 11:56:47.639652 master-0 kubenswrapper[7454]: I0319 11:56:47.639657 7454 scope.go:117] "RemoveContainer" containerID="570446cbe4fe51c612e56ccc1c781b010d9f51a4701a23ab3e0e9c3afd18acfd" Mar 19 11:56:47.640087 master-0 kubenswrapper[7454]: I0319 11:56:47.639670 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" event={"ID":"b80027fd-7b39-477a-a337-ff9bb08e7eeb","Type":"ContainerDied","Data":"85ef4c835912214d79ee0e2491e95c939671fab04307a1604919b04165567448"} Mar 19 11:56:47.640087 master-0 kubenswrapper[7454]: I0319 11:56:47.639725 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" event={"ID":"b0f5939c-48b1-4d6c-9712-9128a78d603b","Type":"ContainerDied","Data":"68ef893f247d25c990ee12be4a1311e23963264bd6e324255f2b26ed404f9f6a"} Mar 19 11:56:47.640261 master-0 kubenswrapper[7454]: I0319 11:56:47.640237 7454 scope.go:117] "RemoveContainer" containerID="68ef893f247d25c990ee12be4a1311e23963264bd6e324255f2b26ed404f9f6a" Mar 19 11:56:47.640314 master-0 kubenswrapper[7454]: I0319 11:56:47.640275 7454 scope.go:117] "RemoveContainer" containerID="85ef4c835912214d79ee0e2491e95c939671fab04307a1604919b04165567448" Mar 19 11:56:47.640406 master-0 kubenswrapper[7454]: I0319 11:56:47.640391 7454 scope.go:117] "RemoveContainer" 
containerID="76df0534cc0fd6a5cc55f7565b57a91fd38d7e12169a76c5133f215b1479d2db" Mar 19 11:56:47.662428 master-0 kubenswrapper[7454]: I0319 11:56:47.661190 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:47.662428 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:47.662428 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:47.662428 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:47.663114 master-0 kubenswrapper[7454]: I0319 11:56:47.661249 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:47.667422 master-0 kubenswrapper[7454]: I0319 11:56:47.667390 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 19 11:56:47.681333 master-0 kubenswrapper[7454]: I0319 11:56:47.680836 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-flnbx" Mar 19 11:56:47.719602 master-0 kubenswrapper[7454]: I0319 11:56:47.719553 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kj4wv" Mar 19 11:56:47.724865 master-0 kubenswrapper[7454]: I0319 11:56:47.723509 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-djzws" Mar 19 11:56:47.727016 master-0 kubenswrapper[7454]: I0319 11:56:47.726759 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kj4wv" podUID="903d114c-199f-46f9-b39b-afa52df71ea9" containerName="registry-server" containerID="cri-o://01c6345f7bbfbca980ae704e3ee9f222c4d7648651263304ba6c17e7dd5b469b" gracePeriod=2 Mar 19 11:56:47.728739 master-0 kubenswrapper[7454]: I0319 11:56:47.728031 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-flnbx" podUID="1370cf76-52c4-4f19-8dfc-794f2901f8a6" containerName="registry-server" containerID="cri-o://119bf6827db5b99fa9f0be9581be040a80edb534abd0a1348f3d58833768911f" gracePeriod=2 Mar 19 11:56:47.732370 master-0 kubenswrapper[7454]: I0319 11:56:47.732059 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cjgpg" Mar 19 11:56:47.754029 master-0 kubenswrapper[7454]: E0319 11:56:47.753966 7454 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Mar 19 11:56:47.804054 master-0 kubenswrapper[7454]: I0319 11:56:47.803975 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bkx8c"] Mar 19 11:56:47.804612 master-0 kubenswrapper[7454]: I0319 11:56:47.804577 7454 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bkx8c"] Mar 19 11:56:47.814711 master-0 kubenswrapper[7454]: I0319 11:56:47.814433 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-z8xf6" podStartSLOduration=119.185588835 podStartE2EDuration="2m5.814401935s" podCreationTimestamp="2026-03-19 11:54:42 +0000 UTC" firstStartedPulling="2026-03-19 11:54:48.081456746 +0000 UTC m=+57.711922659" lastFinishedPulling="2026-03-19 11:54:54.710269846 +0000 UTC m=+64.340735759" observedRunningTime="2026-03-19 11:56:47.812400192 +0000 UTC m=+177.442866105" watchObservedRunningTime="2026-03-19 11:56:47.814401935 +0000 UTC m=+177.444867848" Mar 19 11:56:47.852104 master-0 kubenswrapper[7454]: I0319 11:56:47.852045 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 19 11:56:47.861939 master-0 kubenswrapper[7454]: I0319 11:56:47.861832 7454 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 19 11:56:47.924354 master-0 kubenswrapper[7454]: I0319 11:56:47.921770 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podStartSLOduration=118.908644309 podStartE2EDuration="2m5.921756955s" podCreationTimestamp="2026-03-19 11:54:42 +0000 UTC" firstStartedPulling="2026-03-19 11:54:47.696924933 +0000 UTC m=+57.327390846" lastFinishedPulling="2026-03-19 11:54:54.710037589 +0000 UTC m=+64.340503492" observedRunningTime="2026-03-19 11:56:47.920009169 +0000 UTC m=+177.550475072" watchObservedRunningTime="2026-03-19 11:56:47.921756955 +0000 UTC m=+177.552222868" Mar 19 11:56:47.924354 master-0 kubenswrapper[7454]: I0319 11:56:47.922450 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-g7mqg" podStartSLOduration=117.922445216 podStartE2EDuration="1m57.922445216s" podCreationTimestamp="2026-03-19 11:54:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 11:56:47.894475583 +0000 UTC m=+177.524941496" watchObservedRunningTime="2026-03-19 11:56:47.922445216 +0000 UTC m=+177.552911129" Mar 19 11:56:48.008101 master-0 kubenswrapper[7454]: I0319 11:56:48.006924 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-flnbx" podStartSLOduration=85.496636811 podStartE2EDuration="2m5.006903083s" podCreationTimestamp="2026-03-19 11:54:43 +0000 UTC" firstStartedPulling="2026-03-19 11:54:46.158703295 +0000 UTC m=+55.789169208" 
lastFinishedPulling="2026-03-19 11:55:25.668969567 +0000 UTC m=+95.299435480" observedRunningTime="2026-03-19 11:56:48.006390006 +0000 UTC m=+177.636855929" watchObservedRunningTime="2026-03-19 11:56:48.006903083 +0000 UTC m=+177.637368996" Mar 19 11:56:48.125575 master-0 kubenswrapper[7454]: I0319 11:56:48.125349 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": read tcp 10.128.0.2:35566->10.128.0.29:8443: read: connection reset by peer" start-of-body= Mar 19 11:56:48.125575 master-0 kubenswrapper[7454]: I0319 11:56:48.125395 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": read tcp 10.128.0.2:35566->10.128.0.29:8443: read: connection reset by peer" Mar 19 11:56:48.202784 master-0 kubenswrapper[7454]: I0319 11:56:48.202710 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=114.202688805 podStartE2EDuration="1m54.202688805s" podCreationTimestamp="2026-03-19 11:54:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 11:56:48.199724591 +0000 UTC m=+177.830190504" watchObservedRunningTime="2026-03-19 11:56:48.202688805 +0000 UTC m=+177.833154718" Mar 19 11:56:48.256811 master-0 kubenswrapper[7454]: I0319 11:56:48.256263 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_2e4442dc-19e2-42a3-b5d9-7af7765b1939/installer/0.log" Mar 19 11:56:48.256811 master-0 kubenswrapper[7454]: I0319 11:56:48.256354 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 19 11:56:48.267628 master-0 kubenswrapper[7454]: I0319 11:56:48.260117 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kj4wv" podStartSLOduration=78.255116341 podStartE2EDuration="2m6.260096968s" podCreationTimestamp="2026-03-19 11:54:42 +0000 UTC" firstStartedPulling="2026-03-19 11:54:44.090434497 +0000 UTC m=+53.720900410" lastFinishedPulling="2026-03-19 11:55:32.095415124 +0000 UTC m=+101.725881037" observedRunningTime="2026-03-19 11:56:48.257881488 +0000 UTC m=+177.888347421" watchObservedRunningTime="2026-03-19 11:56:48.260096968 +0000 UTC m=+177.890562881" Mar 19 11:56:48.316241 master-0 kubenswrapper[7454]: I0319 11:56:48.315537 7454 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-flnbx" Mar 19 11:56:48.328137 master-0 kubenswrapper[7454]: I0319 11:56:48.325397 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e4442dc-19e2-42a3-b5d9-7af7765b1939-kube-api-access\") pod \"2e4442dc-19e2-42a3-b5d9-7af7765b1939\" (UID: \"2e4442dc-19e2-42a3-b5d9-7af7765b1939\") " Mar 19 11:56:48.328137 master-0 kubenswrapper[7454]: I0319 11:56:48.325522 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e4442dc-19e2-42a3-b5d9-7af7765b1939-kubelet-dir\") pod \"2e4442dc-19e2-42a3-b5d9-7af7765b1939\" (UID: \"2e4442dc-19e2-42a3-b5d9-7af7765b1939\") " Mar 19 11:56:48.328137 master-0 kubenswrapper[7454]: I0319 11:56:48.325604 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2e4442dc-19e2-42a3-b5d9-7af7765b1939-var-lock\") pod \"2e4442dc-19e2-42a3-b5d9-7af7765b1939\" (UID: \"2e4442dc-19e2-42a3-b5d9-7af7765b1939\") " Mar 19 11:56:48.328137 master-0 kubenswrapper[7454]: I0319 11:56:48.326078 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e4442dc-19e2-42a3-b5d9-7af7765b1939-var-lock" (OuterVolumeSpecName: "var-lock") pod "2e4442dc-19e2-42a3-b5d9-7af7765b1939" (UID: "2e4442dc-19e2-42a3-b5d9-7af7765b1939"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:56:48.328137 master-0 kubenswrapper[7454]: I0319 11:56:48.327004 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e4442dc-19e2-42a3-b5d9-7af7765b1939-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2e4442dc-19e2-42a3-b5d9-7af7765b1939" (UID: "2e4442dc-19e2-42a3-b5d9-7af7765b1939"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:56:48.335386 master-0 kubenswrapper[7454]: I0319 11:56:48.332488 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e4442dc-19e2-42a3-b5d9-7af7765b1939-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2e4442dc-19e2-42a3-b5d9-7af7765b1939" (UID: "2e4442dc-19e2-42a3-b5d9-7af7765b1939"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 11:56:48.428466 master-0 kubenswrapper[7454]: I0319 11:56:48.428421 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1370cf76-52c4-4f19-8dfc-794f2901f8a6-catalog-content\") pod \"1370cf76-52c4-4f19-8dfc-794f2901f8a6\" (UID: \"1370cf76-52c4-4f19-8dfc-794f2901f8a6\") " Mar 19 11:56:48.431784 master-0 kubenswrapper[7454]: I0319 11:56:48.431698 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1370cf76-52c4-4f19-8dfc-794f2901f8a6-utilities\") pod \"1370cf76-52c4-4f19-8dfc-794f2901f8a6\" (UID: \"1370cf76-52c4-4f19-8dfc-794f2901f8a6\") " Mar 19 11:56:48.431863 master-0 kubenswrapper[7454]: I0319 11:56:48.431843 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxbgx\" (UniqueName: \"kubernetes.io/projected/1370cf76-52c4-4f19-8dfc-794f2901f8a6-kube-api-access-qxbgx\") pod \"1370cf76-52c4-4f19-8dfc-794f2901f8a6\" (UID: \"1370cf76-52c4-4f19-8dfc-794f2901f8a6\") " Mar 19 11:56:48.432533 master-0 kubenswrapper[7454]: I0319 11:56:48.432489 7454 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2e4442dc-19e2-42a3-b5d9-7af7765b1939-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 19 11:56:48.432533 master-0 kubenswrapper[7454]: I0319 11:56:48.432514 7454 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e4442dc-19e2-42a3-b5d9-7af7765b1939-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 19 11:56:48.432533 master-0 kubenswrapper[7454]: I0319 11:56:48.432525 7454 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e4442dc-19e2-42a3-b5d9-7af7765b1939-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 11:56:48.437861 master-0 kubenswrapper[7454]: I0319 11:56:48.434668 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1370cf76-52c4-4f19-8dfc-794f2901f8a6-utilities" (OuterVolumeSpecName: "utilities") pod "1370cf76-52c4-4f19-8dfc-794f2901f8a6" (UID: "1370cf76-52c4-4f19-8dfc-794f2901f8a6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 11:56:48.454418 master-0 kubenswrapper[7454]: I0319 11:56:48.454313 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1370cf76-52c4-4f19-8dfc-794f2901f8a6-kube-api-access-qxbgx" (OuterVolumeSpecName: "kube-api-access-qxbgx") pod "1370cf76-52c4-4f19-8dfc-794f2901f8a6" (UID: "1370cf76-52c4-4f19-8dfc-794f2901f8a6"). InnerVolumeSpecName "kube-api-access-qxbgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 11:56:48.496569 master-0 kubenswrapper[7454]: I0319 11:56:48.496532 7454 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kj4wv" Mar 19 11:56:48.497937 master-0 kubenswrapper[7454]: I0319 11:56:48.497458 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cjgpg"] Mar 19 11:56:48.534662 master-0 kubenswrapper[7454]: I0319 11:56:48.534574 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/903d114c-199f-46f9-b39b-afa52df71ea9-utilities\") pod \"903d114c-199f-46f9-b39b-afa52df71ea9\" (UID: \"903d114c-199f-46f9-b39b-afa52df71ea9\") " Mar 19 11:56:48.534910 master-0 kubenswrapper[7454]: I0319 11:56:48.534683 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/903d114c-199f-46f9-b39b-afa52df71ea9-catalog-content\") pod \"903d114c-199f-46f9-b39b-afa52df71ea9\" (UID: \"903d114c-199f-46f9-b39b-afa52df71ea9\") " Mar 19 11:56:48.534910 master-0 kubenswrapper[7454]: I0319 11:56:48.534787 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zk5rc\" (UniqueName: \"kubernetes.io/projected/903d114c-199f-46f9-b39b-afa52df71ea9-kube-api-access-zk5rc\") pod \"903d114c-199f-46f9-b39b-afa52df71ea9\" (UID: \"903d114c-199f-46f9-b39b-afa52df71ea9\") " Mar 19 11:56:48.535238 master-0 kubenswrapper[7454]: I0319 11:56:48.535214 7454 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1370cf76-52c4-4f19-8dfc-794f2901f8a6-utilities\") on node \"master-0\" DevicePath \"\"" Mar 19 11:56:48.535238 master-0 kubenswrapper[7454]: I0319 11:56:48.535235 7454 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxbgx\" (UniqueName: \"kubernetes.io/projected/1370cf76-52c4-4f19-8dfc-794f2901f8a6-kube-api-access-qxbgx\") on node \"master-0\" DevicePath \"\"" Mar 19 11:56:48.544913 master-0 kubenswrapper[7454]: I0319 11:56:48.536863 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/903d114c-199f-46f9-b39b-afa52df71ea9-utilities" (OuterVolumeSpecName: "utilities") pod "903d114c-199f-46f9-b39b-afa52df71ea9" (UID: "903d114c-199f-46f9-b39b-afa52df71ea9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 11:56:48.567765 master-0 kubenswrapper[7454]: I0319 11:56:48.567712 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/903d114c-199f-46f9-b39b-afa52df71ea9-kube-api-access-zk5rc" (OuterVolumeSpecName: "kube-api-access-zk5rc") pod "903d114c-199f-46f9-b39b-afa52df71ea9" (UID: "903d114c-199f-46f9-b39b-afa52df71ea9"). InnerVolumeSpecName "kube-api-access-zk5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 11:56:48.612334 master-0 kubenswrapper[7454]: I0319 11:56:48.610852 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1370cf76-52c4-4f19-8dfc-794f2901f8a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1370cf76-52c4-4f19-8dfc-794f2901f8a6" (UID: "1370cf76-52c4-4f19-8dfc-794f2901f8a6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 11:56:48.638498 master-0 kubenswrapper[7454]: I0319 11:56:48.638432 7454 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1370cf76-52c4-4f19-8dfc-794f2901f8a6-catalog-content\") on node \"master-0\" DevicePath \"\"" Mar 19 11:56:48.638498 master-0 kubenswrapper[7454]: I0319 11:56:48.638484 7454 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zk5rc\" (UniqueName: \"kubernetes.io/projected/903d114c-199f-46f9-b39b-afa52df71ea9-kube-api-access-zk5rc\") on node \"master-0\" DevicePath \"\"" Mar 19 11:56:48.638498 master-0 kubenswrapper[7454]: I0319 11:56:48.638499 7454 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/903d114c-199f-46f9-b39b-afa52df71ea9-utilities\") on node \"master-0\" DevicePath \"\"" Mar 19 11:56:48.652839 master-0 kubenswrapper[7454]: I0319 11:56:48.652743 7454 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db75b266-69c4-4790-82f1-43168b5bb6a0" path="/var/lib/kubelet/pods/db75b266-69c4-4790-82f1-43168b5bb6a0/volumes" Mar 19 11:56:48.653493 master-0 kubenswrapper[7454]: I0319 11:56:48.653457 7454 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7fd0b13-489f-42b7-a52a-6194fdc9f665" path="/var/lib/kubelet/pods/f7fd0b13-489f-42b7-a52a-6194fdc9f665/volumes" Mar 19 11:56:48.681161 master-0 kubenswrapper[7454]: I0319 11:56:48.678019 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:48.681161 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:48.681161 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:48.681161 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:48.681161 master-0 kubenswrapper[7454]: I0319 11:56:48.678068 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:48.766554 master-0 kubenswrapper[7454]: I0319 11:56:48.766475 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" event={"ID":"9702fc8c-4fe0-413b-b2d4-db23021d42b8","Type":"ContainerStarted","Data":"85e7981ab4ffb136104e696dbd4ddd983e1e59fded9b6089f718fe30c9ce6d06"} Mar 19 11:56:48.781171 master-0 kubenswrapper[7454]: I0319 11:56:48.774278 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-wd4nx_8414b6b0-ee16-47a5-982b-ee58b136cfcf/approver/0.log" Mar 19 11:56:48.798527 master-0 kubenswrapper[7454]: I0319 11:56:48.782565 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-wd4nx" event={"ID":"8414b6b0-ee16-47a5-982b-ee58b136cfcf","Type":"ContainerStarted","Data":"10c6078f6bb7ab73c8304b00dbc345f2f9442775840c07f5fbb58265a93f7893"} Mar 19 11:56:48.799859 master-0 kubenswrapper[7454]: I0319 11:56:48.799486 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk" 
event={"ID":"d9ab6ec4-eec9-4d27-8b43-2aaf954f098f","Type":"ContainerStarted","Data":"0e268f9e631c1c1eef9957df49c9bd3288288ceb31ee116d3f8cbb59cc95d5d3"} Mar 19 11:56:48.825213 master-0 kubenswrapper[7454]: I0319 11:56:48.825176 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-nhvl4_aef8e03f-0363-4e13-b7ca-4fa871d77c62/openshift-config-operator/1.log" Mar 19 11:56:48.835274 master-0 kubenswrapper[7454]: I0319 11:56:48.835218 7454 generic.go:334] "Generic (PLEG): container finished" podID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerID="2e74e767e3ac9aff0d456d8d8b27b05725691d9b35635b73f0381a2cb7166772" exitCode=255 Mar 19 11:56:48.835895 master-0 kubenswrapper[7454]: I0319 11:56:48.835464 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" event={"ID":"aef8e03f-0363-4e13-b7ca-4fa871d77c62","Type":"ContainerDied","Data":"2e74e767e3ac9aff0d456d8d8b27b05725691d9b35635b73f0381a2cb7166772"} Mar 19 11:56:48.835895 master-0 kubenswrapper[7454]: I0319 11:56:48.835571 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" event={"ID":"aef8e03f-0363-4e13-b7ca-4fa871d77c62","Type":"ContainerStarted","Data":"a149e2c842ba9b9ace54d5db12650852f1ae471a53fa714afaa548530c82918e"} Mar 19 11:56:48.835895 master-0 kubenswrapper[7454]: I0319 11:56:48.835652 7454 scope.go:117] "RemoveContainer" containerID="1dd2940995583a19410f74ab256d2834a4c83d4ba579f4590af5fea605682788" Mar 19 11:56:48.836340 master-0 kubenswrapper[7454]: I0319 11:56:48.836062 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" Mar 19 11:56:48.850287 master-0 kubenswrapper[7454]: I0319 11:56:48.846717 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p225c" podStartSLOduration=81.299678456 podStartE2EDuration="2m6.84669163s" podCreationTimestamp="2026-03-19 11:54:42 +0000 UTC" firstStartedPulling="2026-03-19 11:54:44.072525876 +0000 UTC m=+53.702991789" lastFinishedPulling="2026-03-19 11:55:29.61953905 +0000 UTC m=+99.250004963" observedRunningTime="2026-03-19 11:56:48.796422942 +0000 UTC m=+178.426888855" watchObservedRunningTime="2026-03-19 11:56:48.84669163 +0000 UTC m=+178.477157543" Mar 19 11:56:48.862952 master-0 kubenswrapper[7454]: I0319 11:56:48.862033 7454 generic.go:334] "Generic (PLEG): container finished" podID="1370cf76-52c4-4f19-8dfc-794f2901f8a6" containerID="119bf6827db5b99fa9f0be9581be040a80edb534abd0a1348f3d58833768911f" exitCode=0 Mar 19 11:56:48.862952 master-0 kubenswrapper[7454]: I0319 11:56:48.862160 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-flnbx" event={"ID":"1370cf76-52c4-4f19-8dfc-794f2901f8a6","Type":"ContainerDied","Data":"119bf6827db5b99fa9f0be9581be040a80edb534abd0a1348f3d58833768911f"} Mar 19 11:56:48.862952 master-0 kubenswrapper[7454]: I0319 11:56:48.862195 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-flnbx" event={"ID":"1370cf76-52c4-4f19-8dfc-794f2901f8a6","Type":"ContainerDied","Data":"673b063d313abd4fa88faf273eacc91a4214aa37217c17c5778c669aaa95fb83"} Mar 19 11:56:48.862952 master-0 kubenswrapper[7454]: I0319 11:56:48.862228 7454 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-flnbx" Mar 19 11:56:48.892910 master-0 kubenswrapper[7454]: I0319 11:56:48.891046 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt" event={"ID":"661b8957-a890-4032-9e57-45e2e0b35249","Type":"ContainerStarted","Data":"8029e790619aa38cad8e8de2a78237e40df608e3ac9eef2849bc22b648e7815d"} Mar 19 11:56:48.904205 master-0 kubenswrapper[7454]: I0319 11:56:48.904119 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-6wzws" event={"ID":"0f97d998-530c-4d9d-a030-ca1d9d2d4490","Type":"ContainerStarted","Data":"bfd9cc288e7c7c1046fa409055d9ed3be5c77cf9ef4586c6a1e9db33903a7a02"} Mar 19 11:56:48.924753 master-0 kubenswrapper[7454]: I0319 11:56:48.922918 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/903d114c-199f-46f9-b39b-afa52df71ea9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "903d114c-199f-46f9-b39b-afa52df71ea9" (UID: "903d114c-199f-46f9-b39b-afa52df71ea9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 11:56:48.924753 master-0 kubenswrapper[7454]: I0319 11:56:48.924135 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-btppx_b80027fd-7b39-477a-a337-ff9bb08e7eeb/ingress-operator/0.log" Mar 19 11:56:48.924753 master-0 kubenswrapper[7454]: I0319 11:56:48.924209 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" event={"ID":"b80027fd-7b39-477a-a337-ff9bb08e7eeb","Type":"ContainerStarted","Data":"ff92d05d103782a47d08e29aa2fb79e226a87a90f33dcfc9e8b5555e427f0ce4"} Mar 19 11:56:48.949658 master-0 kubenswrapper[7454]: I0319 11:56:48.946139 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-nb8bk_284768b8-9d70-4cf7-bace-8adc6b587186/network-operator/0.log" Mar 19 11:56:48.949658 master-0 kubenswrapper[7454]: I0319 11:56:48.946643 7454 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/903d114c-199f-46f9-b39b-afa52df71ea9-catalog-content\") on node \"master-0\" DevicePath \"\"" Mar 19 11:56:48.949658 master-0 kubenswrapper[7454]: I0319 11:56:48.946919 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-nb8bk" event={"ID":"284768b8-9d70-4cf7-bace-8adc6b587186","Type":"ContainerStarted","Data":"830a4c4455c183de67016fc3718c5a97752f0cfce1dc50148405d7350be95687"} Mar 19 11:56:48.977469 master-0 kubenswrapper[7454]: I0319 11:56:48.974074 7454 scope.go:117] "RemoveContainer" containerID="119bf6827db5b99fa9f0be9581be040a80edb534abd0a1348f3d58833768911f" Mar 19 11:56:48.999269 master-0 kubenswrapper[7454]: I0319 11:56:48.999153 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cg9pq" event={"ID":"c2dbd8b3-0e02-4747-a166-80aa6a94b060","Type":"ContainerStarted","Data":"9b94f3ce845e45bce6a4da02817f29d1e1e7aae7feebfaef9a611a7702ab374b"} Mar 19 11:56:49.030889 master-0 kubenswrapper[7454]: I0319 11:56:49.022078 7454 scope.go:117] "RemoveContainer" containerID="a73ef6ab87ac7f56538653c5c86cb08ccd6e50dfb5fbf1edd61c934ee0c0aadc" Mar 19 11:56:49.034247 master-0 kubenswrapper[7454]: 
I0319 11:56:49.034194 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" event={"ID":"d3017b5e-178e-49de-89d2-817a18398203","Type":"ContainerStarted","Data":"6dedac466f0712e9cb88164ac3beff662b4163f5b6d34ec1e978daf51f4b9061"} Mar 19 11:56:49.046051 master-0 kubenswrapper[7454]: I0319 11:56:49.044918 7454 scope.go:117] "RemoveContainer" containerID="a96fd225dafcf97e7d05dad471494b2e0d85fd2d1d63677ffa5677e1fef31cd9" Mar 19 11:56:49.050436 master-0 kubenswrapper[7454]: I0319 11:56:49.050369 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cjgpg" event={"ID":"0ed7eded-1e67-49ad-9777-c2ed1e006ce3","Type":"ContainerStarted","Data":"051890867de8ff413fdae42afc2ad5867d80bb4189ee315587bdfb2254762fa5"} Mar 19 11:56:49.061211 master-0 kubenswrapper[7454]: I0319 11:56:49.061154 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x" event={"ID":"f08c5930-44f0-48e4-80dd-2563f2733b2f","Type":"ContainerStarted","Data":"5870367e9f384c10e7c8ab460353d1a0b2fecef665e3a06de42ad5c0ffebc680"} Mar 19 11:56:49.087380 master-0 kubenswrapper[7454]: I0319 11:56:49.085948 7454 scope.go:117] "RemoveContainer" containerID="119bf6827db5b99fa9f0be9581be040a80edb534abd0a1348f3d58833768911f" Mar 19 11:56:49.087380 master-0 kubenswrapper[7454]: I0319 11:56:49.086228 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fjn4b" event={"ID":"2151eb84-177e-459c-be71-f48465323ac2","Type":"ContainerStarted","Data":"23286ba4628a82812269a3406ed2726173e555e1325fc481a5647ab01552687f"} Mar 19 11:56:49.093864 master-0 kubenswrapper[7454]: E0319 11:56:49.093572 7454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"119bf6827db5b99fa9f0be9581be040a80edb534abd0a1348f3d58833768911f\": container with ID starting with 119bf6827db5b99fa9f0be9581be040a80edb534abd0a1348f3d58833768911f not found: ID does not exist" containerID="119bf6827db5b99fa9f0be9581be040a80edb534abd0a1348f3d58833768911f" Mar 19 11:56:49.093864 master-0 kubenswrapper[7454]: I0319 11:56:49.093622 7454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"119bf6827db5b99fa9f0be9581be040a80edb534abd0a1348f3d58833768911f"} err="failed to get container status \"119bf6827db5b99fa9f0be9581be040a80edb534abd0a1348f3d58833768911f\": rpc error: code = NotFound desc = could not find container \"119bf6827db5b99fa9f0be9581be040a80edb534abd0a1348f3d58833768911f\": container with ID starting with 119bf6827db5b99fa9f0be9581be040a80edb534abd0a1348f3d58833768911f not found: ID does not exist" Mar 19 11:56:49.093864 master-0 kubenswrapper[7454]: I0319 11:56:49.093653 7454 scope.go:117] "RemoveContainer" containerID="a73ef6ab87ac7f56538653c5c86cb08ccd6e50dfb5fbf1edd61c934ee0c0aadc" Mar 19 11:56:49.094927 master-0 kubenswrapper[7454]: E0319 11:56:49.094707 7454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a73ef6ab87ac7f56538653c5c86cb08ccd6e50dfb5fbf1edd61c934ee0c0aadc\": container with ID starting with a73ef6ab87ac7f56538653c5c86cb08ccd6e50dfb5fbf1edd61c934ee0c0aadc not found: ID does not exist" containerID="a73ef6ab87ac7f56538653c5c86cb08ccd6e50dfb5fbf1edd61c934ee0c0aadc" Mar 19 11:56:49.094927 master-0 
kubenswrapper[7454]: I0319 11:56:49.094751 7454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a73ef6ab87ac7f56538653c5c86cb08ccd6e50dfb5fbf1edd61c934ee0c0aadc"} err="failed to get container status \"a73ef6ab87ac7f56538653c5c86cb08ccd6e50dfb5fbf1edd61c934ee0c0aadc\": rpc error: code = NotFound desc = could not find container \"a73ef6ab87ac7f56538653c5c86cb08ccd6e50dfb5fbf1edd61c934ee0c0aadc\": container with ID starting with a73ef6ab87ac7f56538653c5c86cb08ccd6e50dfb5fbf1edd61c934ee0c0aadc not found: ID does not exist" Mar 19 11:56:49.094927 master-0 kubenswrapper[7454]: I0319 11:56:49.094781 7454 scope.go:117] "RemoveContainer" containerID="a96fd225dafcf97e7d05dad471494b2e0d85fd2d1d63677ffa5677e1fef31cd9" Mar 19 11:56:49.099341 master-0 kubenswrapper[7454]: E0319 11:56:49.097452 7454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a96fd225dafcf97e7d05dad471494b2e0d85fd2d1d63677ffa5677e1fef31cd9\": container with ID starting with a96fd225dafcf97e7d05dad471494b2e0d85fd2d1d63677ffa5677e1fef31cd9 not found: ID does not exist" containerID="a96fd225dafcf97e7d05dad471494b2e0d85fd2d1d63677ffa5677e1fef31cd9" Mar 19 11:56:49.099341 master-0 kubenswrapper[7454]: I0319 11:56:49.097517 7454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a96fd225dafcf97e7d05dad471494b2e0d85fd2d1d63677ffa5677e1fef31cd9"} err="failed to get container status \"a96fd225dafcf97e7d05dad471494b2e0d85fd2d1d63677ffa5677e1fef31cd9\": rpc error: code = NotFound desc = could not find container \"a96fd225dafcf97e7d05dad471494b2e0d85fd2d1d63677ffa5677e1fef31cd9\": container with ID starting with a96fd225dafcf97e7d05dad471494b2e0d85fd2d1d63677ffa5677e1fef31cd9 not found: ID does not exist" Mar 19 11:56:49.101644 master-0 kubenswrapper[7454]: I0319 11:56:49.100108 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" event={"ID":"b0f5939c-48b1-4d6c-9712-9128a78d603b","Type":"ContainerStarted","Data":"3cb3f801dd00591244b19b3ad51ca78e956ed275b4329bac7bcfc1f2f8080cd6"} Mar 19 11:56:49.101644 master-0 kubenswrapper[7454]: I0319 11:56:49.100936 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 11:56:49.101644 master-0 kubenswrapper[7454]: I0319 11:56:49.100997 7454 patch_prober.go:28] interesting pod/marketplace-operator-89ccd998f-pr7gk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.21:8080/healthz\": dial tcp 10.128.0.21:8080: connect: connection refused" start-of-body= Mar 19 11:56:49.101644 master-0 kubenswrapper[7454]: I0319 11:56:49.101024 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" podUID="b0f5939c-48b1-4d6c-9712-9128a78d603b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.21:8080/healthz\": dial tcp 10.128.0.21:8080: connect: connection refused" Mar 19 11:56:49.104006 master-0 kubenswrapper[7454]: I0319 11:56:49.103957 7454 generic.go:334] "Generic (PLEG): container finished" podID="903d114c-199f-46f9-b39b-afa52df71ea9" containerID="01c6345f7bbfbca980ae704e3ee9f222c4d7648651263304ba6c17e7dd5b469b" exitCode=0 Mar 19 11:56:49.104006 master-0 kubenswrapper[7454]: I0319 11:56:49.103998 7454 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kj4wv" event={"ID":"903d114c-199f-46f9-b39b-afa52df71ea9","Type":"ContainerDied","Data":"01c6345f7bbfbca980ae704e3ee9f222c4d7648651263304ba6c17e7dd5b469b"} Mar 19 11:56:49.104136 master-0 kubenswrapper[7454]: I0319 11:56:49.104014 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kj4wv" event={"ID":"903d114c-199f-46f9-b39b-afa52df71ea9","Type":"ContainerDied","Data":"785658b3a5e114a93a0f8abff53d8f934cc7da626b174692818b21ff44c148b4"} Mar 19 11:56:49.104136 master-0 kubenswrapper[7454]: I0319 11:56:49.104032 7454 scope.go:117] "RemoveContainer" containerID="01c6345f7bbfbca980ae704e3ee9f222c4d7648651263304ba6c17e7dd5b469b" Mar 19 11:56:49.104136 master-0 kubenswrapper[7454]: I0319 11:56:49.104098 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kj4wv" Mar 19 11:56:49.176453 master-0 kubenswrapper[7454]: I0319 11:56:49.176011 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-ptqdh" event={"ID":"06df1b1b-154e-46f9-aee0-79a137c6c928","Type":"ContainerStarted","Data":"c3e2ad91bdcbc1b884a26e458f31fc9db94f1554ee950465422acf56a15740da"} Mar 19 11:56:49.185016 master-0 kubenswrapper[7454]: I0319 11:56:49.184974 7454 scope.go:117] "RemoveContainer" containerID="8caa550d637bf8158cfa690d480d13253effda2df95bdebb7484c3a287ec13ac" Mar 19 11:56:49.191983 master-0 kubenswrapper[7454]: I0319 11:56:49.190724 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_2e4442dc-19e2-42a3-b5d9-7af7765b1939/installer/0.log" Mar 19 11:56:49.191983 master-0 kubenswrapper[7454]: I0319 11:56:49.191104 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"2e4442dc-19e2-42a3-b5d9-7af7765b1939","Type":"ContainerDied","Data":"cfaade6a812c1fae7dc2bc47f01477e66bb0563b115dfa8becda8b83dc0a10b7"} Mar 19 11:56:49.191983 master-0 kubenswrapper[7454]: I0319 11:56:49.191130 7454 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfaade6a812c1fae7dc2bc47f01477e66bb0563b115dfa8becda8b83dc0a10b7" Mar 19 11:56:49.191983 master-0 kubenswrapper[7454]: I0319 11:56:49.191198 7454 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 19 11:56:49.197405 master-0 kubenswrapper[7454]: I0319 11:56:49.197332 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-flnbx"] Mar 19 11:56:49.206073 master-0 kubenswrapper[7454]: I0319 11:56:49.205283 7454 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-flnbx"] Mar 19 11:56:49.211865 master-0 kubenswrapper[7454]: I0319 11:56:49.211822 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg" event={"ID":"1089ea24-add9-482e-9276-e6ded12052d7","Type":"ContainerStarted","Data":"9d4f9e0f3811159c5b4172ecd015dfd36c71001f3a7087b4596cd25f8695fe99"} Mar 19 11:56:49.224624 master-0 kubenswrapper[7454]: I0319 11:56:49.224587 7454 scope.go:117] "RemoveContainer" containerID="f0e553719f0fc9ee108ddf64a7ed8d6042b89d616fa26b1f17b8415d992c87a2" Mar 19 11:56:49.260011 master-0 kubenswrapper[7454]: I0319 11:56:49.259979 7454 scope.go:117] "RemoveContainer" containerID="01c6345f7bbfbca980ae704e3ee9f222c4d7648651263304ba6c17e7dd5b469b" Mar 19 11:56:49.264020 master-0 kubenswrapper[7454]: E0319 11:56:49.263976 7454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01c6345f7bbfbca980ae704e3ee9f222c4d7648651263304ba6c17e7dd5b469b\": container with ID starting with 01c6345f7bbfbca980ae704e3ee9f222c4d7648651263304ba6c17e7dd5b469b not found: ID does not exist" containerID="01c6345f7bbfbca980ae704e3ee9f222c4d7648651263304ba6c17e7dd5b469b" Mar 19 11:56:49.264170 master-0 kubenswrapper[7454]: I0319 11:56:49.264030 7454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01c6345f7bbfbca980ae704e3ee9f222c4d7648651263304ba6c17e7dd5b469b"} err="failed to get container status \"01c6345f7bbfbca980ae704e3ee9f222c4d7648651263304ba6c17e7dd5b469b\": rpc error: code = NotFound desc = could not find container \"01c6345f7bbfbca980ae704e3ee9f222c4d7648651263304ba6c17e7dd5b469b\": container with ID starting with 01c6345f7bbfbca980ae704e3ee9f222c4d7648651263304ba6c17e7dd5b469b not found: ID does not exist" Mar 19 11:56:49.264170 master-0 kubenswrapper[7454]: I0319 11:56:49.264062 7454 scope.go:117] "RemoveContainer" containerID="8caa550d637bf8158cfa690d480d13253effda2df95bdebb7484c3a287ec13ac" Mar 19 11:56:49.268019 master-0 kubenswrapper[7454]: E0319 11:56:49.267961 7454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8caa550d637bf8158cfa690d480d13253effda2df95bdebb7484c3a287ec13ac\": container with ID starting with 8caa550d637bf8158cfa690d480d13253effda2df95bdebb7484c3a287ec13ac not found: ID does not exist" containerID="8caa550d637bf8158cfa690d480d13253effda2df95bdebb7484c3a287ec13ac" Mar 19 11:56:49.268019 master-0 kubenswrapper[7454]: I0319 11:56:49.268010 7454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8caa550d637bf8158cfa690d480d13253effda2df95bdebb7484c3a287ec13ac"} err="failed to get container status \"8caa550d637bf8158cfa690d480d13253effda2df95bdebb7484c3a287ec13ac\": rpc error: code = NotFound desc = could not find container \"8caa550d637bf8158cfa690d480d13253effda2df95bdebb7484c3a287ec13ac\": container with ID starting with 8caa550d637bf8158cfa690d480d13253effda2df95bdebb7484c3a287ec13ac not found: ID does not exist" 
Mar 19 11:56:49.268247 master-0 kubenswrapper[7454]: I0319 11:56:49.268041 7454 scope.go:117] "RemoveContainer" containerID="f0e553719f0fc9ee108ddf64a7ed8d6042b89d616fa26b1f17b8415d992c87a2" Mar 19 11:56:49.268472 master-0 kubenswrapper[7454]: E0319 11:56:49.268447 7454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0e553719f0fc9ee108ddf64a7ed8d6042b89d616fa26b1f17b8415d992c87a2\": container with ID starting with f0e553719f0fc9ee108ddf64a7ed8d6042b89d616fa26b1f17b8415d992c87a2 not found: ID does not exist" containerID="f0e553719f0fc9ee108ddf64a7ed8d6042b89d616fa26b1f17b8415d992c87a2" Mar 19 11:56:49.268529 master-0 kubenswrapper[7454]: I0319 11:56:49.268476 7454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0e553719f0fc9ee108ddf64a7ed8d6042b89d616fa26b1f17b8415d992c87a2"} err="failed to get container status \"f0e553719f0fc9ee108ddf64a7ed8d6042b89d616fa26b1f17b8415d992c87a2\": rpc error: code = NotFound desc = could not find container \"f0e553719f0fc9ee108ddf64a7ed8d6042b89d616fa26b1f17b8415d992c87a2\": container with ID starting with f0e553719f0fc9ee108ddf64a7ed8d6042b89d616fa26b1f17b8415d992c87a2 not found: ID does not exist" Mar 19 11:56:49.410084 master-0 kubenswrapper[7454]: I0319 11:56:49.409933 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kj4wv"] Mar 19 11:56:49.414636 master-0 kubenswrapper[7454]: I0319 11:56:49.414590 7454 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kj4wv"] Mar 19 11:56:49.659391 master-0 kubenswrapper[7454]: I0319 11:56:49.659329 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:49.659391 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:49.659391 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:49.659391 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:49.659666 master-0 kubenswrapper[7454]: I0319 11:56:49.659398 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:50.223853 master-0 kubenswrapper[7454]: I0319 11:56:50.222688 7454 generic.go:334] "Generic (PLEG): container finished" podID="0ed7eded-1e67-49ad-9777-c2ed1e006ce3" containerID="140f5b6d0ad45c210ec34db27352588bd40a8af50088c57ef36777013e203f6c" exitCode=0 Mar 19 11:56:50.223853 master-0 kubenswrapper[7454]: I0319 11:56:50.222727 7454 generic.go:334] "Generic (PLEG): container finished" podID="0ed7eded-1e67-49ad-9777-c2ed1e006ce3" containerID="80c673b2188e95ea8d6803bb2b30df3a1dbcd94b373e0bf980cd0ab82c7ba0bd" exitCode=0 Mar 19 11:56:50.223853 master-0 kubenswrapper[7454]: I0319 11:56:50.222777 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cjgpg" event={"ID":"0ed7eded-1e67-49ad-9777-c2ed1e006ce3","Type":"ContainerDied","Data":"140f5b6d0ad45c210ec34db27352588bd40a8af50088c57ef36777013e203f6c"} Mar 19 11:56:50.223853 master-0 kubenswrapper[7454]: I0319 11:56:50.222835 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-cjgpg" event={"ID":"0ed7eded-1e67-49ad-9777-c2ed1e006ce3","Type":"ContainerDied","Data":"80c673b2188e95ea8d6803bb2b30df3a1dbcd94b373e0bf980cd0ab82c7ba0bd"} Mar 19 11:56:50.231518 master-0 kubenswrapper[7454]: I0319 11:56:50.231161 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-nhvl4_aef8e03f-0363-4e13-b7ca-4fa871d77c62/openshift-config-operator/1.log" Mar 19 11:56:50.240177 master-0 kubenswrapper[7454]: I0319 11:56:50.240142 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 11:56:50.646222 master-0 kubenswrapper[7454]: I0319 11:56:50.643877 7454 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1370cf76-52c4-4f19-8dfc-794f2901f8a6" path="/var/lib/kubelet/pods/1370cf76-52c4-4f19-8dfc-794f2901f8a6/volumes" Mar 19 11:56:50.646222 master-0 kubenswrapper[7454]: I0319 11:56:50.644532 7454 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="903d114c-199f-46f9-b39b-afa52df71ea9" path="/var/lib/kubelet/pods/903d114c-199f-46f9-b39b-afa52df71ea9/volumes" Mar 19 11:56:50.659169 master-0 kubenswrapper[7454]: I0319 11:56:50.659118 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:50.659169 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:50.659169 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:50.659169 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:50.659391 master-0 kubenswrapper[7454]: I0319 11:56:50.659207 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:51.240832 master-0 kubenswrapper[7454]: I0319 11:56:51.240734 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cjgpg" event={"ID":"0ed7eded-1e67-49ad-9777-c2ed1e006ce3","Type":"ContainerStarted","Data":"91b07ebbaf75783989bd57123a21de110857198acfed4c894a55acc067d70af7"} Mar 19 11:56:51.269061 master-0 kubenswrapper[7454]: I0319 11:56:51.268976 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cjgpg" podStartSLOduration=115.706847841 podStartE2EDuration="1m57.268957865s" podCreationTimestamp="2026-03-19 11:54:54 +0000 UTC" firstStartedPulling="2026-03-19 11:56:49.054380348 +0000 UTC m=+178.684846251" lastFinishedPulling="2026-03-19 11:56:50.616490322 +0000 UTC m=+180.246956275" observedRunningTime="2026-03-19 11:56:51.263564844 +0000 UTC m=+180.894030767" watchObservedRunningTime="2026-03-19 11:56:51.268957865 +0000 UTC m=+180.899423778" Mar 19 11:56:51.661852 master-0 kubenswrapper[7454]: I0319 11:56:51.661714 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:51.661852 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:51.661852 master-0 
Mar 19 11:56:51.661852 master-0 kubenswrapper[7454]: I0319 11:56:51.661714 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:56:51.661852 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:56:51.661852 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:56:51.661852 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:56:51.662265 master-0 kubenswrapper[7454]: I0319 11:56:51.662238 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:56:52.242512 master-0 kubenswrapper[7454]: I0319 11:56:52.242409 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 11:56:52.242512 master-0 kubenswrapper[7454]: I0319 11:56:52.242499 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 11:56:52.342380 master-0 kubenswrapper[7454]: I0319 11:56:52.342306 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 11:56:52.347932 master-0 kubenswrapper[7454]: I0319 11:56:52.347895 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 11:56:52.659058 master-0 kubenswrapper[7454]: I0319 11:56:52.658946 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:56:52.659058 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:56:52.659058 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:56:52.659058 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:56:52.659058 master-0 kubenswrapper[7454]: I0319 11:56:52.659017 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:56:53.053572 master-0 kubenswrapper[7454]: I0319 11:56:53.053470 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 11:56:53.057128 master-0 kubenswrapper[7454]: I0319 11:56:53.057089 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 11:56:53.659032 master-0 kubenswrapper[7454]: I0319 11:56:53.658976 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:56:53.659032 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:56:53.659032 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:56:53.659032 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:56:53.659565 master-0 kubenswrapper[7454]: I0319 11:56:53.659049 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:56:53.863048 master-0 kubenswrapper[7454]: I0319 11:56:53.862970 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 11:56:53.863232 master-0 kubenswrapper[7454]: I0319 11:56:53.863085 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 11:56:54.259122 master-0 kubenswrapper[7454]: I0319 11:56:54.259066 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-gx4w8_9ed2dbd1-aec4-4009-917a-933533912ab5/openshift-controller-manager-operator/1.log"
Mar 19 11:56:54.260055 master-0 kubenswrapper[7454]: I0319 11:56:54.260012 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-gx4w8_9ed2dbd1-aec4-4009-917a-933533912ab5/openshift-controller-manager-operator/0.log"
Mar 19 11:56:54.260201 master-0 kubenswrapper[7454]: I0319 11:56:54.260069 7454 generic.go:334] "Generic (PLEG): container finished" podID="9ed2dbd1-aec4-4009-917a-933533912ab5" containerID="24fd9caa7952430318d8f0070bff5d8f9a23ccd510c898e8d4b008fdb27da600" exitCode=255
Mar 19 11:56:54.260954 master-0 kubenswrapper[7454]: I0319 11:56:54.260917 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8" event={"ID":"9ed2dbd1-aec4-4009-917a-933533912ab5","Type":"ContainerDied","Data":"24fd9caa7952430318d8f0070bff5d8f9a23ccd510c898e8d4b008fdb27da600"}
Mar 19 11:56:54.261047 master-0 kubenswrapper[7454]: I0319 11:56:54.260985 7454 scope.go:117] "RemoveContainer" containerID="fc5332ce9b6e52d47f6ebb8b58ad2c77aaab22f1f6505f1913fed9b59e6a2824"
Mar 19 11:56:54.261350 master-0 kubenswrapper[7454]: I0319 11:56:54.261295 7454 scope.go:117] "RemoveContainer" containerID="24fd9caa7952430318d8f0070bff5d8f9a23ccd510c898e8d4b008fdb27da600"
Mar 19 11:56:54.261505 master-0 kubenswrapper[7454]: E0319 11:56:54.261474 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-controller-manager-operator pod=openshift-controller-manager-operator-8c94f4649-gx4w8_openshift-controller-manager-operator(9ed2dbd1-aec4-4009-917a-933533912ab5)\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8" podUID="9ed2dbd1-aec4-4009-917a-933533912ab5"
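The "back-off 10s" in the CrashLoopBackOff error above is the first step of the kubelet's restart back-off: the delay doubles on each subsequent crash and is capped at five minutes, resetting only after the container has run cleanly for a while. A sketch of the schedule (constants as commonly documented for the kubelet; treat them as assumptions, not this cluster's configuration):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const maxDelay = 5 * time.Minute // kubelet's maximum container back-off
	delay := 10 * time.Second        // the "back-off 10s" seen in the log
	for i := 1; i <= 7; i++ {
		fmt.Printf("restart %d: wait %v\n", i, delay)
		delay *= 2 // double after each failed restart ...
		if delay > maxDelay {
			delay = maxDelay // ... up to the cap
		}
	}
	// Prints: 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s
}
```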
podUID="9ed2dbd1-aec4-4009-917a-933533912ab5" Mar 19 11:56:54.660051 master-0 kubenswrapper[7454]: I0319 11:56:54.659896 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:54.660051 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:54.660051 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:54.660051 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:54.660051 master-0 kubenswrapper[7454]: I0319 11:56:54.659978 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:54.986099 master-0 kubenswrapper[7454]: I0319 11:56:54.986030 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 11:56:54.986332 master-0 kubenswrapper[7454]: I0319 11:56:54.986113 7454 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 11:56:55.267212 master-0 kubenswrapper[7454]: I0319 11:56:55.267086 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-gx4w8_9ed2dbd1-aec4-4009-917a-933533912ab5/openshift-controller-manager-operator/1.log" Mar 19 11:56:55.660154 master-0 kubenswrapper[7454]: I0319 11:56:55.659970 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:55.660154 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:55.660154 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:55.660154 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:55.660154 master-0 kubenswrapper[7454]: I0319 11:56:55.660078 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:55.838301 master-0 kubenswrapper[7454]: I0319 11:56:55.838226 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p225c"] Mar 19 11:56:55.838567 master-0 kubenswrapper[7454]: I0319 11:56:55.838535 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p225c" podUID="77497070-ffa8-45e5-935d-5281828d6962" containerName="registry-server" 
containerID="cri-o://190d503cfa5b3468f9c6f03dd24e2c652db25c8933a5abbed215bb786c5ba261" gracePeriod=2 Mar 19 11:56:56.208846 master-0 kubenswrapper[7454]: I0319 11:56:56.208733 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p225c" Mar 19 11:56:56.250616 master-0 kubenswrapper[7454]: I0319 11:56:56.250015 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5rxc\" (UniqueName: \"kubernetes.io/projected/77497070-ffa8-45e5-935d-5281828d6962-kube-api-access-d5rxc\") pod \"77497070-ffa8-45e5-935d-5281828d6962\" (UID: \"77497070-ffa8-45e5-935d-5281828d6962\") " Mar 19 11:56:56.250616 master-0 kubenswrapper[7454]: I0319 11:56:56.250079 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77497070-ffa8-45e5-935d-5281828d6962-utilities\") pod \"77497070-ffa8-45e5-935d-5281828d6962\" (UID: \"77497070-ffa8-45e5-935d-5281828d6962\") " Mar 19 11:56:56.250616 master-0 kubenswrapper[7454]: I0319 11:56:56.250189 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77497070-ffa8-45e5-935d-5281828d6962-catalog-content\") pod \"77497070-ffa8-45e5-935d-5281828d6962\" (UID: \"77497070-ffa8-45e5-935d-5281828d6962\") " Mar 19 11:56:56.251843 master-0 kubenswrapper[7454]: I0319 11:56:56.251006 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77497070-ffa8-45e5-935d-5281828d6962-utilities" (OuterVolumeSpecName: "utilities") pod "77497070-ffa8-45e5-935d-5281828d6962" (UID: "77497070-ffa8-45e5-935d-5281828d6962"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 11:56:56.270766 master-0 kubenswrapper[7454]: I0319 11:56:56.270726 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77497070-ffa8-45e5-935d-5281828d6962-kube-api-access-d5rxc" (OuterVolumeSpecName: "kube-api-access-d5rxc") pod "77497070-ffa8-45e5-935d-5281828d6962" (UID: "77497070-ffa8-45e5-935d-5281828d6962"). InnerVolumeSpecName "kube-api-access-d5rxc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 11:56:56.276474 master-0 kubenswrapper[7454]: I0319 11:56:56.276429 7454 generic.go:334] "Generic (PLEG): container finished" podID="77497070-ffa8-45e5-935d-5281828d6962" containerID="190d503cfa5b3468f9c6f03dd24e2c652db25c8933a5abbed215bb786c5ba261" exitCode=0 Mar 19 11:56:56.276580 master-0 kubenswrapper[7454]: I0319 11:56:56.276474 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p225c" event={"ID":"77497070-ffa8-45e5-935d-5281828d6962","Type":"ContainerDied","Data":"190d503cfa5b3468f9c6f03dd24e2c652db25c8933a5abbed215bb786c5ba261"} Mar 19 11:56:56.276580 master-0 kubenswrapper[7454]: I0319 11:56:56.276500 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p225c" event={"ID":"77497070-ffa8-45e5-935d-5281828d6962","Type":"ContainerDied","Data":"f071d5c6e7e1f35bc260aa337d9b194fe82c1243aca8a2aec9d30be0bb3216e9"} Mar 19 11:56:56.276580 master-0 kubenswrapper[7454]: I0319 11:56:56.276516 7454 scope.go:117] "RemoveContainer" containerID="190d503cfa5b3468f9c6f03dd24e2c652db25c8933a5abbed215bb786c5ba261" Mar 19 11:56:56.276693 master-0 kubenswrapper[7454]: I0319 11:56:56.276658 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p225c" Mar 19 11:56:56.289208 master-0 kubenswrapper[7454]: I0319 11:56:56.289167 7454 scope.go:117] "RemoveContainer" containerID="4d2f9c965608f2ecf77aeff13b4656c12f2fe4c17a81e488d5e58ef86cf4113b" Mar 19 11:56:56.308466 master-0 kubenswrapper[7454]: I0319 11:56:56.308420 7454 scope.go:117] "RemoveContainer" containerID="28ea8e21ba4afb486655483dd5710a876c83a2c01ce9d2707b582b55e7a1e992" Mar 19 11:56:56.312848 master-0 kubenswrapper[7454]: I0319 11:56:56.312812 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77497070-ffa8-45e5-935d-5281828d6962-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "77497070-ffa8-45e5-935d-5281828d6962" (UID: "77497070-ffa8-45e5-935d-5281828d6962"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 11:56:56.323843 master-0 kubenswrapper[7454]: I0319 11:56:56.323182 7454 scope.go:117] "RemoveContainer" containerID="190d503cfa5b3468f9c6f03dd24e2c652db25c8933a5abbed215bb786c5ba261" Mar 19 11:56:56.323843 master-0 kubenswrapper[7454]: E0319 11:56:56.323550 7454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"190d503cfa5b3468f9c6f03dd24e2c652db25c8933a5abbed215bb786c5ba261\": container with ID starting with 190d503cfa5b3468f9c6f03dd24e2c652db25c8933a5abbed215bb786c5ba261 not found: ID does not exist" containerID="190d503cfa5b3468f9c6f03dd24e2c652db25c8933a5abbed215bb786c5ba261" Mar 19 11:56:56.323843 master-0 kubenswrapper[7454]: I0319 11:56:56.323593 7454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"190d503cfa5b3468f9c6f03dd24e2c652db25c8933a5abbed215bb786c5ba261"} err="failed to get container status \"190d503cfa5b3468f9c6f03dd24e2c652db25c8933a5abbed215bb786c5ba261\": rpc error: code = NotFound desc = could not find container \"190d503cfa5b3468f9c6f03dd24e2c652db25c8933a5abbed215bb786c5ba261\": container with ID starting with 190d503cfa5b3468f9c6f03dd24e2c652db25c8933a5abbed215bb786c5ba261 not found: ID does not exist" Mar 19 11:56:56.323843 master-0 kubenswrapper[7454]: I0319 11:56:56.323612 7454 scope.go:117] "RemoveContainer" containerID="4d2f9c965608f2ecf77aeff13b4656c12f2fe4c17a81e488d5e58ef86cf4113b" Mar 19 11:56:56.324049 master-0 kubenswrapper[7454]: E0319 11:56:56.323927 7454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d2f9c965608f2ecf77aeff13b4656c12f2fe4c17a81e488d5e58ef86cf4113b\": container with ID starting with 4d2f9c965608f2ecf77aeff13b4656c12f2fe4c17a81e488d5e58ef86cf4113b not found: ID does not exist" containerID="4d2f9c965608f2ecf77aeff13b4656c12f2fe4c17a81e488d5e58ef86cf4113b" Mar 19 11:56:56.324049 master-0 kubenswrapper[7454]: I0319 11:56:56.323944 7454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d2f9c965608f2ecf77aeff13b4656c12f2fe4c17a81e488d5e58ef86cf4113b"} err="failed to get container status \"4d2f9c965608f2ecf77aeff13b4656c12f2fe4c17a81e488d5e58ef86cf4113b\": rpc error: code = NotFound desc = could not find container \"4d2f9c965608f2ecf77aeff13b4656c12f2fe4c17a81e488d5e58ef86cf4113b\": container with ID starting with 4d2f9c965608f2ecf77aeff13b4656c12f2fe4c17a81e488d5e58ef86cf4113b not found: ID does not exist" Mar 19 11:56:56.324049 master-0 kubenswrapper[7454]: I0319 11:56:56.323975 7454 scope.go:117] "RemoveContainer" containerID="28ea8e21ba4afb486655483dd5710a876c83a2c01ce9d2707b582b55e7a1e992" Mar 19 11:56:56.325581 master-0 kubenswrapper[7454]: E0319 11:56:56.325552 7454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28ea8e21ba4afb486655483dd5710a876c83a2c01ce9d2707b582b55e7a1e992\": container with ID starting with 28ea8e21ba4afb486655483dd5710a876c83a2c01ce9d2707b582b55e7a1e992 not found: ID does not exist" containerID="28ea8e21ba4afb486655483dd5710a876c83a2c01ce9d2707b582b55e7a1e992" Mar 19 11:56:56.325652 master-0 kubenswrapper[7454]: I0319 11:56:56.325576 7454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28ea8e21ba4afb486655483dd5710a876c83a2c01ce9d2707b582b55e7a1e992"} err="failed to get container status 
\"28ea8e21ba4afb486655483dd5710a876c83a2c01ce9d2707b582b55e7a1e992\": rpc error: code = NotFound desc = could not find container \"28ea8e21ba4afb486655483dd5710a876c83a2c01ce9d2707b582b55e7a1e992\": container with ID starting with 28ea8e21ba4afb486655483dd5710a876c83a2c01ce9d2707b582b55e7a1e992 not found: ID does not exist" Mar 19 11:56:56.352010 master-0 kubenswrapper[7454]: I0319 11:56:56.351973 7454 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5rxc\" (UniqueName: \"kubernetes.io/projected/77497070-ffa8-45e5-935d-5281828d6962-kube-api-access-d5rxc\") on node \"master-0\" DevicePath \"\"" Mar 19 11:56:56.352232 master-0 kubenswrapper[7454]: I0319 11:56:56.352221 7454 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77497070-ffa8-45e5-935d-5281828d6962-utilities\") on node \"master-0\" DevicePath \"\"" Mar 19 11:56:56.352302 master-0 kubenswrapper[7454]: I0319 11:56:56.352293 7454 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77497070-ffa8-45e5-935d-5281828d6962-catalog-content\") on node \"master-0\" DevicePath \"\"" Mar 19 11:56:56.640107 master-0 kubenswrapper[7454]: I0319 11:56:56.640070 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p225c"] Mar 19 11:56:56.646219 master-0 kubenswrapper[7454]: I0319 11:56:56.646165 7454 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p225c"] Mar 19 11:56:56.659100 master-0 kubenswrapper[7454]: I0319 11:56:56.658814 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:56:56.659100 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:56:56.659100 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:56:56.659100 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:56:56.659100 master-0 kubenswrapper[7454]: I0319 11:56:56.658882 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:56:56.659100 master-0 kubenswrapper[7454]: I0319 11:56:56.658945 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 11:56:56.660051 master-0 kubenswrapper[7454]: I0319 11:56:56.659472 7454 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"2f120a0d94fdbfa9eb3c076343f202eb79687478095e8ae9cb88dc10339e167a"} pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" containerMessage="Container router failed startup probe, will be restarted" Mar 19 11:56:56.660051 master-0 kubenswrapper[7454]: I0319 11:56:56.659517 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" containerID="cri-o://2f120a0d94fdbfa9eb3c076343f202eb79687478095e8ae9cb88dc10339e167a" gracePeriod=3600 Mar 19 11:56:56.863241 master-0 kubenswrapper[7454]: I0319 11:56:56.863068 7454 patch_prober.go:28] interesting 
Mar 19 11:56:56.863241 master-0 kubenswrapper[7454]: I0319 11:56:56.863068 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 11:56:56.863241 master-0 kubenswrapper[7454]: I0319 11:56:56.863162 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 11:56:57.734827 master-0 kubenswrapper[7454]: I0319 11:56:57.733320 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cjgpg"
Mar 19 11:56:57.734827 master-0 kubenswrapper[7454]: I0319 11:56:57.733382 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cjgpg"
Mar 19 11:56:57.784727 master-0 kubenswrapper[7454]: I0319 11:56:57.784651 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cjgpg"
Mar 19 11:56:57.986336 master-0 kubenswrapper[7454]: I0319 11:56:57.986200 7454 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-nhvl4 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 11:56:57.986336 master-0 kubenswrapper[7454]: I0319 11:56:57.986268 7454 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" podUID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 11:56:58.332176 master-0 kubenswrapper[7454]: I0319 11:56:58.332060 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cjgpg"
Mar 19 11:56:58.640708 master-0 kubenswrapper[7454]: I0319 11:56:58.640589 7454 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77497070-ffa8-45e5-935d-5281828d6962" path="/var/lib/kubelet/pods/77497070-ffa8-45e5-935d-5281828d6962/volumes"
Mar 19 11:56:59.838846 master-0 kubenswrapper[7454]: I0319 11:56:59.836789 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4"
Mar 19 11:57:00.310030 master-0 kubenswrapper[7454]: I0319 11:57:00.309935 7454 generic.go:334] "Generic (PLEG): container finished" podID="3a6b082a-649b-43f6-8e24-cf222873fe39" containerID="1934bc0b600f1e74a406788cec8a674a8b6f1a56fe70fd8bd4ae9f2fb2ad6292" exitCode=0
Mar 19 11:57:00.310358 master-0 kubenswrapper[7454]: I0319 11:57:00.310051 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c"
event={"ID":"3a6b082a-649b-43f6-8e24-cf222873fe39","Type":"ContainerDied","Data":"1934bc0b600f1e74a406788cec8a674a8b6f1a56fe70fd8bd4ae9f2fb2ad6292"} Mar 19 11:57:00.310961 master-0 kubenswrapper[7454]: I0319 11:57:00.310924 7454 scope.go:117] "RemoveContainer" containerID="1934bc0b600f1e74a406788cec8a674a8b6f1a56fe70fd8bd4ae9f2fb2ad6292" Mar 19 11:57:00.313622 master-0 kubenswrapper[7454]: I0319 11:57:00.313566 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-ftml6_19de6601-10d4-4112-a21f-0398d2b160d1/cluster-baremetal-operator/0.log" Mar 19 11:57:00.313884 master-0 kubenswrapper[7454]: I0319 11:57:00.313630 7454 generic.go:334] "Generic (PLEG): container finished" podID="19de6601-10d4-4112-a21f-0398d2b160d1" containerID="612732ed0120924fb77ef10b06bafbb001e3d8734f333029971f71583a5972b4" exitCode=1 Mar 19 11:57:00.313884 master-0 kubenswrapper[7454]: I0319 11:57:00.313746 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" event={"ID":"19de6601-10d4-4112-a21f-0398d2b160d1","Type":"ContainerDied","Data":"612732ed0120924fb77ef10b06bafbb001e3d8734f333029971f71583a5972b4"} Mar 19 11:57:00.314405 master-0 kubenswrapper[7454]: I0319 11:57:00.314366 7454 scope.go:117] "RemoveContainer" containerID="612732ed0120924fb77ef10b06bafbb001e3d8734f333029971f71583a5972b4" Mar 19 11:57:00.324204 master-0 kubenswrapper[7454]: I0319 11:57:00.324147 7454 generic.go:334] "Generic (PLEG): container finished" podID="bf226d89-450d-4876-a113-345632b94ee9" containerID="e708db8e66828556f8b708025575f23f8aa12842fc7126337dc3672b562dc4b1" exitCode=0 Mar 19 11:57:00.324332 master-0 kubenswrapper[7454]: I0319 11:57:00.324202 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t" event={"ID":"bf226d89-450d-4876-a113-345632b94ee9","Type":"ContainerDied","Data":"e708db8e66828556f8b708025575f23f8aa12842fc7126337dc3672b562dc4b1"} Mar 19 11:57:00.324713 master-0 kubenswrapper[7454]: I0319 11:57:00.324631 7454 scope.go:117] "RemoveContainer" containerID="e708db8e66828556f8b708025575f23f8aa12842fc7126337dc3672b562dc4b1" Mar 19 11:57:01.332424 master-0 kubenswrapper[7454]: I0319 11:57:01.332350 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" event={"ID":"3a6b082a-649b-43f6-8e24-cf222873fe39","Type":"ContainerStarted","Data":"9525efea18e9168adb2e8691fffa21e20effeae4cf60811da09efa9acd76b65f"} Mar 19 11:57:01.333214 master-0 kubenswrapper[7454]: I0319 11:57:01.332859 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 11:57:01.335482 master-0 kubenswrapper[7454]: I0319 11:57:01.335445 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-ftml6_19de6601-10d4-4112-a21f-0398d2b160d1/cluster-baremetal-operator/0.log" Mar 19 11:57:01.335556 master-0 kubenswrapper[7454]: I0319 11:57:01.335526 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" event={"ID":"19de6601-10d4-4112-a21f-0398d2b160d1","Type":"ContainerStarted","Data":"dbd72cd315e8f5fa6faaefc2be981b3f9a0d499a3d7eead86b3d71318cde1c34"} Mar 19 11:57:01.337954 master-0 kubenswrapper[7454]: I0319 11:57:01.337921 7454 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 11:57:01.339923 master-0 kubenswrapper[7454]: I0319 11:57:01.339879 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t" event={"ID":"bf226d89-450d-4876-a113-345632b94ee9","Type":"ContainerStarted","Data":"3d6c29fa2fea2a4028ae9bf07fe3dfb5fccd02ce108e84c4ff9630eee5fdf4b0"} Mar 19 11:57:02.347593 master-0 kubenswrapper[7454]: I0319 11:57:02.347546 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-9mpxd_5238840f-3bef-43ad-ae68-ac187f073019/manager/0.log" Mar 19 11:57:02.348224 master-0 kubenswrapper[7454]: I0319 11:57:02.347599 7454 generic.go:334] "Generic (PLEG): container finished" podID="5238840f-3bef-43ad-ae68-ac187f073019" containerID="387948abcb2cbae673b88cb3d7a8d043f5ef4d37ef318a38ca6b5a6a836dff73" exitCode=1 Mar 19 11:57:02.348224 master-0 kubenswrapper[7454]: I0319 11:57:02.347636 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" event={"ID":"5238840f-3bef-43ad-ae68-ac187f073019","Type":"ContainerDied","Data":"387948abcb2cbae673b88cb3d7a8d043f5ef4d37ef318a38ca6b5a6a836dff73"} Mar 19 11:57:02.348224 master-0 kubenswrapper[7454]: I0319 11:57:02.348199 7454 scope.go:117] "RemoveContainer" containerID="387948abcb2cbae673b88cb3d7a8d043f5ef4d37ef318a38ca6b5a6a836dff73" Mar 19 11:57:02.351012 master-0 kubenswrapper[7454]: I0319 11:57:02.350922 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-j2w8z_919daf8d-763a-44bc-8916-86b425a27cbd/manager/0.log" Mar 19 11:57:02.351471 master-0 kubenswrapper[7454]: I0319 11:57:02.351424 7454 generic.go:334] "Generic (PLEG): container finished" podID="919daf8d-763a-44bc-8916-86b425a27cbd" containerID="b41786c9c913f59caa3ab9f044ef31b0ba5e946f6fab91d0cf640d642dc24031" exitCode=1 Mar 19 11:57:02.351606 master-0 kubenswrapper[7454]: I0319 11:57:02.351484 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" event={"ID":"919daf8d-763a-44bc-8916-86b425a27cbd","Type":"ContainerDied","Data":"b41786c9c913f59caa3ab9f044ef31b0ba5e946f6fab91d0cf640d642dc24031"} Mar 19 11:57:02.352392 master-0 kubenswrapper[7454]: I0319 11:57:02.352364 7454 scope.go:117] "RemoveContainer" containerID="b41786c9c913f59caa3ab9f044ef31b0ba5e946f6fab91d0cf640d642dc24031" Mar 19 11:57:03.359904 master-0 kubenswrapper[7454]: I0319 11:57:03.359853 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-j2w8z_919daf8d-763a-44bc-8916-86b425a27cbd/manager/0.log" Mar 19 11:57:03.360730 master-0 kubenswrapper[7454]: I0319 11:57:03.360686 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" event={"ID":"919daf8d-763a-44bc-8916-86b425a27cbd","Type":"ContainerStarted","Data":"48baf89d0a5776fb35854b24f12ca1544d0d250398de394c850b09cf7a229ce1"} Mar 19 11:57:03.361296 master-0 kubenswrapper[7454]: I0319 11:57:03.361275 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 11:57:03.364032 master-0 kubenswrapper[7454]: I0319 11:57:03.363997 7454 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-9mpxd_5238840f-3bef-43ad-ae68-ac187f073019/manager/0.log" Mar 19 11:57:03.364231 master-0 kubenswrapper[7454]: I0319 11:57:03.364173 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" event={"ID":"5238840f-3bef-43ad-ae68-ac187f073019","Type":"ContainerStarted","Data":"80a4b06853370526b35bd2b1f042248803efc6dea62506012de0886df3162aa5"} Mar 19 11:57:03.364930 master-0 kubenswrapper[7454]: I0319 11:57:03.364780 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 11:57:04.371571 master-0 kubenswrapper[7454]: I0319 11:57:04.371527 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-6m654_944eac68-e72b-4aed-b5dc-d7d9703178a3/snapshot-controller/0.log" Mar 19 11:57:04.372150 master-0 kubenswrapper[7454]: I0319 11:57:04.371591 7454 generic.go:334] "Generic (PLEG): container finished" podID="944eac68-e72b-4aed-b5dc-d7d9703178a3" containerID="bdf696c39db6c9beaa009fbd69e576a7d8040c99b8de9bd67204a49a32f0a1ba" exitCode=1 Mar 19 11:57:04.372150 master-0 kubenswrapper[7454]: I0319 11:57:04.371696 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" event={"ID":"944eac68-e72b-4aed-b5dc-d7d9703178a3","Type":"ContainerDied","Data":"bdf696c39db6c9beaa009fbd69e576a7d8040c99b8de9bd67204a49a32f0a1ba"} Mar 19 11:57:04.372216 master-0 kubenswrapper[7454]: I0319 11:57:04.372166 7454 scope.go:117] "RemoveContainer" containerID="bdf696c39db6c9beaa009fbd69e576a7d8040c99b8de9bd67204a49a32f0a1ba" Mar 19 11:57:05.381065 master-0 kubenswrapper[7454]: I0319 11:57:05.380979 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-6m654_944eac68-e72b-4aed-b5dc-d7d9703178a3/snapshot-controller/0.log" Mar 19 11:57:05.381065 master-0 kubenswrapper[7454]: I0319 11:57:05.381047 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" event={"ID":"944eac68-e72b-4aed-b5dc-d7d9703178a3","Type":"ContainerStarted","Data":"7b0aee976f8444b82e3c4d17e235fff6c9975468ebf15542296951ae3166eacc"} Mar 19 11:57:05.633440 master-0 kubenswrapper[7454]: I0319 11:57:05.633298 7454 scope.go:117] "RemoveContainer" containerID="24fd9caa7952430318d8f0070bff5d8f9a23ccd510c898e8d4b008fdb27da600" Mar 19 11:57:06.393486 master-0 kubenswrapper[7454]: I0319 11:57:06.393430 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-gx4w8_9ed2dbd1-aec4-4009-917a-933533912ab5/openshift-controller-manager-operator/1.log" Mar 19 11:57:06.394042 master-0 kubenswrapper[7454]: I0319 11:57:06.393490 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8" event={"ID":"9ed2dbd1-aec4-4009-917a-933533912ab5","Type":"ContainerStarted","Data":"0cb96d6164884cc3f0bac4337734cdc20f98a5daca48411010d7f82e0122afa1"} Mar 19 11:57:08.540128 master-0 kubenswrapper[7454]: I0319 11:57:08.540025 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 11:57:09.774880 master-0 kubenswrapper[7454]: I0319 11:57:09.774808 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 11:57:16.243488 master-0 kubenswrapper[7454]: I0319 11:57:16.243414 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fbd5s"] Mar 19 11:57:16.244103 master-0 kubenswrapper[7454]: E0319 11:57:16.243872 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77497070-ffa8-45e5-935d-5281828d6962" containerName="extract-utilities" Mar 19 11:57:16.244103 master-0 kubenswrapper[7454]: I0319 11:57:16.243901 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="77497070-ffa8-45e5-935d-5281828d6962" containerName="extract-utilities" Mar 19 11:57:16.244103 master-0 kubenswrapper[7454]: E0319 11:57:16.243925 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="903d114c-199f-46f9-b39b-afa52df71ea9" containerName="registry-server" Mar 19 11:57:16.244103 master-0 kubenswrapper[7454]: I0319 11:57:16.243942 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="903d114c-199f-46f9-b39b-afa52df71ea9" containerName="registry-server" Mar 19 11:57:16.244103 master-0 kubenswrapper[7454]: E0319 11:57:16.243973 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7fd0b13-489f-42b7-a52a-6194fdc9f665" containerName="installer" Mar 19 11:57:16.244103 master-0 kubenswrapper[7454]: I0319 11:57:16.243992 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7fd0b13-489f-42b7-a52a-6194fdc9f665" containerName="installer" Mar 19 11:57:16.244103 master-0 kubenswrapper[7454]: E0319 11:57:16.244016 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77497070-ffa8-45e5-935d-5281828d6962" containerName="extract-content" Mar 19 11:57:16.244103 master-0 kubenswrapper[7454]: I0319 11:57:16.244033 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="77497070-ffa8-45e5-935d-5281828d6962" containerName="extract-content" Mar 19 11:57:16.244103 master-0 kubenswrapper[7454]: E0319 11:57:16.244062 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77497070-ffa8-45e5-935d-5281828d6962" containerName="registry-server" Mar 19 11:57:16.244103 master-0 kubenswrapper[7454]: I0319 11:57:16.244078 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="77497070-ffa8-45e5-935d-5281828d6962" containerName="registry-server" Mar 19 11:57:16.244103 master-0 kubenswrapper[7454]: E0319 11:57:16.244105 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db75b266-69c4-4790-82f1-43168b5bb6a0" containerName="extract-utilities" Mar 19 11:57:16.244541 master-0 kubenswrapper[7454]: I0319 11:57:16.244157 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="db75b266-69c4-4790-82f1-43168b5bb6a0" containerName="extract-utilities" Mar 19 11:57:16.244541 master-0 kubenswrapper[7454]: E0319 11:57:16.244176 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1370cf76-52c4-4f19-8dfc-794f2901f8a6" containerName="extract-utilities" Mar 19 11:57:16.244541 master-0 kubenswrapper[7454]: I0319 11:57:16.244193 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="1370cf76-52c4-4f19-8dfc-794f2901f8a6" containerName="extract-utilities" Mar 19 11:57:16.244541 master-0 kubenswrapper[7454]: E0319 11:57:16.244219 7454 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="903d114c-199f-46f9-b39b-afa52df71ea9" containerName="extract-utilities" Mar 19 11:57:16.244541 master-0 kubenswrapper[7454]: I0319 11:57:16.244236 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="903d114c-199f-46f9-b39b-afa52df71ea9" containerName="extract-utilities" Mar 19 11:57:16.244541 master-0 kubenswrapper[7454]: E0319 11:57:16.244261 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b49f09f-2efa-4657-9f5a-fbddd42bee0d" containerName="installer" Mar 19 11:57:16.244541 master-0 kubenswrapper[7454]: I0319 11:57:16.244278 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b49f09f-2efa-4657-9f5a-fbddd42bee0d" containerName="installer" Mar 19 11:57:16.244541 master-0 kubenswrapper[7454]: E0319 11:57:16.244355 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e4442dc-19e2-42a3-b5d9-7af7765b1939" containerName="installer" Mar 19 11:57:16.244541 master-0 kubenswrapper[7454]: I0319 11:57:16.244376 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e4442dc-19e2-42a3-b5d9-7af7765b1939" containerName="installer" Mar 19 11:57:16.244913 master-0 kubenswrapper[7454]: E0319 11:57:16.244567 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db75b266-69c4-4790-82f1-43168b5bb6a0" containerName="extract-content" Mar 19 11:57:16.244913 master-0 kubenswrapper[7454]: I0319 11:57:16.244595 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="db75b266-69c4-4790-82f1-43168b5bb6a0" containerName="extract-content" Mar 19 11:57:16.244913 master-0 kubenswrapper[7454]: E0319 11:57:16.244615 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="903d114c-199f-46f9-b39b-afa52df71ea9" containerName="extract-content" Mar 19 11:57:16.244913 master-0 kubenswrapper[7454]: I0319 11:57:16.244632 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="903d114c-199f-46f9-b39b-afa52df71ea9" containerName="extract-content" Mar 19 11:57:16.244913 master-0 kubenswrapper[7454]: E0319 11:57:16.244659 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1370cf76-52c4-4f19-8dfc-794f2901f8a6" containerName="registry-server" Mar 19 11:57:16.244913 master-0 kubenswrapper[7454]: I0319 11:57:16.244677 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="1370cf76-52c4-4f19-8dfc-794f2901f8a6" containerName="registry-server" Mar 19 11:57:16.244913 master-0 kubenswrapper[7454]: E0319 11:57:16.244704 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11f83dfb-da04-483f-b281-ebdb39f3ab27" containerName="installer" Mar 19 11:57:16.244913 master-0 kubenswrapper[7454]: I0319 11:57:16.244722 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="11f83dfb-da04-483f-b281-ebdb39f3ab27" containerName="installer" Mar 19 11:57:16.244913 master-0 kubenswrapper[7454]: E0319 11:57:16.244756 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1370cf76-52c4-4f19-8dfc-794f2901f8a6" containerName="extract-content" Mar 19 11:57:16.244913 master-0 kubenswrapper[7454]: I0319 11:57:16.244774 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="1370cf76-52c4-4f19-8dfc-794f2901f8a6" containerName="extract-content" Mar 19 11:57:16.244913 master-0 kubenswrapper[7454]: E0319 11:57:16.244830 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="632bdf3b-0ba0-4874-a2ec-8396683c35c5" containerName="installer" Mar 19 11:57:16.244913 master-0 kubenswrapper[7454]: I0319 11:57:16.244849 7454 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="632bdf3b-0ba0-4874-a2ec-8396683c35c5" containerName="installer" Mar 19 11:57:16.247668 master-0 kubenswrapper[7454]: I0319 11:57:16.247627 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7fd0b13-489f-42b7-a52a-6194fdc9f665" containerName="installer" Mar 19 11:57:16.247824 master-0 kubenswrapper[7454]: I0319 11:57:16.247692 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="db75b266-69c4-4790-82f1-43168b5bb6a0" containerName="extract-content" Mar 19 11:57:16.247824 master-0 kubenswrapper[7454]: I0319 11:57:16.247721 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="1370cf76-52c4-4f19-8dfc-794f2901f8a6" containerName="registry-server" Mar 19 11:57:16.247824 master-0 kubenswrapper[7454]: I0319 11:57:16.247746 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="77497070-ffa8-45e5-935d-5281828d6962" containerName="registry-server" Mar 19 11:57:16.247824 master-0 kubenswrapper[7454]: I0319 11:57:16.247771 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e4442dc-19e2-42a3-b5d9-7af7765b1939" containerName="installer" Mar 19 11:57:16.247824 master-0 kubenswrapper[7454]: I0319 11:57:16.247819 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="632bdf3b-0ba0-4874-a2ec-8396683c35c5" containerName="installer" Mar 19 11:57:16.248013 master-0 kubenswrapper[7454]: I0319 11:57:16.247841 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="11f83dfb-da04-483f-b281-ebdb39f3ab27" containerName="installer" Mar 19 11:57:16.248013 master-0 kubenswrapper[7454]: I0319 11:57:16.247863 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="903d114c-199f-46f9-b39b-afa52df71ea9" containerName="registry-server" Mar 19 11:57:16.248013 master-0 kubenswrapper[7454]: I0319 11:57:16.247882 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b49f09f-2efa-4657-9f5a-fbddd42bee0d" containerName="installer" Mar 19 11:57:16.250039 master-0 kubenswrapper[7454]: I0319 11:57:16.249995 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fbd5s" Mar 19 11:57:16.250565 master-0 kubenswrapper[7454]: I0319 11:57:16.250518 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s22fd"] Mar 19 11:57:16.252335 master-0 kubenswrapper[7454]: I0319 11:57:16.252254 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-lcm2r" Mar 19 11:57:16.252708 master-0 kubenswrapper[7454]: I0319 11:57:16.252667 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s22fd" Mar 19 11:57:16.254090 master-0 kubenswrapper[7454]: I0319 11:57:16.254054 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-v8nqn" Mar 19 11:57:16.258456 master-0 kubenswrapper[7454]: I0319 11:57:16.258419 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tdnkp"] Mar 19 11:57:16.259492 master-0 kubenswrapper[7454]: I0319 11:57:16.259459 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tdnkp" Mar 19 11:57:16.261982 master-0 kubenswrapper[7454]: I0319 11:57:16.261953 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fbd5s"] Mar 19 11:57:16.262285 master-0 kubenswrapper[7454]: I0319 11:57:16.262244 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-gfs2v" Mar 19 11:57:16.316161 master-0 kubenswrapper[7454]: I0319 11:57:16.315660 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tdnkp"] Mar 19 11:57:16.325851 master-0 kubenswrapper[7454]: I0319 11:57:16.324138 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s22fd"] Mar 19 11:57:16.326303 master-0 kubenswrapper[7454]: I0319 11:57:16.326162 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7383e647-63b0-452d-a39b-02ad27a9b053-catalog-content\") pod \"community-operators-s22fd\" (UID: \"7383e647-63b0-452d-a39b-02ad27a9b053\") " pod="openshift-marketplace/community-operators-s22fd" Mar 19 11:57:16.327672 master-0 kubenswrapper[7454]: I0319 11:57:16.326444 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fz85\" (UniqueName: \"kubernetes.io/projected/f05dca6c-7626-4970-a869-4208ff5605a2-kube-api-access-5fz85\") pod \"redhat-operators-fbd5s\" (UID: \"f05dca6c-7626-4970-a869-4208ff5605a2\") " pod="openshift-marketplace/redhat-operators-fbd5s" Mar 19 11:57:16.327672 master-0 kubenswrapper[7454]: I0319 11:57:16.326690 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f05dca6c-7626-4970-a869-4208ff5605a2-utilities\") pod \"redhat-operators-fbd5s\" (UID: \"f05dca6c-7626-4970-a869-4208ff5605a2\") " pod="openshift-marketplace/redhat-operators-fbd5s" Mar 19 11:57:16.327672 master-0 kubenswrapper[7454]: I0319 11:57:16.326938 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c52bbbe7-bc16-432f-a471-bc561083a853-utilities\") pod \"certified-operators-tdnkp\" (UID: \"c52bbbe7-bc16-432f-a471-bc561083a853\") " pod="openshift-marketplace/certified-operators-tdnkp" Mar 19 11:57:16.327672 master-0 kubenswrapper[7454]: I0319 11:57:16.326990 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ztf7\" (UniqueName: \"kubernetes.io/projected/c52bbbe7-bc16-432f-a471-bc561083a853-kube-api-access-4ztf7\") pod \"certified-operators-tdnkp\" (UID: \"c52bbbe7-bc16-432f-a471-bc561083a853\") " pod="openshift-marketplace/certified-operators-tdnkp" Mar 19 11:57:16.328035 master-0 kubenswrapper[7454]: I0319 11:57:16.327853 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f05dca6c-7626-4970-a869-4208ff5605a2-catalog-content\") pod \"redhat-operators-fbd5s\" (UID: \"f05dca6c-7626-4970-a869-4208ff5605a2\") " pod="openshift-marketplace/redhat-operators-fbd5s" Mar 19 11:57:16.328035 master-0 kubenswrapper[7454]: I0319 11:57:16.327920 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c52bbbe7-bc16-432f-a471-bc561083a853-catalog-content\") pod \"certified-operators-tdnkp\" (UID: \"c52bbbe7-bc16-432f-a471-bc561083a853\") " pod="openshift-marketplace/certified-operators-tdnkp" Mar 19 11:57:16.328035 master-0 kubenswrapper[7454]: I0319 11:57:16.327955 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7383e647-63b0-452d-a39b-02ad27a9b053-utilities\") pod \"community-operators-s22fd\" (UID: \"7383e647-63b0-452d-a39b-02ad27a9b053\") " pod="openshift-marketplace/community-operators-s22fd" Mar 19 11:57:16.328035 master-0 kubenswrapper[7454]: I0319 11:57:16.328025 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xz8h\" (UniqueName: \"kubernetes.io/projected/7383e647-63b0-452d-a39b-02ad27a9b053-kube-api-access-2xz8h\") pod \"community-operators-s22fd\" (UID: \"7383e647-63b0-452d-a39b-02ad27a9b053\") " pod="openshift-marketplace/community-operators-s22fd" Mar 19 11:57:16.428916 master-0 kubenswrapper[7454]: I0319 11:57:16.428836 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f05dca6c-7626-4970-a869-4208ff5605a2-utilities\") pod \"redhat-operators-fbd5s\" (UID: \"f05dca6c-7626-4970-a869-4208ff5605a2\") " pod="openshift-marketplace/redhat-operators-fbd5s" Mar 19 11:57:16.428916 master-0 kubenswrapper[7454]: I0319 11:57:16.428895 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c52bbbe7-bc16-432f-a471-bc561083a853-utilities\") pod \"certified-operators-tdnkp\" (UID: \"c52bbbe7-bc16-432f-a471-bc561083a853\") " pod="openshift-marketplace/certified-operators-tdnkp" Mar 19 11:57:16.428916 master-0 kubenswrapper[7454]: I0319 11:57:16.428928 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ztf7\" (UniqueName: \"kubernetes.io/projected/c52bbbe7-bc16-432f-a471-bc561083a853-kube-api-access-4ztf7\") pod \"certified-operators-tdnkp\" (UID: \"c52bbbe7-bc16-432f-a471-bc561083a853\") " pod="openshift-marketplace/certified-operators-tdnkp" Mar 19 11:57:16.429264 master-0 kubenswrapper[7454]: I0319 11:57:16.428973 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f05dca6c-7626-4970-a869-4208ff5605a2-catalog-content\") pod \"redhat-operators-fbd5s\" (UID: \"f05dca6c-7626-4970-a869-4208ff5605a2\") " pod="openshift-marketplace/redhat-operators-fbd5s" Mar 19 11:57:16.429264 master-0 kubenswrapper[7454]: I0319 11:57:16.429116 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c52bbbe7-bc16-432f-a471-bc561083a853-catalog-content\") pod \"certified-operators-tdnkp\" (UID: \"c52bbbe7-bc16-432f-a471-bc561083a853\") " pod="openshift-marketplace/certified-operators-tdnkp" Mar 19 11:57:16.429264 master-0 kubenswrapper[7454]: I0319 11:57:16.429138 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7383e647-63b0-452d-a39b-02ad27a9b053-utilities\") pod \"community-operators-s22fd\" (UID: \"7383e647-63b0-452d-a39b-02ad27a9b053\") " pod="openshift-marketplace/community-operators-s22fd" Mar 19 
11:57:16.429264 master-0 kubenswrapper[7454]: I0319 11:57:16.429162 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xz8h\" (UniqueName: \"kubernetes.io/projected/7383e647-63b0-452d-a39b-02ad27a9b053-kube-api-access-2xz8h\") pod \"community-operators-s22fd\" (UID: \"7383e647-63b0-452d-a39b-02ad27a9b053\") " pod="openshift-marketplace/community-operators-s22fd" Mar 19 11:57:16.429264 master-0 kubenswrapper[7454]: I0319 11:57:16.429189 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7383e647-63b0-452d-a39b-02ad27a9b053-catalog-content\") pod \"community-operators-s22fd\" (UID: \"7383e647-63b0-452d-a39b-02ad27a9b053\") " pod="openshift-marketplace/community-operators-s22fd" Mar 19 11:57:16.429264 master-0 kubenswrapper[7454]: I0319 11:57:16.429206 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fz85\" (UniqueName: \"kubernetes.io/projected/f05dca6c-7626-4970-a869-4208ff5605a2-kube-api-access-5fz85\") pod \"redhat-operators-fbd5s\" (UID: \"f05dca6c-7626-4970-a869-4208ff5605a2\") " pod="openshift-marketplace/redhat-operators-fbd5s" Mar 19 11:57:16.429621 master-0 kubenswrapper[7454]: I0319 11:57:16.429600 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f05dca6c-7626-4970-a869-4208ff5605a2-utilities\") pod \"redhat-operators-fbd5s\" (UID: \"f05dca6c-7626-4970-a869-4208ff5605a2\") " pod="openshift-marketplace/redhat-operators-fbd5s" Mar 19 11:57:16.429767 master-0 kubenswrapper[7454]: I0319 11:57:16.429736 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f05dca6c-7626-4970-a869-4208ff5605a2-catalog-content\") pod \"redhat-operators-fbd5s\" (UID: \"f05dca6c-7626-4970-a869-4208ff5605a2\") " pod="openshift-marketplace/redhat-operators-fbd5s" Mar 19 11:57:16.429914 master-0 kubenswrapper[7454]: I0319 11:57:16.429848 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c52bbbe7-bc16-432f-a471-bc561083a853-utilities\") pod \"certified-operators-tdnkp\" (UID: \"c52bbbe7-bc16-432f-a471-bc561083a853\") " pod="openshift-marketplace/certified-operators-tdnkp" Mar 19 11:57:16.430265 master-0 kubenswrapper[7454]: I0319 11:57:16.430219 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7383e647-63b0-452d-a39b-02ad27a9b053-utilities\") pod \"community-operators-s22fd\" (UID: \"7383e647-63b0-452d-a39b-02ad27a9b053\") " pod="openshift-marketplace/community-operators-s22fd" Mar 19 11:57:16.430335 master-0 kubenswrapper[7454]: I0319 11:57:16.430289 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c52bbbe7-bc16-432f-a471-bc561083a853-catalog-content\") pod \"certified-operators-tdnkp\" (UID: \"c52bbbe7-bc16-432f-a471-bc561083a853\") " pod="openshift-marketplace/certified-operators-tdnkp" Mar 19 11:57:16.430335 master-0 kubenswrapper[7454]: I0319 11:57:16.430308 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7383e647-63b0-452d-a39b-02ad27a9b053-catalog-content\") pod \"community-operators-s22fd\" (UID: \"7383e647-63b0-452d-a39b-02ad27a9b053\") " 
pod="openshift-marketplace/community-operators-s22fd" Mar 19 11:57:16.453321 master-0 kubenswrapper[7454]: I0319 11:57:16.453242 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xz8h\" (UniqueName: \"kubernetes.io/projected/7383e647-63b0-452d-a39b-02ad27a9b053-kube-api-access-2xz8h\") pod \"community-operators-s22fd\" (UID: \"7383e647-63b0-452d-a39b-02ad27a9b053\") " pod="openshift-marketplace/community-operators-s22fd" Mar 19 11:57:16.454922 master-0 kubenswrapper[7454]: I0319 11:57:16.454879 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fz85\" (UniqueName: \"kubernetes.io/projected/f05dca6c-7626-4970-a869-4208ff5605a2-kube-api-access-5fz85\") pod \"redhat-operators-fbd5s\" (UID: \"f05dca6c-7626-4970-a869-4208ff5605a2\") " pod="openshift-marketplace/redhat-operators-fbd5s" Mar 19 11:57:16.456714 master-0 kubenswrapper[7454]: I0319 11:57:16.456671 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ztf7\" (UniqueName: \"kubernetes.io/projected/c52bbbe7-bc16-432f-a471-bc561083a853-kube-api-access-4ztf7\") pod \"certified-operators-tdnkp\" (UID: \"c52bbbe7-bc16-432f-a471-bc561083a853\") " pod="openshift-marketplace/certified-operators-tdnkp" Mar 19 11:57:16.642742 master-0 kubenswrapper[7454]: I0319 11:57:16.642597 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fbd5s" Mar 19 11:57:16.662788 master-0 kubenswrapper[7454]: I0319 11:57:16.662696 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s22fd" Mar 19 11:57:16.675130 master-0 kubenswrapper[7454]: I0319 11:57:16.675064 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tdnkp" Mar 19 11:57:17.066947 master-0 kubenswrapper[7454]: I0319 11:57:17.066854 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fbd5s"] Mar 19 11:57:17.070204 master-0 kubenswrapper[7454]: W0319 11:57:17.070137 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf05dca6c_7626_4970_a869_4208ff5605a2.slice/crio-5971350293b565068e613eaa81b7b38f49914ad973eb8343f33aa9abaed290e9 WatchSource:0}: Error finding container 5971350293b565068e613eaa81b7b38f49914ad973eb8343f33aa9abaed290e9: Status 404 returned error can't find the container with id 5971350293b565068e613eaa81b7b38f49914ad973eb8343f33aa9abaed290e9 Mar 19 11:57:17.141625 master-0 kubenswrapper[7454]: I0319 11:57:17.141481 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s22fd"] Mar 19 11:57:17.189576 master-0 kubenswrapper[7454]: I0319 11:57:17.189521 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tdnkp"] Mar 19 11:57:17.466938 master-0 kubenswrapper[7454]: I0319 11:57:17.466857 7454 generic.go:334] "Generic (PLEG): container finished" podID="f05dca6c-7626-4970-a869-4208ff5605a2" containerID="60bc1dc90b88b8a914cc55873afedd31f4e84b73bea5030f4f1cb08c053d6c7d" exitCode=0 Mar 19 11:57:17.467664 master-0 kubenswrapper[7454]: I0319 11:57:17.466942 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbd5s" event={"ID":"f05dca6c-7626-4970-a869-4208ff5605a2","Type":"ContainerDied","Data":"60bc1dc90b88b8a914cc55873afedd31f4e84b73bea5030f4f1cb08c053d6c7d"} Mar 19 11:57:17.467664 master-0 kubenswrapper[7454]: I0319 11:57:17.467036 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbd5s" event={"ID":"f05dca6c-7626-4970-a869-4208ff5605a2","Type":"ContainerStarted","Data":"5971350293b565068e613eaa81b7b38f49914ad973eb8343f33aa9abaed290e9"} Mar 19 11:57:17.469889 master-0 kubenswrapper[7454]: I0319 11:57:17.469824 7454 generic.go:334] "Generic (PLEG): container finished" podID="c52bbbe7-bc16-432f-a471-bc561083a853" containerID="6c5c4d40a16417076e4498cb487b735b6cf2450b0bf97275a9d9f7f4cc5ea19e" exitCode=0 Mar 19 11:57:17.470023 master-0 kubenswrapper[7454]: I0319 11:57:17.469993 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tdnkp" event={"ID":"c52bbbe7-bc16-432f-a471-bc561083a853","Type":"ContainerDied","Data":"6c5c4d40a16417076e4498cb487b735b6cf2450b0bf97275a9d9f7f4cc5ea19e"} Mar 19 11:57:17.470104 master-0 kubenswrapper[7454]: I0319 11:57:17.470029 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tdnkp" event={"ID":"c52bbbe7-bc16-432f-a471-bc561083a853","Type":"ContainerStarted","Data":"1bedd36b2e748d7ffe9c8b9ed3a8c9c7331d2765980332a3cebdddee8a321573"} Mar 19 11:57:17.472885 master-0 kubenswrapper[7454]: I0319 11:57:17.472228 7454 generic.go:334] "Generic (PLEG): container finished" podID="7383e647-63b0-452d-a39b-02ad27a9b053" containerID="88999f37d32fea17c2f7cb71f197065956c6e3b527bdca5b8e8d64ee4a63831d" exitCode=0 Mar 19 11:57:17.472885 master-0 kubenswrapper[7454]: I0319 11:57:17.472335 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s22fd" 
event={"ID":"7383e647-63b0-452d-a39b-02ad27a9b053","Type":"ContainerDied","Data":"88999f37d32fea17c2f7cb71f197065956c6e3b527bdca5b8e8d64ee4a63831d"} Mar 19 11:57:17.472885 master-0 kubenswrapper[7454]: I0319 11:57:17.472377 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s22fd" event={"ID":"7383e647-63b0-452d-a39b-02ad27a9b053","Type":"ContainerStarted","Data":"20538e6325cc6dc9adb3e30dce1ce797ed61d07679d7f2cd71ef1bf8c18874ea"} Mar 19 11:57:18.461867 master-0 kubenswrapper[7454]: I0319 11:57:18.459531 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-68bf6ff9d6-djdmh"] Mar 19 11:57:18.461867 master-0 kubenswrapper[7454]: I0319 11:57:18.460648 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-68bf6ff9d6-djdmh" Mar 19 11:57:18.466921 master-0 kubenswrapper[7454]: I0319 11:57:18.466882 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Mar 19 11:57:18.468182 master-0 kubenswrapper[7454]: I0319 11:57:18.467722 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-cmchf" Mar 19 11:57:18.468182 master-0 kubenswrapper[7454]: I0319 11:57:18.467885 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 19 11:57:18.468580 master-0 kubenswrapper[7454]: I0319 11:57:18.468185 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 19 11:57:18.468580 master-0 kubenswrapper[7454]: I0319 11:57:18.468354 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 19 11:57:18.472252 master-0 kubenswrapper[7454]: I0319 11:57:18.472221 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 19 11:57:18.477967 master-0 kubenswrapper[7454]: I0319 11:57:18.477893 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-nr2k4"] Mar 19 11:57:18.482195 master-0 kubenswrapper[7454]: I0319 11:57:18.479138 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-cx8l9"] Mar 19 11:57:18.482195 master-0 kubenswrapper[7454]: I0319 11:57:18.479939 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-cx8l9" Mar 19 11:57:18.482195 master-0 kubenswrapper[7454]: I0319 11:57:18.480397 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-nr2k4" Mar 19 11:57:18.482678 master-0 kubenswrapper[7454]: I0319 11:57:18.482629 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-68bf6ff9d6-djdmh"] Mar 19 11:57:18.500863 master-0 kubenswrapper[7454]: I0319 11:57:18.492869 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 19 11:57:18.500863 master-0 kubenswrapper[7454]: I0319 11:57:18.493190 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 19 11:57:18.500863 master-0 kubenswrapper[7454]: I0319 11:57:18.493328 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-sklzz" Mar 19 11:57:18.500863 master-0 kubenswrapper[7454]: I0319 11:57:18.493541 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 19 11:57:18.500863 master-0 kubenswrapper[7454]: I0319 11:57:18.494238 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 19 11:57:18.500863 master-0 kubenswrapper[7454]: I0319 11:57:18.494362 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 19 11:57:18.500863 master-0 kubenswrapper[7454]: I0319 11:57:18.498967 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-gz8pl" Mar 19 11:57:18.500863 master-0 kubenswrapper[7454]: I0319 11:57:18.499477 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 19 11:57:18.500863 master-0 kubenswrapper[7454]: I0319 11:57:18.499946 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 19 11:57:18.509505 master-0 kubenswrapper[7454]: I0319 11:57:18.508454 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-cx8l9"] Mar 19 11:57:18.527433 master-0 kubenswrapper[7454]: I0319 11:57:18.527360 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-nr2k4"] Mar 19 11:57:18.558919 master-0 kubenswrapper[7454]: I0319 11:57:18.558642 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/667757ee-2670-4019-ad93-156521d3c2e7-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-cx8l9\" (UID: \"667757ee-2670-4019-ad93-156521d3c2e7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-cx8l9" Mar 19 11:57:18.558919 master-0 kubenswrapper[7454]: I0319 11:57:18.558704 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad327a59-7879-4215-bb95-3f2be64cb97f-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-nr2k4\" (UID: \"ad327a59-7879-4215-bb95-3f2be64cb97f\") " 
pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-nr2k4" Mar 19 11:57:18.558919 master-0 kubenswrapper[7454]: I0319 11:57:18.558728 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4264e82c-387f-4aa6-9ef6-b7beb61e098c-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-djdmh\" (UID: \"4264e82c-387f-4aa6-9ef6-b7beb61e098c\") " pod="openshift-insights/insights-operator-68bf6ff9d6-djdmh" Mar 19 11:57:18.558919 master-0 kubenswrapper[7454]: I0319 11:57:18.558762 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/4264e82c-387f-4aa6-9ef6-b7beb61e098c-snapshots\") pod \"insights-operator-68bf6ff9d6-djdmh\" (UID: \"4264e82c-387f-4aa6-9ef6-b7beb61e098c\") " pod="openshift-insights/insights-operator-68bf6ff9d6-djdmh" Mar 19 11:57:18.558919 master-0 kubenswrapper[7454]: I0319 11:57:18.558791 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4264e82c-387f-4aa6-9ef6-b7beb61e098c-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-djdmh\" (UID: \"4264e82c-387f-4aa6-9ef6-b7beb61e098c\") " pod="openshift-insights/insights-operator-68bf6ff9d6-djdmh" Mar 19 11:57:18.558919 master-0 kubenswrapper[7454]: I0319 11:57:18.558886 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc94p\" (UniqueName: \"kubernetes.io/projected/667757ee-2670-4019-ad93-156521d3c2e7-kube-api-access-rc94p\") pod \"cluster-samples-operator-85f7577d78-cx8l9\" (UID: \"667757ee-2670-4019-ad93-156521d3c2e7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-cx8l9" Mar 19 11:57:18.559246 master-0 kubenswrapper[7454]: I0319 11:57:18.558979 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fgj5\" (UniqueName: \"kubernetes.io/projected/ad327a59-7879-4215-bb95-3f2be64cb97f-kube-api-access-9fgj5\") pod \"cloud-credential-operator-744f9dbf77-nr2k4\" (UID: \"ad327a59-7879-4215-bb95-3f2be64cb97f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-nr2k4" Mar 19 11:57:18.559246 master-0 kubenswrapper[7454]: I0319 11:57:18.559007 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4264e82c-387f-4aa6-9ef6-b7beb61e098c-serving-cert\") pod \"insights-operator-68bf6ff9d6-djdmh\" (UID: \"4264e82c-387f-4aa6-9ef6-b7beb61e098c\") " pod="openshift-insights/insights-operator-68bf6ff9d6-djdmh" Mar 19 11:57:18.559246 master-0 kubenswrapper[7454]: I0319 11:57:18.559030 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/ad327a59-7879-4215-bb95-3f2be64cb97f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-nr2k4\" (UID: \"ad327a59-7879-4215-bb95-3f2be64cb97f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-nr2k4" Mar 19 11:57:18.559246 master-0 kubenswrapper[7454]: I0319 11:57:18.559060 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wfsr\" (UniqueName: 
\"kubernetes.io/projected/4264e82c-387f-4aa6-9ef6-b7beb61e098c-kube-api-access-8wfsr\") pod \"insights-operator-68bf6ff9d6-djdmh\" (UID: \"4264e82c-387f-4aa6-9ef6-b7beb61e098c\") " pod="openshift-insights/insights-operator-68bf6ff9d6-djdmh" Mar 19 11:57:18.570640 master-0 kubenswrapper[7454]: I0319 11:57:18.567992 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-5c6485487f-qv29l"] Mar 19 11:57:18.571715 master-0 kubenswrapper[7454]: I0319 11:57:18.571144 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-qv29l" Mar 19 11:57:18.575505 master-0 kubenswrapper[7454]: I0319 11:57:18.573819 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 19 11:57:18.575505 master-0 kubenswrapper[7454]: I0319 11:57:18.573828 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 19 11:57:18.575505 master-0 kubenswrapper[7454]: I0319 11:57:18.573923 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 19 11:57:18.575505 master-0 kubenswrapper[7454]: I0319 11:57:18.573940 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-mc2cj" Mar 19 11:57:18.575505 master-0 kubenswrapper[7454]: I0319 11:57:18.574072 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 19 11:57:18.575505 master-0 kubenswrapper[7454]: I0319 11:57:18.574152 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 19 11:57:18.584623 master-0 kubenswrapper[7454]: I0319 11:57:18.581888 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tql86"] Mar 19 11:57:18.585076 master-0 kubenswrapper[7454]: I0319 11:57:18.582788 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tql86" Mar 19 11:57:18.588385 master-0 kubenswrapper[7454]: I0319 11:57:18.588339 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 19 11:57:18.589404 master-0 kubenswrapper[7454]: I0319 11:57:18.589057 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-67sx5" Mar 19 11:57:18.619082 master-0 kubenswrapper[7454]: I0319 11:57:18.618731 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4"] Mar 19 11:57:18.620956 master-0 kubenswrapper[7454]: I0319 11:57:18.620224 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4" Mar 19 11:57:18.628663 master-0 kubenswrapper[7454]: I0319 11:57:18.628611 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-866dc4744-fblgs"] Mar 19 11:57:18.630047 master-0 kubenswrapper[7454]: I0319 11:57:18.630028 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-fblgs" Mar 19 11:57:18.635602 master-0 kubenswrapper[7454]: I0319 11:57:18.633192 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-6fbb6cf6f9-75w5c"] Mar 19 11:57:18.635602 master-0 kubenswrapper[7454]: I0319 11:57:18.634380 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 19 11:57:18.635602 master-0 kubenswrapper[7454]: I0319 11:57:18.634657 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 19 11:57:18.635602 master-0 kubenswrapper[7454]: I0319 11:57:18.634768 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 19 11:57:18.635602 master-0 kubenswrapper[7454]: I0319 11:57:18.634841 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-dr8qt" Mar 19 11:57:18.635602 master-0 kubenswrapper[7454]: I0319 11:57:18.635396 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 19 11:57:18.635602 master-0 kubenswrapper[7454]: I0319 11:57:18.635538 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 19 11:57:18.636013 master-0 kubenswrapper[7454]: I0319 11:57:18.635662 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 19 11:57:18.636013 master-0 kubenswrapper[7454]: I0319 11:57:18.635758 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-pcp8m" Mar 19 11:57:18.636013 master-0 kubenswrapper[7454]: I0319 11:57:18.635929 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 19 11:57:18.654315 master-0 kubenswrapper[7454]: I0319 11:57:18.653965 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-75w5c" Mar 19 11:57:18.655990 master-0 kubenswrapper[7454]: I0319 11:57:18.655702 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 19 11:57:18.655990 master-0 kubenswrapper[7454]: I0319 11:57:18.655905 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-ww9m4" Mar 19 11:57:18.659071 master-0 kubenswrapper[7454]: I0319 11:57:18.658912 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 19 11:57:18.661958 master-0 kubenswrapper[7454]: I0319 11:57:18.659711 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4264e82c-387f-4aa6-9ef6-b7beb61e098c-serving-cert\") pod \"insights-operator-68bf6ff9d6-djdmh\" (UID: \"4264e82c-387f-4aa6-9ef6-b7beb61e098c\") " pod="openshift-insights/insights-operator-68bf6ff9d6-djdmh" Mar 19 11:57:18.661958 master-0 kubenswrapper[7454]: I0319 11:57:18.659740 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/ad327a59-7879-4215-bb95-3f2be64cb97f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-nr2k4\" (UID: \"ad327a59-7879-4215-bb95-3f2be64cb97f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-nr2k4" Mar 19 11:57:18.661958 master-0 kubenswrapper[7454]: I0319 11:57:18.659771 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wfsr\" (UniqueName: \"kubernetes.io/projected/4264e82c-387f-4aa6-9ef6-b7beb61e098c-kube-api-access-8wfsr\") pod \"insights-operator-68bf6ff9d6-djdmh\" (UID: \"4264e82c-387f-4aa6-9ef6-b7beb61e098c\") " pod="openshift-insights/insights-operator-68bf6ff9d6-djdmh" Mar 19 11:57:18.661958 master-0 kubenswrapper[7454]: I0319 11:57:18.659822 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/667757ee-2670-4019-ad93-156521d3c2e7-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-cx8l9\" (UID: \"667757ee-2670-4019-ad93-156521d3c2e7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-cx8l9" Mar 19 11:57:18.661958 master-0 kubenswrapper[7454]: I0319 11:57:18.659850 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/44469a78-9300-4260-89e9-ea939de1357b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-tql86\" (UID: \"44469a78-9300-4260-89e9-ea939de1357b\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tql86" Mar 19 11:57:18.661958 master-0 kubenswrapper[7454]: I0319 11:57:18.659884 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad327a59-7879-4215-bb95-3f2be64cb97f-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-nr2k4\" (UID: \"ad327a59-7879-4215-bb95-3f2be64cb97f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-nr2k4" Mar 19 11:57:18.661958 master-0 kubenswrapper[7454]: I0319 11:57:18.659904 7454 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4264e82c-387f-4aa6-9ef6-b7beb61e098c-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-djdmh\" (UID: \"4264e82c-387f-4aa6-9ef6-b7beb61e098c\") " pod="openshift-insights/insights-operator-68bf6ff9d6-djdmh" Mar 19 11:57:18.661958 master-0 kubenswrapper[7454]: I0319 11:57:18.659929 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd40498c-f50a-408c-9a50-5d85ae666124-config\") pod \"machine-approver-5c6485487f-qv29l\" (UID: \"fd40498c-f50a-408c-9a50-5d85ae666124\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-qv29l" Mar 19 11:57:18.661958 master-0 kubenswrapper[7454]: I0319 11:57:18.659945 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7zpw\" (UniqueName: \"kubernetes.io/projected/44469a78-9300-4260-89e9-ea939de1357b-kube-api-access-t7zpw\") pod \"control-plane-machine-set-operator-6f97756bc8-tql86\" (UID: \"44469a78-9300-4260-89e9-ea939de1357b\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tql86" Mar 19 11:57:18.661958 master-0 kubenswrapper[7454]: I0319 11:57:18.659974 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/4264e82c-387f-4aa6-9ef6-b7beb61e098c-snapshots\") pod \"insights-operator-68bf6ff9d6-djdmh\" (UID: \"4264e82c-387f-4aa6-9ef6-b7beb61e098c\") " pod="openshift-insights/insights-operator-68bf6ff9d6-djdmh" Mar 19 11:57:18.661958 master-0 kubenswrapper[7454]: I0319 11:57:18.660003 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4264e82c-387f-4aa6-9ef6-b7beb61e098c-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-djdmh\" (UID: \"4264e82c-387f-4aa6-9ef6-b7beb61e098c\") " pod="openshift-insights/insights-operator-68bf6ff9d6-djdmh" Mar 19 11:57:18.661958 master-0 kubenswrapper[7454]: I0319 11:57:18.660123 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fd40498c-f50a-408c-9a50-5d85ae666124-machine-approver-tls\") pod \"machine-approver-5c6485487f-qv29l\" (UID: \"fd40498c-f50a-408c-9a50-5d85ae666124\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-qv29l" Mar 19 11:57:18.661958 master-0 kubenswrapper[7454]: I0319 11:57:18.660704 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad327a59-7879-4215-bb95-3f2be64cb97f-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-nr2k4\" (UID: \"ad327a59-7879-4215-bb95-3f2be64cb97f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-nr2k4" Mar 19 11:57:18.661958 master-0 kubenswrapper[7454]: I0319 11:57:18.661147 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4264e82c-387f-4aa6-9ef6-b7beb61e098c-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-djdmh\" (UID: \"4264e82c-387f-4aa6-9ef6-b7beb61e098c\") " pod="openshift-insights/insights-operator-68bf6ff9d6-djdmh" Mar 19 11:57:18.661958 master-0 kubenswrapper[7454]: I0319 11:57:18.661815 7454 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/4264e82c-387f-4aa6-9ef6-b7beb61e098c-snapshots\") pod \"insights-operator-68bf6ff9d6-djdmh\" (UID: \"4264e82c-387f-4aa6-9ef6-b7beb61e098c\") " pod="openshift-insights/insights-operator-68bf6ff9d6-djdmh" Mar 19 11:57:18.665072 master-0 kubenswrapper[7454]: I0319 11:57:18.664594 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4264e82c-387f-4aa6-9ef6-b7beb61e098c-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-djdmh\" (UID: \"4264e82c-387f-4aa6-9ef6-b7beb61e098c\") " pod="openshift-insights/insights-operator-68bf6ff9d6-djdmh" Mar 19 11:57:18.666526 master-0 kubenswrapper[7454]: I0319 11:57:18.665489 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 19 11:57:18.676911 master-0 kubenswrapper[7454]: I0319 11:57:18.672533 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/667757ee-2670-4019-ad93-156521d3c2e7-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-cx8l9\" (UID: \"667757ee-2670-4019-ad93-156521d3c2e7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-cx8l9" Mar 19 11:57:18.676911 master-0 kubenswrapper[7454]: I0319 11:57:18.672539 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4264e82c-387f-4aa6-9ef6-b7beb61e098c-serving-cert\") pod \"insights-operator-68bf6ff9d6-djdmh\" (UID: \"4264e82c-387f-4aa6-9ef6-b7beb61e098c\") " pod="openshift-insights/insights-operator-68bf6ff9d6-djdmh" Mar 19 11:57:18.676911 master-0 kubenswrapper[7454]: I0319 11:57:18.672703 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc94p\" (UniqueName: \"kubernetes.io/projected/667757ee-2670-4019-ad93-156521d3c2e7-kube-api-access-rc94p\") pod \"cluster-samples-operator-85f7577d78-cx8l9\" (UID: \"667757ee-2670-4019-ad93-156521d3c2e7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-cx8l9" Mar 19 11:57:18.676911 master-0 kubenswrapper[7454]: I0319 11:57:18.673048 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fd40498c-f50a-408c-9a50-5d85ae666124-auth-proxy-config\") pod \"machine-approver-5c6485487f-qv29l\" (UID: \"fd40498c-f50a-408c-9a50-5d85ae666124\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-qv29l" Mar 19 11:57:18.676911 master-0 kubenswrapper[7454]: I0319 11:57:18.673116 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fgj5\" (UniqueName: \"kubernetes.io/projected/ad327a59-7879-4215-bb95-3f2be64cb97f-kube-api-access-9fgj5\") pod \"cloud-credential-operator-744f9dbf77-nr2k4\" (UID: \"ad327a59-7879-4215-bb95-3f2be64cb97f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-nr2k4" Mar 19 11:57:18.676911 master-0 kubenswrapper[7454]: I0319 11:57:18.674051 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rmw5\" (UniqueName: \"kubernetes.io/projected/fd40498c-f50a-408c-9a50-5d85ae666124-kube-api-access-2rmw5\") pod \"machine-approver-5c6485487f-qv29l\" (UID: 
\"fd40498c-f50a-408c-9a50-5d85ae666124\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-qv29l" Mar 19 11:57:18.676911 master-0 kubenswrapper[7454]: I0319 11:57:18.674894 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/ad327a59-7879-4215-bb95-3f2be64cb97f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-nr2k4\" (UID: \"ad327a59-7879-4215-bb95-3f2be64cb97f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-nr2k4" Mar 19 11:57:18.686248 master-0 kubenswrapper[7454]: I0319 11:57:18.686196 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tql86"] Mar 19 11:57:18.686248 master-0 kubenswrapper[7454]: I0319 11:57:18.686234 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-866dc4744-fblgs"] Mar 19 11:57:18.686248 master-0 kubenswrapper[7454]: I0319 11:57:18.686247 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-6fbb6cf6f9-75w5c"] Mar 19 11:57:18.705384 master-0 kubenswrapper[7454]: I0319 11:57:18.703565 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fgj5\" (UniqueName: \"kubernetes.io/projected/ad327a59-7879-4215-bb95-3f2be64cb97f-kube-api-access-9fgj5\") pod \"cloud-credential-operator-744f9dbf77-nr2k4\" (UID: \"ad327a59-7879-4215-bb95-3f2be64cb97f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-nr2k4" Mar 19 11:57:18.710159 master-0 kubenswrapper[7454]: I0319 11:57:18.709568 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc94p\" (UniqueName: \"kubernetes.io/projected/667757ee-2670-4019-ad93-156521d3c2e7-kube-api-access-rc94p\") pod \"cluster-samples-operator-85f7577d78-cx8l9\" (UID: \"667757ee-2670-4019-ad93-156521d3c2e7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-cx8l9" Mar 19 11:57:18.710159 master-0 kubenswrapper[7454]: I0319 11:57:18.710117 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wfsr\" (UniqueName: \"kubernetes.io/projected/4264e82c-387f-4aa6-9ef6-b7beb61e098c-kube-api-access-8wfsr\") pod \"insights-operator-68bf6ff9d6-djdmh\" (UID: \"4264e82c-387f-4aa6-9ef6-b7beb61e098c\") " pod="openshift-insights/insights-operator-68bf6ff9d6-djdmh" Mar 19 11:57:18.776711 master-0 kubenswrapper[7454]: I0319 11:57:18.776487 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd40498c-f50a-408c-9a50-5d85ae666124-config\") pod \"machine-approver-5c6485487f-qv29l\" (UID: \"fd40498c-f50a-408c-9a50-5d85ae666124\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-qv29l" Mar 19 11:57:18.776711 master-0 kubenswrapper[7454]: I0319 11:57:18.776547 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7zpw\" (UniqueName: \"kubernetes.io/projected/44469a78-9300-4260-89e9-ea939de1357b-kube-api-access-t7zpw\") pod \"control-plane-machine-set-operator-6f97756bc8-tql86\" (UID: \"44469a78-9300-4260-89e9-ea939de1357b\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tql86" Mar 19 11:57:18.776711 master-0 kubenswrapper[7454]: I0319 11:57:18.776580 7454 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxw6t\" (UniqueName: \"kubernetes.io/projected/7b2ecb08-a0f9-4127-967c-7087dea4c0f6-kube-api-access-dxw6t\") pod \"machine-api-operator-6fbb6cf6f9-75w5c\" (UID: \"7b2ecb08-a0f9-4127-967c-7087dea4c0f6\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-75w5c" Mar 19 11:57:18.777281 master-0 kubenswrapper[7454]: I0319 11:57:18.776869 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64twc\" (UniqueName: \"kubernetes.io/projected/cf6b6560-1731-4fb1-b3c2-8257002842d6-kube-api-access-64twc\") pod \"cluster-autoscaler-operator-866dc4744-fblgs\" (UID: \"cf6b6560-1731-4fb1-b3c2-8257002842d6\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-fblgs" Mar 19 11:57:18.777281 master-0 kubenswrapper[7454]: I0319 11:57:18.776957 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/891094d1-558d-40b7-ad44-c3b2fef6f859-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4\" (UID: \"891094d1-558d-40b7-ad44-c3b2fef6f859\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4" Mar 19 11:57:18.777281 master-0 kubenswrapper[7454]: I0319 11:57:18.777001 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7b2ecb08-a0f9-4127-967c-7087dea4c0f6-images\") pod \"machine-api-operator-6fbb6cf6f9-75w5c\" (UID: \"7b2ecb08-a0f9-4127-967c-7087dea4c0f6\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-75w5c" Mar 19 11:57:18.777281 master-0 kubenswrapper[7454]: I0319 11:57:18.777032 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/891094d1-558d-40b7-ad44-c3b2fef6f859-images\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4\" (UID: \"891094d1-558d-40b7-ad44-c3b2fef6f859\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4" Mar 19 11:57:18.777281 master-0 kubenswrapper[7454]: I0319 11:57:18.777069 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cf6b6560-1731-4fb1-b3c2-8257002842d6-cert\") pod \"cluster-autoscaler-operator-866dc4744-fblgs\" (UID: \"cf6b6560-1731-4fb1-b3c2-8257002842d6\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-fblgs" Mar 19 11:57:18.777281 master-0 kubenswrapper[7454]: I0319 11:57:18.777105 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fd40498c-f50a-408c-9a50-5d85ae666124-machine-approver-tls\") pod \"machine-approver-5c6485487f-qv29l\" (UID: \"fd40498c-f50a-408c-9a50-5d85ae666124\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-qv29l" Mar 19 11:57:18.777281 master-0 kubenswrapper[7454]: I0319 11:57:18.777144 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9jxv\" (UniqueName: \"kubernetes.io/projected/891094d1-558d-40b7-ad44-c3b2fef6f859-kube-api-access-f9jxv\") pod 
\"cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4\" (UID: \"891094d1-558d-40b7-ad44-c3b2fef6f859\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4" Mar 19 11:57:18.777281 master-0 kubenswrapper[7454]: I0319 11:57:18.777177 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fd40498c-f50a-408c-9a50-5d85ae666124-auth-proxy-config\") pod \"machine-approver-5c6485487f-qv29l\" (UID: \"fd40498c-f50a-408c-9a50-5d85ae666124\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-qv29l" Mar 19 11:57:18.777281 master-0 kubenswrapper[7454]: I0319 11:57:18.777208 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rmw5\" (UniqueName: \"kubernetes.io/projected/fd40498c-f50a-408c-9a50-5d85ae666124-kube-api-access-2rmw5\") pod \"machine-approver-5c6485487f-qv29l\" (UID: \"fd40498c-f50a-408c-9a50-5d85ae666124\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-qv29l" Mar 19 11:57:18.777281 master-0 kubenswrapper[7454]: I0319 11:57:18.777257 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b2ecb08-a0f9-4127-967c-7087dea4c0f6-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-75w5c\" (UID: \"7b2ecb08-a0f9-4127-967c-7087dea4c0f6\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-75w5c" Mar 19 11:57:18.778013 master-0 kubenswrapper[7454]: I0319 11:57:18.777931 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd40498c-f50a-408c-9a50-5d85ae666124-config\") pod \"machine-approver-5c6485487f-qv29l\" (UID: \"fd40498c-f50a-408c-9a50-5d85ae666124\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-qv29l" Mar 19 11:57:18.779192 master-0 kubenswrapper[7454]: I0319 11:57:18.778941 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fd40498c-f50a-408c-9a50-5d85ae666124-auth-proxy-config\") pod \"machine-approver-5c6485487f-qv29l\" (UID: \"fd40498c-f50a-408c-9a50-5d85ae666124\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-qv29l" Mar 19 11:57:18.779192 master-0 kubenswrapper[7454]: I0319 11:57:18.777286 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/891094d1-558d-40b7-ad44-c3b2fef6f859-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4\" (UID: \"891094d1-558d-40b7-ad44-c3b2fef6f859\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4" Mar 19 11:57:18.779192 master-0 kubenswrapper[7454]: I0319 11:57:18.779044 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b2ecb08-a0f9-4127-967c-7087dea4c0f6-config\") pod \"machine-api-operator-6fbb6cf6f9-75w5c\" (UID: \"7b2ecb08-a0f9-4127-967c-7087dea4c0f6\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-75w5c" Mar 19 11:57:18.779192 master-0 kubenswrapper[7454]: I0319 11:57:18.779097 7454 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/44469a78-9300-4260-89e9-ea939de1357b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-tql86\" (UID: \"44469a78-9300-4260-89e9-ea939de1357b\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tql86" Mar 19 11:57:18.779192 master-0 kubenswrapper[7454]: I0319 11:57:18.779163 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/891094d1-558d-40b7-ad44-c3b2fef6f859-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4\" (UID: \"891094d1-558d-40b7-ad44-c3b2fef6f859\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4" Mar 19 11:57:18.779457 master-0 kubenswrapper[7454]: I0319 11:57:18.779219 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cf6b6560-1731-4fb1-b3c2-8257002842d6-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-fblgs\" (UID: \"cf6b6560-1731-4fb1-b3c2-8257002842d6\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-fblgs" Mar 19 11:57:18.781583 master-0 kubenswrapper[7454]: I0319 11:57:18.781126 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fd40498c-f50a-408c-9a50-5d85ae666124-machine-approver-tls\") pod \"machine-approver-5c6485487f-qv29l\" (UID: \"fd40498c-f50a-408c-9a50-5d85ae666124\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-qv29l" Mar 19 11:57:18.782639 master-0 kubenswrapper[7454]: I0319 11:57:18.782559 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/44469a78-9300-4260-89e9-ea939de1357b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-tql86\" (UID: \"44469a78-9300-4260-89e9-ea939de1357b\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tql86" Mar 19 11:57:18.811143 master-0 kubenswrapper[7454]: I0319 11:57:18.811087 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7zpw\" (UniqueName: \"kubernetes.io/projected/44469a78-9300-4260-89e9-ea939de1357b-kube-api-access-t7zpw\") pod \"control-plane-machine-set-operator-6f97756bc8-tql86\" (UID: \"44469a78-9300-4260-89e9-ea939de1357b\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tql86" Mar 19 11:57:18.814977 master-0 kubenswrapper[7454]: I0319 11:57:18.814907 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rmw5\" (UniqueName: \"kubernetes.io/projected/fd40498c-f50a-408c-9a50-5d85ae666124-kube-api-access-2rmw5\") pod \"machine-approver-5c6485487f-qv29l\" (UID: \"fd40498c-f50a-408c-9a50-5d85ae666124\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-qv29l" Mar 19 11:57:18.830602 master-0 kubenswrapper[7454]: I0319 11:57:18.830220 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-68bf6ff9d6-djdmh" Mar 19 11:57:18.854425 master-0 kubenswrapper[7454]: I0319 11:57:18.854298 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-cx8l9" Mar 19 11:57:18.871880 master-0 kubenswrapper[7454]: I0319 11:57:18.871771 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-nr2k4" Mar 19 11:57:18.886375 master-0 kubenswrapper[7454]: I0319 11:57:18.880750 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9jxv\" (UniqueName: \"kubernetes.io/projected/891094d1-558d-40b7-ad44-c3b2fef6f859-kube-api-access-f9jxv\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4\" (UID: \"891094d1-558d-40b7-ad44-c3b2fef6f859\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4" Mar 19 11:57:18.886375 master-0 kubenswrapper[7454]: I0319 11:57:18.880825 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b2ecb08-a0f9-4127-967c-7087dea4c0f6-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-75w5c\" (UID: \"7b2ecb08-a0f9-4127-967c-7087dea4c0f6\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-75w5c" Mar 19 11:57:18.886375 master-0 kubenswrapper[7454]: I0319 11:57:18.881139 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/891094d1-558d-40b7-ad44-c3b2fef6f859-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4\" (UID: \"891094d1-558d-40b7-ad44-c3b2fef6f859\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4" Mar 19 11:57:18.886375 master-0 kubenswrapper[7454]: I0319 11:57:18.881232 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b2ecb08-a0f9-4127-967c-7087dea4c0f6-config\") pod \"machine-api-operator-6fbb6cf6f9-75w5c\" (UID: \"7b2ecb08-a0f9-4127-967c-7087dea4c0f6\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-75w5c" Mar 19 11:57:18.886375 master-0 kubenswrapper[7454]: I0319 11:57:18.881311 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/891094d1-558d-40b7-ad44-c3b2fef6f859-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4\" (UID: \"891094d1-558d-40b7-ad44-c3b2fef6f859\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4" Mar 19 11:57:18.886375 master-0 kubenswrapper[7454]: I0319 11:57:18.881498 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cf6b6560-1731-4fb1-b3c2-8257002842d6-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-fblgs\" (UID: \"cf6b6560-1731-4fb1-b3c2-8257002842d6\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-fblgs" Mar 19 11:57:18.886375 master-0 kubenswrapper[7454]: I0319 11:57:18.881571 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxw6t\" (UniqueName: \"kubernetes.io/projected/7b2ecb08-a0f9-4127-967c-7087dea4c0f6-kube-api-access-dxw6t\") pod \"machine-api-operator-6fbb6cf6f9-75w5c\" (UID: 
\"7b2ecb08-a0f9-4127-967c-7087dea4c0f6\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-75w5c" Mar 19 11:57:18.886375 master-0 kubenswrapper[7454]: I0319 11:57:18.881638 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64twc\" (UniqueName: \"kubernetes.io/projected/cf6b6560-1731-4fb1-b3c2-8257002842d6-kube-api-access-64twc\") pod \"cluster-autoscaler-operator-866dc4744-fblgs\" (UID: \"cf6b6560-1731-4fb1-b3c2-8257002842d6\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-fblgs" Mar 19 11:57:18.886375 master-0 kubenswrapper[7454]: I0319 11:57:18.881709 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/891094d1-558d-40b7-ad44-c3b2fef6f859-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4\" (UID: \"891094d1-558d-40b7-ad44-c3b2fef6f859\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4" Mar 19 11:57:18.886375 master-0 kubenswrapper[7454]: I0319 11:57:18.881860 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7b2ecb08-a0f9-4127-967c-7087dea4c0f6-images\") pod \"machine-api-operator-6fbb6cf6f9-75w5c\" (UID: \"7b2ecb08-a0f9-4127-967c-7087dea4c0f6\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-75w5c" Mar 19 11:57:18.886375 master-0 kubenswrapper[7454]: I0319 11:57:18.881997 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/891094d1-558d-40b7-ad44-c3b2fef6f859-images\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4\" (UID: \"891094d1-558d-40b7-ad44-c3b2fef6f859\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4" Mar 19 11:57:18.886375 master-0 kubenswrapper[7454]: I0319 11:57:18.882117 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cf6b6560-1731-4fb1-b3c2-8257002842d6-cert\") pod \"cluster-autoscaler-operator-866dc4744-fblgs\" (UID: \"cf6b6560-1731-4fb1-b3c2-8257002842d6\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-fblgs" Mar 19 11:57:18.886375 master-0 kubenswrapper[7454]: I0319 11:57:18.882355 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/891094d1-558d-40b7-ad44-c3b2fef6f859-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4\" (UID: \"891094d1-558d-40b7-ad44-c3b2fef6f859\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4" Mar 19 11:57:18.886375 master-0 kubenswrapper[7454]: I0319 11:57:18.882832 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/891094d1-558d-40b7-ad44-c3b2fef6f859-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4\" (UID: \"891094d1-558d-40b7-ad44-c3b2fef6f859\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4" Mar 19 11:57:18.886375 master-0 kubenswrapper[7454]: I0319 11:57:18.883064 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/cf6b6560-1731-4fb1-b3c2-8257002842d6-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-fblgs\" (UID: \"cf6b6560-1731-4fb1-b3c2-8257002842d6\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-fblgs" Mar 19 11:57:18.886375 master-0 kubenswrapper[7454]: I0319 11:57:18.883708 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/891094d1-558d-40b7-ad44-c3b2fef6f859-images\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4\" (UID: \"891094d1-558d-40b7-ad44-c3b2fef6f859\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4" Mar 19 11:57:18.886375 master-0 kubenswrapper[7454]: I0319 11:57:18.883883 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7b2ecb08-a0f9-4127-967c-7087dea4c0f6-images\") pod \"machine-api-operator-6fbb6cf6f9-75w5c\" (UID: \"7b2ecb08-a0f9-4127-967c-7087dea4c0f6\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-75w5c" Mar 19 11:57:18.887423 master-0 kubenswrapper[7454]: I0319 11:57:18.887106 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b2ecb08-a0f9-4127-967c-7087dea4c0f6-config\") pod \"machine-api-operator-6fbb6cf6f9-75w5c\" (UID: \"7b2ecb08-a0f9-4127-967c-7087dea4c0f6\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-75w5c" Mar 19 11:57:18.902819 master-0 kubenswrapper[7454]: I0319 11:57:18.895532 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b2ecb08-a0f9-4127-967c-7087dea4c0f6-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-75w5c\" (UID: \"7b2ecb08-a0f9-4127-967c-7087dea4c0f6\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-75w5c" Mar 19 11:57:18.902819 master-0 kubenswrapper[7454]: I0319 11:57:18.898853 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/891094d1-558d-40b7-ad44-c3b2fef6f859-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4\" (UID: \"891094d1-558d-40b7-ad44-c3b2fef6f859\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4" Mar 19 11:57:18.902819 master-0 kubenswrapper[7454]: I0319 11:57:18.900288 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9jxv\" (UniqueName: \"kubernetes.io/projected/891094d1-558d-40b7-ad44-c3b2fef6f859-kube-api-access-f9jxv\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4\" (UID: \"891094d1-558d-40b7-ad44-c3b2fef6f859\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4" Mar 19 11:57:18.905573 master-0 kubenswrapper[7454]: I0319 11:57:18.905528 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cf6b6560-1731-4fb1-b3c2-8257002842d6-cert\") pod \"cluster-autoscaler-operator-866dc4744-fblgs\" (UID: \"cf6b6560-1731-4fb1-b3c2-8257002842d6\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-fblgs" Mar 19 11:57:18.916941 master-0 kubenswrapper[7454]: I0319 11:57:18.915914 7454 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-64twc\" (UniqueName: \"kubernetes.io/projected/cf6b6560-1731-4fb1-b3c2-8257002842d6-kube-api-access-64twc\") pod \"cluster-autoscaler-operator-866dc4744-fblgs\" (UID: \"cf6b6560-1731-4fb1-b3c2-8257002842d6\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-fblgs" Mar 19 11:57:18.916941 master-0 kubenswrapper[7454]: I0319 11:57:18.916880 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxw6t\" (UniqueName: \"kubernetes.io/projected/7b2ecb08-a0f9-4127-967c-7087dea4c0f6-kube-api-access-dxw6t\") pod \"machine-api-operator-6fbb6cf6f9-75w5c\" (UID: \"7b2ecb08-a0f9-4127-967c-7087dea4c0f6\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-75w5c" Mar 19 11:57:18.935647 master-0 kubenswrapper[7454]: I0319 11:57:18.934200 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-qv29l" Mar 19 11:57:18.965847 master-0 kubenswrapper[7454]: I0319 11:57:18.964393 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tql86" Mar 19 11:57:19.141970 master-0 kubenswrapper[7454]: I0319 11:57:19.141928 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-fblgs" Mar 19 11:57:19.142574 master-0 kubenswrapper[7454]: I0319 11:57:19.142358 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4" Mar 19 11:57:19.196881 master-0 kubenswrapper[7454]: W0319 11:57:19.196776 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod891094d1_558d_40b7_ad44_c3b2fef6f859.slice/crio-bb4c10b9f47152ed83ad689d19722c380edca47f32e2136f5369af5ec33772a4 WatchSource:0}: Error finding container bb4c10b9f47152ed83ad689d19722c380edca47f32e2136f5369af5ec33772a4: Status 404 returned error can't find the container with id bb4c10b9f47152ed83ad689d19722c380edca47f32e2136f5369af5ec33772a4 Mar 19 11:57:19.205559 master-0 kubenswrapper[7454]: I0319 11:57:19.202559 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-75w5c" Mar 19 11:57:19.274194 master-0 kubenswrapper[7454]: I0319 11:57:19.274134 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-6c8df6d4b-qsrjj"] Mar 19 11:57:19.276410 master-0 kubenswrapper[7454]: I0319 11:57:19.275989 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-qsrjj" Mar 19 11:57:19.292854 master-0 kubenswrapper[7454]: I0319 11:57:19.288719 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 19 11:57:19.292854 master-0 kubenswrapper[7454]: I0319 11:57:19.288786 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-hjms6" Mar 19 11:57:19.292854 master-0 kubenswrapper[7454]: I0319 11:57:19.289138 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 19 11:57:19.292854 master-0 kubenswrapper[7454]: I0319 11:57:19.292027 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-6c8df6d4b-qsrjj"] Mar 19 11:57:19.294700 master-0 kubenswrapper[7454]: I0319 11:57:19.294536 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 19 11:57:19.349913 master-0 kubenswrapper[7454]: I0319 11:57:19.347954 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-68bf6ff9d6-djdmh"] Mar 19 11:57:19.391444 master-0 kubenswrapper[7454]: I0319 11:57:19.391375 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/86884445-e29b-492b-8810-b63b938b9170-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-qsrjj\" (UID: \"86884445-e29b-492b-8810-b63b938b9170\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-qsrjj" Mar 19 11:57:19.392633 master-0 kubenswrapper[7454]: I0319 11:57:19.392591 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kcbw\" (UniqueName: \"kubernetes.io/projected/86884445-e29b-492b-8810-b63b938b9170-kube-api-access-5kcbw\") pod \"prometheus-operator-6c8df6d4b-qsrjj\" (UID: \"86884445-e29b-492b-8810-b63b938b9170\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-qsrjj" Mar 19 11:57:19.392718 master-0 kubenswrapper[7454]: I0319 11:57:19.392683 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/86884445-e29b-492b-8810-b63b938b9170-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-qsrjj\" (UID: \"86884445-e29b-492b-8810-b63b938b9170\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-qsrjj" Mar 19 11:57:19.392761 master-0 kubenswrapper[7454]: I0319 11:57:19.392725 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/86884445-e29b-492b-8810-b63b938b9170-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-qsrjj\" (UID: \"86884445-e29b-492b-8810-b63b938b9170\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-qsrjj" Mar 19 11:57:19.396716 master-0 kubenswrapper[7454]: I0319 11:57:19.396659 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-cx8l9"] Mar 19 11:57:19.428346 master-0 kubenswrapper[7454]: W0319 11:57:19.428295 7454 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad327a59_7879_4215_bb95_3f2be64cb97f.slice/crio-3e9cb8897ccc8cd32e99de4908536f646397f9314e55ffb6dadd385187e9f1b0 WatchSource:0}: Error finding container 3e9cb8897ccc8cd32e99de4908536f646397f9314e55ffb6dadd385187e9f1b0: Status 404 returned error can't find the container with id 3e9cb8897ccc8cd32e99de4908536f646397f9314e55ffb6dadd385187e9f1b0 Mar 19 11:57:19.428487 master-0 kubenswrapper[7454]: I0319 11:57:19.428165 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-nr2k4"] Mar 19 11:57:19.510650 master-0 kubenswrapper[7454]: I0319 11:57:19.510443 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/86884445-e29b-492b-8810-b63b938b9170-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-qsrjj\" (UID: \"86884445-e29b-492b-8810-b63b938b9170\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-qsrjj" Mar 19 11:57:19.510650 master-0 kubenswrapper[7454]: I0319 11:57:19.510502 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/86884445-e29b-492b-8810-b63b938b9170-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-qsrjj\" (UID: \"86884445-e29b-492b-8810-b63b938b9170\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-qsrjj" Mar 19 11:57:19.510650 master-0 kubenswrapper[7454]: I0319 11:57:19.510621 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/86884445-e29b-492b-8810-b63b938b9170-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-qsrjj\" (UID: \"86884445-e29b-492b-8810-b63b938b9170\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-qsrjj" Mar 19 11:57:19.510650 master-0 kubenswrapper[7454]: I0319 11:57:19.510651 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kcbw\" (UniqueName: \"kubernetes.io/projected/86884445-e29b-492b-8810-b63b938b9170-kube-api-access-5kcbw\") pod \"prometheus-operator-6c8df6d4b-qsrjj\" (UID: \"86884445-e29b-492b-8810-b63b938b9170\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-qsrjj" Mar 19 11:57:19.513911 master-0 kubenswrapper[7454]: I0319 11:57:19.513866 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/86884445-e29b-492b-8810-b63b938b9170-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-qsrjj\" (UID: \"86884445-e29b-492b-8810-b63b938b9170\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-qsrjj" Mar 19 11:57:19.526930 master-0 kubenswrapper[7454]: I0319 11:57:19.524279 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/86884445-e29b-492b-8810-b63b938b9170-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-qsrjj\" (UID: \"86884445-e29b-492b-8810-b63b938b9170\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-qsrjj" Mar 19 11:57:19.526930 master-0 kubenswrapper[7454]: I0319 11:57:19.526454 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/86884445-e29b-492b-8810-b63b938b9170-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-qsrjj\" (UID: \"86884445-e29b-492b-8810-b63b938b9170\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-qsrjj" Mar 19 11:57:19.536520 master-0 kubenswrapper[7454]: I0319 11:57:19.535914 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kcbw\" (UniqueName: \"kubernetes.io/projected/86884445-e29b-492b-8810-b63b938b9170-kube-api-access-5kcbw\") pod \"prometheus-operator-6c8df6d4b-qsrjj\" (UID: \"86884445-e29b-492b-8810-b63b938b9170\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-qsrjj" Mar 19 11:57:19.560058 master-0 kubenswrapper[7454]: I0319 11:57:19.555876 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tql86"] Mar 19 11:57:19.560058 master-0 kubenswrapper[7454]: I0319 11:57:19.557177 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbd5s" event={"ID":"f05dca6c-7626-4970-a869-4208ff5605a2","Type":"ContainerStarted","Data":"8cc0b059aa2839b58a2ae2c6d2b64bd0a41bd8d8facc9d7c47f7f2b8dedcba42"} Mar 19 11:57:19.576448 master-0 kubenswrapper[7454]: I0319 11:57:19.569775 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-nr2k4" event={"ID":"ad327a59-7879-4215-bb95-3f2be64cb97f","Type":"ContainerStarted","Data":"3e9cb8897ccc8cd32e99de4908536f646397f9314e55ffb6dadd385187e9f1b0"} Mar 19 11:57:19.620834 master-0 kubenswrapper[7454]: I0319 11:57:19.612041 7454 generic.go:334] "Generic (PLEG): container finished" podID="7383e647-63b0-452d-a39b-02ad27a9b053" containerID="851231fe9ccfeac8a5cba3d3576e738d92e2cffbc59eaab8e823a5bea8c281c6" exitCode=0 Mar 19 11:57:19.620834 master-0 kubenswrapper[7454]: I0319 11:57:19.612149 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s22fd" event={"ID":"7383e647-63b0-452d-a39b-02ad27a9b053","Type":"ContainerDied","Data":"851231fe9ccfeac8a5cba3d3576e738d92e2cffbc59eaab8e823a5bea8c281c6"} Mar 19 11:57:19.646834 master-0 kubenswrapper[7454]: I0319 11:57:19.637329 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4" event={"ID":"891094d1-558d-40b7-ad44-c3b2fef6f859","Type":"ContainerStarted","Data":"bb4c10b9f47152ed83ad689d19722c380edca47f32e2136f5369af5ec33772a4"} Mar 19 11:57:19.646834 master-0 kubenswrapper[7454]: I0319 11:57:19.645647 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-djdmh" event={"ID":"4264e82c-387f-4aa6-9ef6-b7beb61e098c","Type":"ContainerStarted","Data":"6b418b5a6ab7d2f0fbb7cd5733cda224a66315648fe46c18f09905494c67309d"} Mar 19 11:57:19.667113 master-0 kubenswrapper[7454]: I0319 11:57:19.667041 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-qv29l" event={"ID":"fd40498c-f50a-408c-9a50-5d85ae666124","Type":"ContainerStarted","Data":"47e3fb5631a40e9b92709ff30c22c315e40cbc372e790281e7ae838990e489ce"} Mar 19 11:57:19.667113 master-0 kubenswrapper[7454]: I0319 11:57:19.667100 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-qv29l" 
event={"ID":"fd40498c-f50a-408c-9a50-5d85ae666124","Type":"ContainerStarted","Data":"efd1c78ff9997efb11562e8d2fb6b9b151d43775e34fa6be423195823f01520e"} Mar 19 11:57:19.696735 master-0 kubenswrapper[7454]: I0319 11:57:19.694181 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-qsrjj" Mar 19 11:57:19.704593 master-0 kubenswrapper[7454]: W0319 11:57:19.704171 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf6b6560_1731_4fb1_b3c2_8257002842d6.slice/crio-28d0f82641cafb71075882375625371208c9e0463ead97b0053c16e9ee43470f WatchSource:0}: Error finding container 28d0f82641cafb71075882375625371208c9e0463ead97b0053c16e9ee43470f: Status 404 returned error can't find the container with id 28d0f82641cafb71075882375625371208c9e0463ead97b0053c16e9ee43470f Mar 19 11:57:19.708651 master-0 kubenswrapper[7454]: I0319 11:57:19.708447 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-866dc4744-fblgs"] Mar 19 11:57:19.765266 master-0 kubenswrapper[7454]: I0319 11:57:19.765125 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-6fbb6cf6f9-75w5c"] Mar 19 11:57:20.196543 master-0 kubenswrapper[7454]: I0319 11:57:20.196485 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-6c8df6d4b-qsrjj"] Mar 19 11:57:20.216555 master-0 kubenswrapper[7454]: W0319 11:57:20.216500 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86884445_e29b_492b_8810_b63b938b9170.slice/crio-4cdef734b9abebf7ad3957d15cc0c1c6f03e77f6869e579c27076c986f6c0a2c WatchSource:0}: Error finding container 4cdef734b9abebf7ad3957d15cc0c1c6f03e77f6869e579c27076c986f6c0a2c: Status 404 returned error can't find the container with id 4cdef734b9abebf7ad3957d15cc0c1c6f03e77f6869e579c27076c986f6c0a2c Mar 19 11:57:20.631981 master-0 kubenswrapper[7454]: I0319 11:57:20.628710 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4"] Mar 19 11:57:20.728822 master-0 kubenswrapper[7454]: I0319 11:57:20.728759 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-fblgs" event={"ID":"cf6b6560-1731-4fb1-b3c2-8257002842d6","Type":"ContainerStarted","Data":"a261d8bdf1e7e76c8b1b20173c8026d62eabfa6b56b8af0e0ad1ecbaa8d3be35"} Mar 19 11:57:20.728822 master-0 kubenswrapper[7454]: I0319 11:57:20.728817 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-fblgs" event={"ID":"cf6b6560-1731-4fb1-b3c2-8257002842d6","Type":"ContainerStarted","Data":"28d0f82641cafb71075882375625371208c9e0463ead97b0053c16e9ee43470f"} Mar 19 11:57:20.732742 master-0 kubenswrapper[7454]: I0319 11:57:20.732703 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-cx8l9" event={"ID":"667757ee-2670-4019-ad93-156521d3c2e7","Type":"ContainerStarted","Data":"8ac7f6216c5921740646509c9d1e443feacb80b056e20b3a4f138b334049ff2c"} Mar 19 11:57:20.734862 master-0 kubenswrapper[7454]: I0319 11:57:20.734775 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-75w5c" event={"ID":"7b2ecb08-a0f9-4127-967c-7087dea4c0f6","Type":"ContainerStarted","Data":"84cb664afef1ffa9eee06b4e53f8ab677638d974a315efd207acbf4961b60b62"} Mar 19 11:57:20.734949 master-0 kubenswrapper[7454]: I0319 11:57:20.734899 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-75w5c" event={"ID":"7b2ecb08-a0f9-4127-967c-7087dea4c0f6","Type":"ContainerStarted","Data":"f40dd28398740e1b8b665d870680e26bbfe5f4e3541ded3a1a95c827cd013960"} Mar 19 11:57:20.736430 master-0 kubenswrapper[7454]: I0319 11:57:20.736386 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-qsrjj" event={"ID":"86884445-e29b-492b-8810-b63b938b9170","Type":"ContainerStarted","Data":"4cdef734b9abebf7ad3957d15cc0c1c6f03e77f6869e579c27076c986f6c0a2c"} Mar 19 11:57:20.746123 master-0 kubenswrapper[7454]: I0319 11:57:20.746046 7454 generic.go:334] "Generic (PLEG): container finished" podID="f05dca6c-7626-4970-a869-4208ff5605a2" containerID="8cc0b059aa2839b58a2ae2c6d2b64bd0a41bd8d8facc9d7c47f7f2b8dedcba42" exitCode=0 Mar 19 11:57:20.746406 master-0 kubenswrapper[7454]: I0319 11:57:20.746190 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbd5s" event={"ID":"f05dca6c-7626-4970-a869-4208ff5605a2","Type":"ContainerDied","Data":"8cc0b059aa2839b58a2ae2c6d2b64bd0a41bd8d8facc9d7c47f7f2b8dedcba42"} Mar 19 11:57:20.750159 master-0 kubenswrapper[7454]: I0319 11:57:20.748273 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-nr2k4" event={"ID":"ad327a59-7879-4215-bb95-3f2be64cb97f","Type":"ContainerStarted","Data":"b542a0a205c4216757caf2ac82ee713e0af9234cbecce9a45c9d2181668c5b5f"} Mar 19 11:57:20.755593 master-0 kubenswrapper[7454]: I0319 11:57:20.755235 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s22fd" event={"ID":"7383e647-63b0-452d-a39b-02ad27a9b053","Type":"ContainerStarted","Data":"99db6b111a8ae8175395fd33144dd0fae5b0fb24a8148c8cce21b262a325ae04"} Mar 19 11:57:20.758398 master-0 kubenswrapper[7454]: I0319 11:57:20.758332 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tql86" event={"ID":"44469a78-9300-4260-89e9-ea939de1357b","Type":"ContainerStarted","Data":"81e5dd60f8e8f398fbc94edc5ee4b7a7c46081fef1fa9b130b775ed3aebea712"} Mar 19 11:57:20.762169 master-0 kubenswrapper[7454]: I0319 11:57:20.762120 7454 generic.go:334] "Generic (PLEG): container finished" podID="c52bbbe7-bc16-432f-a471-bc561083a853" containerID="2a28f91cb7fa0c9891cfe8e8b101fe6954743be580a42629eefdf4e346a6ff36" exitCode=0 Mar 19 11:57:20.762169 master-0 kubenswrapper[7454]: I0319 11:57:20.762166 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tdnkp" event={"ID":"c52bbbe7-bc16-432f-a471-bc561083a853","Type":"ContainerDied","Data":"2a28f91cb7fa0c9891cfe8e8b101fe6954743be580a42629eefdf4e346a6ff36"} Mar 19 11:57:20.849941 master-0 kubenswrapper[7454]: I0319 11:57:20.849791 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s22fd" podStartSLOduration=31.217830632 podStartE2EDuration="33.849758607s" podCreationTimestamp="2026-03-19 11:56:47 +0000 UTC" firstStartedPulling="2026-03-19 11:57:17.473654984 
+0000 UTC m=+207.104120897" lastFinishedPulling="2026-03-19 11:57:20.105582949 +0000 UTC m=+209.736048872" observedRunningTime="2026-03-19 11:57:20.84954593 +0000 UTC m=+210.480011853" watchObservedRunningTime="2026-03-19 11:57:20.849758607 +0000 UTC m=+210.480224520" Mar 19 11:57:23.782580 master-0 kubenswrapper[7454]: I0319 11:57:23.782525 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tdnkp" event={"ID":"c52bbbe7-bc16-432f-a471-bc561083a853","Type":"ContainerStarted","Data":"0019343e8f4400d93f41987413b176dbe918d03e7b86f27caac5590cef35ee85"} Mar 19 11:57:23.787062 master-0 kubenswrapper[7454]: I0319 11:57:23.786668 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tql86" event={"ID":"44469a78-9300-4260-89e9-ea939de1357b","Type":"ContainerStarted","Data":"bcbe72e4cc3e493a5ae6c052d3dcfb298a861d9613583852bbc5958392be50c4"} Mar 19 11:57:23.801785 master-0 kubenswrapper[7454]: I0319 11:57:23.801579 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tdnkp" podStartSLOduration=22.124720132 podStartE2EDuration="27.801560321s" podCreationTimestamp="2026-03-19 11:56:56 +0000 UTC" firstStartedPulling="2026-03-19 11:57:17.472537899 +0000 UTC m=+207.103003812" lastFinishedPulling="2026-03-19 11:57:23.149378088 +0000 UTC m=+212.779844001" observedRunningTime="2026-03-19 11:57:23.801453349 +0000 UTC m=+213.431919262" watchObservedRunningTime="2026-03-19 11:57:23.801560321 +0000 UTC m=+213.432026234" Mar 19 11:57:23.829312 master-0 kubenswrapper[7454]: I0319 11:57:23.829238 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tql86" podStartSLOduration=2.463953035 podStartE2EDuration="5.829200505s" podCreationTimestamp="2026-03-19 11:57:18 +0000 UTC" firstStartedPulling="2026-03-19 11:57:19.62233992 +0000 UTC m=+209.252805833" lastFinishedPulling="2026-03-19 11:57:22.98758739 +0000 UTC m=+212.618053303" observedRunningTime="2026-03-19 11:57:23.827662266 +0000 UTC m=+213.458128209" watchObservedRunningTime="2026-03-19 11:57:23.829200505 +0000 UTC m=+213.459666428" Mar 19 11:57:26.663759 master-0 kubenswrapper[7454]: I0319 11:57:26.663400 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s22fd" Mar 19 11:57:26.663759 master-0 kubenswrapper[7454]: I0319 11:57:26.663703 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s22fd" Mar 19 11:57:26.676253 master-0 kubenswrapper[7454]: I0319 11:57:26.676215 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tdnkp" Mar 19 11:57:26.676333 master-0 kubenswrapper[7454]: I0319 11:57:26.676276 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tdnkp" Mar 19 11:57:26.705535 master-0 kubenswrapper[7454]: I0319 11:57:26.705012 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s22fd" Mar 19 11:57:26.716852 master-0 kubenswrapper[7454]: I0319 11:57:26.716758 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tdnkp" Mar 19 11:57:26.855016 master-0 kubenswrapper[7454]: I0319 
11:57:26.854951 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s22fd" Mar 19 11:57:36.731346 master-0 kubenswrapper[7454]: I0319 11:57:36.731199 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tdnkp" Mar 19 11:57:39.909597 master-0 kubenswrapper[7454]: I0319 11:57:39.909475 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-nr2k4" event={"ID":"ad327a59-7879-4215-bb95-3f2be64cb97f","Type":"ContainerStarted","Data":"7a0aee18c6ade16f5dc3c2a0ca7d68e80c9464dbbdb5b334dc3cb62a886ac05b"} Mar 19 11:57:39.912616 master-0 kubenswrapper[7454]: I0319 11:57:39.912575 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-djdmh" event={"ID":"4264e82c-387f-4aa6-9ef6-b7beb61e098c","Type":"ContainerStarted","Data":"4392eed388b37017bcd8b20c517d84851e3de5ed941a95e348cdc1b041c929fe"} Mar 19 11:57:39.914624 master-0 kubenswrapper[7454]: I0319 11:57:39.914603 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-75w5c" event={"ID":"7b2ecb08-a0f9-4127-967c-7087dea4c0f6","Type":"ContainerStarted","Data":"5d31241cbfd7328f7ef428f31ffdbc26b4a00817c912ba157ca859080b34581f"} Mar 19 11:57:39.917727 master-0 kubenswrapper[7454]: I0319 11:57:39.917701 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-qsrjj" event={"ID":"86884445-e29b-492b-8810-b63b938b9170","Type":"ContainerStarted","Data":"9a33e8bf768939fc576f216c83904eddb4b6108a48fa3ad7d6887c3aeeead5a2"} Mar 19 11:57:39.917813 master-0 kubenswrapper[7454]: I0319 11:57:39.917728 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-qsrjj" event={"ID":"86884445-e29b-492b-8810-b63b938b9170","Type":"ContainerStarted","Data":"1be8c2666854aac943053f72754ad0c3aba91de2e34250e5d62ae8eaba4c2068"} Mar 19 11:57:39.921804 master-0 kubenswrapper[7454]: I0319 11:57:39.921715 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-qv29l" event={"ID":"fd40498c-f50a-408c-9a50-5d85ae666124","Type":"ContainerStarted","Data":"e46402e9e37c366c46da921e8257890f1d201b54bbd07d4bc4010bce5ecefa6c"} Mar 19 11:57:39.927987 master-0 kubenswrapper[7454]: I0319 11:57:39.927920 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbd5s" event={"ID":"f05dca6c-7626-4970-a869-4208ff5605a2","Type":"ContainerStarted","Data":"96977713e154e93d664058f9f0f65da647b96a0bdfa3cdac600f920a4e8fba45"} Mar 19 11:57:39.930121 master-0 kubenswrapper[7454]: I0319 11:57:39.930074 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4" event={"ID":"891094d1-558d-40b7-ad44-c3b2fef6f859","Type":"ContainerStarted","Data":"d4b229fbf42637e1c141bef12988c4d3dec5b17b1b22148bdcd2ae952b3baa97"} Mar 19 11:57:39.930206 master-0 kubenswrapper[7454]: I0319 11:57:39.930124 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4" 
event={"ID":"891094d1-558d-40b7-ad44-c3b2fef6f859","Type":"ContainerStarted","Data":"bd1e4ab19492e89981aaffb8ab1ee3314080ab148b4b9aabd707f5bbcfc1ff03"} Mar 19 11:57:39.932652 master-0 kubenswrapper[7454]: I0319 11:57:39.932626 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-fblgs" event={"ID":"cf6b6560-1731-4fb1-b3c2-8257002842d6","Type":"ContainerStarted","Data":"2967f90f9e5ee8c36ca7fef93b23beb026c519e6def04180aa006bfd822ac758"} Mar 19 11:57:39.934637 master-0 kubenswrapper[7454]: I0319 11:57:39.934616 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-cx8l9" event={"ID":"667757ee-2670-4019-ad93-156521d3c2e7","Type":"ContainerStarted","Data":"fab4e41dbdf7006317173754803d59ea687667d13dc23f66e84993ddd9fb8ddc"} Mar 19 11:57:39.934717 master-0 kubenswrapper[7454]: I0319 11:57:39.934642 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-cx8l9" event={"ID":"667757ee-2670-4019-ad93-156521d3c2e7","Type":"ContainerStarted","Data":"eef877dc83b0dd04cd79603e1f9575dcad4c817c1f02a232d92a837843d742ba"} Mar 19 11:57:40.755580 master-0 kubenswrapper[7454]: I0319 11:57:40.754871 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-nr2k4" podStartSLOduration=3.505372648 podStartE2EDuration="22.754849112s" podCreationTimestamp="2026-03-19 11:57:18 +0000 UTC" firstStartedPulling="2026-03-19 11:57:19.71544108 +0000 UTC m=+209.345906993" lastFinishedPulling="2026-03-19 11:57:38.964917544 +0000 UTC m=+228.595383457" observedRunningTime="2026-03-19 11:57:40.721727217 +0000 UTC m=+230.352193120" watchObservedRunningTime="2026-03-19 11:57:40.754849112 +0000 UTC m=+230.385315015" Mar 19 11:57:40.759451 master-0 kubenswrapper[7454]: I0319 11:57:40.759209 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-68bf6ff9d6-djdmh" podStartSLOduration=3.21750647 podStartE2EDuration="22.759186919s" podCreationTimestamp="2026-03-19 11:57:18 +0000 UTC" firstStartedPulling="2026-03-19 11:57:19.372242084 +0000 UTC m=+209.002707987" lastFinishedPulling="2026-03-19 11:57:38.913922493 +0000 UTC m=+228.544388436" observedRunningTime="2026-03-19 11:57:40.756312748 +0000 UTC m=+230.386778671" watchObservedRunningTime="2026-03-19 11:57:40.759186919 +0000 UTC m=+230.389652832" Mar 19 11:57:40.789094 master-0 kubenswrapper[7454]: I0319 11:57:40.789032 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-qsrjj" podStartSLOduration=3.094251672 podStartE2EDuration="21.789012891s" podCreationTimestamp="2026-03-19 11:57:19 +0000 UTC" firstStartedPulling="2026-03-19 11:57:20.219230567 +0000 UTC m=+209.849696480" lastFinishedPulling="2026-03-19 11:57:38.913991786 +0000 UTC m=+228.544457699" observedRunningTime="2026-03-19 11:57:40.784297852 +0000 UTC m=+230.414763765" watchObservedRunningTime="2026-03-19 11:57:40.789012891 +0000 UTC m=+230.419478804" Mar 19 11:57:40.844002 master-0 kubenswrapper[7454]: I0319 11:57:40.843918 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-fblgs" podStartSLOduration=3.917488812 podStartE2EDuration="22.843896594s" podCreationTimestamp="2026-03-19 11:57:18 +0000 UTC" 
firstStartedPulling="2026-03-19 11:57:19.987536472 +0000 UTC m=+209.618002385" lastFinishedPulling="2026-03-19 11:57:38.913944264 +0000 UTC m=+228.544410167" observedRunningTime="2026-03-19 11:57:40.835315113 +0000 UTC m=+230.465781026" watchObservedRunningTime="2026-03-19 11:57:40.843896594 +0000 UTC m=+230.474362507" Mar 19 11:57:40.869003 master-0 kubenswrapper[7454]: I0319 11:57:40.868927 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fbd5s" podStartSLOduration=32.375563978 podStartE2EDuration="53.868910144s" podCreationTimestamp="2026-03-19 11:56:47 +0000 UTC" firstStartedPulling="2026-03-19 11:57:17.469640257 +0000 UTC m=+207.100106170" lastFinishedPulling="2026-03-19 11:57:38.962986423 +0000 UTC m=+228.593452336" observedRunningTime="2026-03-19 11:57:40.867649414 +0000 UTC m=+230.498115327" watchObservedRunningTime="2026-03-19 11:57:40.868910144 +0000 UTC m=+230.499376057" Mar 19 11:57:40.903945 master-0 kubenswrapper[7454]: I0319 11:57:40.903869 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-75w5c" podStartSLOduration=3.883959492 podStartE2EDuration="22.903847547s" podCreationTimestamp="2026-03-19 11:57:18 +0000 UTC" firstStartedPulling="2026-03-19 11:57:20.003058181 +0000 UTC m=+209.633524094" lastFinishedPulling="2026-03-19 11:57:39.022946236 +0000 UTC m=+228.653412149" observedRunningTime="2026-03-19 11:57:40.900014496 +0000 UTC m=+230.530480409" watchObservedRunningTime="2026-03-19 11:57:40.903847547 +0000 UTC m=+230.534313460" Mar 19 11:57:40.933104 master-0 kubenswrapper[7454]: I0319 11:57:40.933032 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-cx8l9" podStartSLOduration=3.664771971 podStartE2EDuration="22.933014998s" podCreationTimestamp="2026-03-19 11:57:18 +0000 UTC" firstStartedPulling="2026-03-19 11:57:19.645716298 +0000 UTC m=+209.276182201" lastFinishedPulling="2026-03-19 11:57:38.913959315 +0000 UTC m=+228.544425228" observedRunningTime="2026-03-19 11:57:40.927216444 +0000 UTC m=+230.557682357" watchObservedRunningTime="2026-03-19 11:57:40.933014998 +0000 UTC m=+230.563480911" Mar 19 11:57:40.968592 master-0 kubenswrapper[7454]: I0319 11:57:40.968452 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4" event={"ID":"891094d1-558d-40b7-ad44-c3b2fef6f859","Type":"ContainerStarted","Data":"43e668f2c9af3f7f0f89d2f05be1ec65c45f0b4edf2abb0769dbed1ecbc6244b"} Mar 19 11:57:40.970921 master-0 kubenswrapper[7454]: I0319 11:57:40.970822 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-qv29l" podStartSLOduration=3.41263675 podStartE2EDuration="22.970802771s" podCreationTimestamp="2026-03-19 11:57:18 +0000 UTC" firstStartedPulling="2026-03-19 11:57:19.355876446 +0000 UTC m=+208.986342359" lastFinishedPulling="2026-03-19 11:57:38.914042467 +0000 UTC m=+228.544508380" observedRunningTime="2026-03-19 11:57:40.968421966 +0000 UTC m=+230.598887879" watchObservedRunningTime="2026-03-19 11:57:40.970802771 +0000 UTC m=+230.601268684" Mar 19 11:57:40.976368 master-0 kubenswrapper[7454]: I0319 11:57:40.975422 7454 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4" podUID="891094d1-558d-40b7-ad44-c3b2fef6f859" containerName="config-sync-controllers" containerID="cri-o://d4b229fbf42637e1c141bef12988c4d3dec5b17b1b22148bdcd2ae952b3baa97" gracePeriod=30 Mar 19 11:57:40.976368 master-0 kubenswrapper[7454]: I0319 11:57:40.975422 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4" podUID="891094d1-558d-40b7-ad44-c3b2fef6f859" containerName="cluster-cloud-controller-manager" containerID="cri-o://bd1e4ab19492e89981aaffb8ab1ee3314080ab148b4b9aabd707f5bbcfc1ff03" gracePeriod=30 Mar 19 11:57:40.976368 master-0 kubenswrapper[7454]: I0319 11:57:40.975417 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4" podUID="891094d1-558d-40b7-ad44-c3b2fef6f859" containerName="kube-rbac-proxy" containerID="cri-o://43e668f2c9af3f7f0f89d2f05be1ec65c45f0b4edf2abb0769dbed1ecbc6244b" gracePeriod=30 Mar 19 11:57:41.138891 master-0 kubenswrapper[7454]: I0319 11:57:41.138789 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4" Mar 19 11:57:41.230972 master-0 kubenswrapper[7454]: I0319 11:57:41.230742 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/891094d1-558d-40b7-ad44-c3b2fef6f859-cloud-controller-manager-operator-tls\") pod \"891094d1-558d-40b7-ad44-c3b2fef6f859\" (UID: \"891094d1-558d-40b7-ad44-c3b2fef6f859\") " Mar 19 11:57:41.230972 master-0 kubenswrapper[7454]: I0319 11:57:41.230912 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/891094d1-558d-40b7-ad44-c3b2fef6f859-auth-proxy-config\") pod \"891094d1-558d-40b7-ad44-c3b2fef6f859\" (UID: \"891094d1-558d-40b7-ad44-c3b2fef6f859\") " Mar 19 11:57:41.231231 master-0 kubenswrapper[7454]: I0319 11:57:41.231080 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9jxv\" (UniqueName: \"kubernetes.io/projected/891094d1-558d-40b7-ad44-c3b2fef6f859-kube-api-access-f9jxv\") pod \"891094d1-558d-40b7-ad44-c3b2fef6f859\" (UID: \"891094d1-558d-40b7-ad44-c3b2fef6f859\") " Mar 19 11:57:41.231231 master-0 kubenswrapper[7454]: I0319 11:57:41.231173 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/891094d1-558d-40b7-ad44-c3b2fef6f859-images\") pod \"891094d1-558d-40b7-ad44-c3b2fef6f859\" (UID: \"891094d1-558d-40b7-ad44-c3b2fef6f859\") " Mar 19 11:57:41.231320 master-0 kubenswrapper[7454]: I0319 11:57:41.231254 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/891094d1-558d-40b7-ad44-c3b2fef6f859-host-etc-kube\") pod \"891094d1-558d-40b7-ad44-c3b2fef6f859\" (UID: \"891094d1-558d-40b7-ad44-c3b2fef6f859\") " Mar 19 11:57:41.231968 master-0 kubenswrapper[7454]: I0319 11:57:41.231917 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/891094d1-558d-40b7-ad44-c3b2fef6f859-host-etc-kube" (OuterVolumeSpecName: "host-etc-kube") pod "891094d1-558d-40b7-ad44-c3b2fef6f859" (UID: "891094d1-558d-40b7-ad44-c3b2fef6f859"). InnerVolumeSpecName "host-etc-kube". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:57:41.233597 master-0 kubenswrapper[7454]: I0319 11:57:41.233535 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/891094d1-558d-40b7-ad44-c3b2fef6f859-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "891094d1-558d-40b7-ad44-c3b2fef6f859" (UID: "891094d1-558d-40b7-ad44-c3b2fef6f859"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 11:57:41.233775 master-0 kubenswrapper[7454]: I0319 11:57:41.233719 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/891094d1-558d-40b7-ad44-c3b2fef6f859-images" (OuterVolumeSpecName: "images") pod "891094d1-558d-40b7-ad44-c3b2fef6f859" (UID: "891094d1-558d-40b7-ad44-c3b2fef6f859"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 11:57:41.235923 master-0 kubenswrapper[7454]: I0319 11:57:41.235891 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/891094d1-558d-40b7-ad44-c3b2fef6f859-cloud-controller-manager-operator-tls" (OuterVolumeSpecName: "cloud-controller-manager-operator-tls") pod "891094d1-558d-40b7-ad44-c3b2fef6f859" (UID: "891094d1-558d-40b7-ad44-c3b2fef6f859"). InnerVolumeSpecName "cloud-controller-manager-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 11:57:41.238026 master-0 kubenswrapper[7454]: I0319 11:57:41.237967 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/891094d1-558d-40b7-ad44-c3b2fef6f859-kube-api-access-f9jxv" (OuterVolumeSpecName: "kube-api-access-f9jxv") pod "891094d1-558d-40b7-ad44-c3b2fef6f859" (UID: "891094d1-558d-40b7-ad44-c3b2fef6f859"). InnerVolumeSpecName "kube-api-access-f9jxv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 11:57:41.333328 master-0 kubenswrapper[7454]: I0319 11:57:41.333241 7454 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9jxv\" (UniqueName: \"kubernetes.io/projected/891094d1-558d-40b7-ad44-c3b2fef6f859-kube-api-access-f9jxv\") on node \"master-0\" DevicePath \"\"" Mar 19 11:57:41.333328 master-0 kubenswrapper[7454]: I0319 11:57:41.333304 7454 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/891094d1-558d-40b7-ad44-c3b2fef6f859-images\") on node \"master-0\" DevicePath \"\"" Mar 19 11:57:41.333328 master-0 kubenswrapper[7454]: I0319 11:57:41.333319 7454 reconciler_common.go:293] "Volume detached for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/891094d1-558d-40b7-ad44-c3b2fef6f859-host-etc-kube\") on node \"master-0\" DevicePath \"\"" Mar 19 11:57:41.333328 master-0 kubenswrapper[7454]: I0319 11:57:41.333332 7454 reconciler_common.go:293] "Volume detached for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/891094d1-558d-40b7-ad44-c3b2fef6f859-cloud-controller-manager-operator-tls\") on node \"master-0\" DevicePath \"\"" Mar 19 11:57:41.333328 master-0 kubenswrapper[7454]: I0319 11:57:41.333346 7454 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/891094d1-558d-40b7-ad44-c3b2fef6f859-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Mar 19 11:57:41.981003 master-0 kubenswrapper[7454]: I0319 11:57:41.980952 7454 generic.go:334] "Generic (PLEG): container finished" podID="891094d1-558d-40b7-ad44-c3b2fef6f859" containerID="43e668f2c9af3f7f0f89d2f05be1ec65c45f0b4edf2abb0769dbed1ecbc6244b" exitCode=0 Mar 19 11:57:41.981003 master-0 kubenswrapper[7454]: I0319 11:57:41.980988 7454 generic.go:334] "Generic (PLEG): container finished" podID="891094d1-558d-40b7-ad44-c3b2fef6f859" containerID="d4b229fbf42637e1c141bef12988c4d3dec5b17b1b22148bdcd2ae952b3baa97" exitCode=0 Mar 19 11:57:41.981003 master-0 kubenswrapper[7454]: I0319 11:57:41.981000 7454 generic.go:334] "Generic (PLEG): container finished" podID="891094d1-558d-40b7-ad44-c3b2fef6f859" containerID="bd1e4ab19492e89981aaffb8ab1ee3314080ab148b4b9aabd707f5bbcfc1ff03" exitCode=0 Mar 19 11:57:41.981519 master-0 kubenswrapper[7454]: I0319 11:57:41.981026 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4" event={"ID":"891094d1-558d-40b7-ad44-c3b2fef6f859","Type":"ContainerDied","Data":"43e668f2c9af3f7f0f89d2f05be1ec65c45f0b4edf2abb0769dbed1ecbc6244b"} Mar 19 11:57:41.981519 master-0 kubenswrapper[7454]: I0319 11:57:41.981057 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4" event={"ID":"891094d1-558d-40b7-ad44-c3b2fef6f859","Type":"ContainerDied","Data":"d4b229fbf42637e1c141bef12988c4d3dec5b17b1b22148bdcd2ae952b3baa97"} Mar 19 11:57:41.981519 master-0 kubenswrapper[7454]: I0319 11:57:41.981072 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4" event={"ID":"891094d1-558d-40b7-ad44-c3b2fef6f859","Type":"ContainerDied","Data":"bd1e4ab19492e89981aaffb8ab1ee3314080ab148b4b9aabd707f5bbcfc1ff03"} Mar 19 11:57:41.981519 master-0 kubenswrapper[7454]: I0319 
11:57:41.981099 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4" event={"ID":"891094d1-558d-40b7-ad44-c3b2fef6f859","Type":"ContainerDied","Data":"bb4c10b9f47152ed83ad689d19722c380edca47f32e2136f5369af5ec33772a4"} Mar 19 11:57:41.981519 master-0 kubenswrapper[7454]: I0319 11:57:41.981121 7454 scope.go:117] "RemoveContainer" containerID="43e668f2c9af3f7f0f89d2f05be1ec65c45f0b4edf2abb0769dbed1ecbc6244b" Mar 19 11:57:41.981519 master-0 kubenswrapper[7454]: I0319 11:57:41.981257 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4" Mar 19 11:57:41.997077 master-0 kubenswrapper[7454]: I0319 11:57:41.997026 7454 scope.go:117] "RemoveContainer" containerID="d4b229fbf42637e1c141bef12988c4d3dec5b17b1b22148bdcd2ae952b3baa97" Mar 19 11:57:42.032085 master-0 kubenswrapper[7454]: I0319 11:57:42.027859 7454 scope.go:117] "RemoveContainer" containerID="bd1e4ab19492e89981aaffb8ab1ee3314080ab148b4b9aabd707f5bbcfc1ff03" Mar 19 11:57:42.046148 master-0 kubenswrapper[7454]: I0319 11:57:42.046043 7454 scope.go:117] "RemoveContainer" containerID="43e668f2c9af3f7f0f89d2f05be1ec65c45f0b4edf2abb0769dbed1ecbc6244b" Mar 19 11:57:42.048201 master-0 kubenswrapper[7454]: E0319 11:57:42.047200 7454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43e668f2c9af3f7f0f89d2f05be1ec65c45f0b4edf2abb0769dbed1ecbc6244b\": container with ID starting with 43e668f2c9af3f7f0f89d2f05be1ec65c45f0b4edf2abb0769dbed1ecbc6244b not found: ID does not exist" containerID="43e668f2c9af3f7f0f89d2f05be1ec65c45f0b4edf2abb0769dbed1ecbc6244b" Mar 19 11:57:42.048201 master-0 kubenswrapper[7454]: I0319 11:57:42.047247 7454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43e668f2c9af3f7f0f89d2f05be1ec65c45f0b4edf2abb0769dbed1ecbc6244b"} err="failed to get container status \"43e668f2c9af3f7f0f89d2f05be1ec65c45f0b4edf2abb0769dbed1ecbc6244b\": rpc error: code = NotFound desc = could not find container \"43e668f2c9af3f7f0f89d2f05be1ec65c45f0b4edf2abb0769dbed1ecbc6244b\": container with ID starting with 43e668f2c9af3f7f0f89d2f05be1ec65c45f0b4edf2abb0769dbed1ecbc6244b not found: ID does not exist" Mar 19 11:57:42.048201 master-0 kubenswrapper[7454]: I0319 11:57:42.047275 7454 scope.go:117] "RemoveContainer" containerID="d4b229fbf42637e1c141bef12988c4d3dec5b17b1b22148bdcd2ae952b3baa97" Mar 19 11:57:42.048201 master-0 kubenswrapper[7454]: E0319 11:57:42.047569 7454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4b229fbf42637e1c141bef12988c4d3dec5b17b1b22148bdcd2ae952b3baa97\": container with ID starting with d4b229fbf42637e1c141bef12988c4d3dec5b17b1b22148bdcd2ae952b3baa97 not found: ID does not exist" containerID="d4b229fbf42637e1c141bef12988c4d3dec5b17b1b22148bdcd2ae952b3baa97" Mar 19 11:57:42.048201 master-0 kubenswrapper[7454]: I0319 11:57:42.047592 7454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4b229fbf42637e1c141bef12988c4d3dec5b17b1b22148bdcd2ae952b3baa97"} err="failed to get container status \"d4b229fbf42637e1c141bef12988c4d3dec5b17b1b22148bdcd2ae952b3baa97\": rpc error: code = NotFound desc = could not find container 
\"d4b229fbf42637e1c141bef12988c4d3dec5b17b1b22148bdcd2ae952b3baa97\": container with ID starting with d4b229fbf42637e1c141bef12988c4d3dec5b17b1b22148bdcd2ae952b3baa97 not found: ID does not exist" Mar 19 11:57:42.048201 master-0 kubenswrapper[7454]: I0319 11:57:42.047608 7454 scope.go:117] "RemoveContainer" containerID="bd1e4ab19492e89981aaffb8ab1ee3314080ab148b4b9aabd707f5bbcfc1ff03" Mar 19 11:57:42.048201 master-0 kubenswrapper[7454]: E0319 11:57:42.047878 7454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd1e4ab19492e89981aaffb8ab1ee3314080ab148b4b9aabd707f5bbcfc1ff03\": container with ID starting with bd1e4ab19492e89981aaffb8ab1ee3314080ab148b4b9aabd707f5bbcfc1ff03 not found: ID does not exist" containerID="bd1e4ab19492e89981aaffb8ab1ee3314080ab148b4b9aabd707f5bbcfc1ff03" Mar 19 11:57:42.048201 master-0 kubenswrapper[7454]: I0319 11:57:42.047900 7454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd1e4ab19492e89981aaffb8ab1ee3314080ab148b4b9aabd707f5bbcfc1ff03"} err="failed to get container status \"bd1e4ab19492e89981aaffb8ab1ee3314080ab148b4b9aabd707f5bbcfc1ff03\": rpc error: code = NotFound desc = could not find container \"bd1e4ab19492e89981aaffb8ab1ee3314080ab148b4b9aabd707f5bbcfc1ff03\": container with ID starting with bd1e4ab19492e89981aaffb8ab1ee3314080ab148b4b9aabd707f5bbcfc1ff03 not found: ID does not exist" Mar 19 11:57:42.048201 master-0 kubenswrapper[7454]: I0319 11:57:42.047917 7454 scope.go:117] "RemoveContainer" containerID="43e668f2c9af3f7f0f89d2f05be1ec65c45f0b4edf2abb0769dbed1ecbc6244b" Mar 19 11:57:42.048671 master-0 kubenswrapper[7454]: I0319 11:57:42.048315 7454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43e668f2c9af3f7f0f89d2f05be1ec65c45f0b4edf2abb0769dbed1ecbc6244b"} err="failed to get container status \"43e668f2c9af3f7f0f89d2f05be1ec65c45f0b4edf2abb0769dbed1ecbc6244b\": rpc error: code = NotFound desc = could not find container \"43e668f2c9af3f7f0f89d2f05be1ec65c45f0b4edf2abb0769dbed1ecbc6244b\": container with ID starting with 43e668f2c9af3f7f0f89d2f05be1ec65c45f0b4edf2abb0769dbed1ecbc6244b not found: ID does not exist" Mar 19 11:57:42.048671 master-0 kubenswrapper[7454]: I0319 11:57:42.048368 7454 scope.go:117] "RemoveContainer" containerID="d4b229fbf42637e1c141bef12988c4d3dec5b17b1b22148bdcd2ae952b3baa97" Mar 19 11:57:42.048901 master-0 kubenswrapper[7454]: I0319 11:57:42.048785 7454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4b229fbf42637e1c141bef12988c4d3dec5b17b1b22148bdcd2ae952b3baa97"} err="failed to get container status \"d4b229fbf42637e1c141bef12988c4d3dec5b17b1b22148bdcd2ae952b3baa97\": rpc error: code = NotFound desc = could not find container \"d4b229fbf42637e1c141bef12988c4d3dec5b17b1b22148bdcd2ae952b3baa97\": container with ID starting with d4b229fbf42637e1c141bef12988c4d3dec5b17b1b22148bdcd2ae952b3baa97 not found: ID does not exist" Mar 19 11:57:42.048968 master-0 kubenswrapper[7454]: I0319 11:57:42.048899 7454 scope.go:117] "RemoveContainer" containerID="bd1e4ab19492e89981aaffb8ab1ee3314080ab148b4b9aabd707f5bbcfc1ff03" Mar 19 11:57:42.049653 master-0 kubenswrapper[7454]: I0319 11:57:42.049254 7454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd1e4ab19492e89981aaffb8ab1ee3314080ab148b4b9aabd707f5bbcfc1ff03"} err="failed to get container status 
\"bd1e4ab19492e89981aaffb8ab1ee3314080ab148b4b9aabd707f5bbcfc1ff03\": rpc error: code = NotFound desc = could not find container \"bd1e4ab19492e89981aaffb8ab1ee3314080ab148b4b9aabd707f5bbcfc1ff03\": container with ID starting with bd1e4ab19492e89981aaffb8ab1ee3314080ab148b4b9aabd707f5bbcfc1ff03 not found: ID does not exist" Mar 19 11:57:42.049653 master-0 kubenswrapper[7454]: I0319 11:57:42.049300 7454 scope.go:117] "RemoveContainer" containerID="43e668f2c9af3f7f0f89d2f05be1ec65c45f0b4edf2abb0769dbed1ecbc6244b" Mar 19 11:57:42.050922 master-0 kubenswrapper[7454]: I0319 11:57:42.050197 7454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43e668f2c9af3f7f0f89d2f05be1ec65c45f0b4edf2abb0769dbed1ecbc6244b"} err="failed to get container status \"43e668f2c9af3f7f0f89d2f05be1ec65c45f0b4edf2abb0769dbed1ecbc6244b\": rpc error: code = NotFound desc = could not find container \"43e668f2c9af3f7f0f89d2f05be1ec65c45f0b4edf2abb0769dbed1ecbc6244b\": container with ID starting with 43e668f2c9af3f7f0f89d2f05be1ec65c45f0b4edf2abb0769dbed1ecbc6244b not found: ID does not exist" Mar 19 11:57:42.050922 master-0 kubenswrapper[7454]: I0319 11:57:42.050222 7454 scope.go:117] "RemoveContainer" containerID="d4b229fbf42637e1c141bef12988c4d3dec5b17b1b22148bdcd2ae952b3baa97" Mar 19 11:57:42.050922 master-0 kubenswrapper[7454]: I0319 11:57:42.050553 7454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4b229fbf42637e1c141bef12988c4d3dec5b17b1b22148bdcd2ae952b3baa97"} err="failed to get container status \"d4b229fbf42637e1c141bef12988c4d3dec5b17b1b22148bdcd2ae952b3baa97\": rpc error: code = NotFound desc = could not find container \"d4b229fbf42637e1c141bef12988c4d3dec5b17b1b22148bdcd2ae952b3baa97\": container with ID starting with d4b229fbf42637e1c141bef12988c4d3dec5b17b1b22148bdcd2ae952b3baa97 not found: ID does not exist" Mar 19 11:57:42.050922 master-0 kubenswrapper[7454]: I0319 11:57:42.050577 7454 scope.go:117] "RemoveContainer" containerID="bd1e4ab19492e89981aaffb8ab1ee3314080ab148b4b9aabd707f5bbcfc1ff03" Mar 19 11:57:42.050922 master-0 kubenswrapper[7454]: I0319 11:57:42.050855 7454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd1e4ab19492e89981aaffb8ab1ee3314080ab148b4b9aabd707f5bbcfc1ff03"} err="failed to get container status \"bd1e4ab19492e89981aaffb8ab1ee3314080ab148b4b9aabd707f5bbcfc1ff03\": rpc error: code = NotFound desc = could not find container \"bd1e4ab19492e89981aaffb8ab1ee3314080ab148b4b9aabd707f5bbcfc1ff03\": container with ID starting with bd1e4ab19492e89981aaffb8ab1ee3314080ab148b4b9aabd707f5bbcfc1ff03 not found: ID does not exist" Mar 19 11:57:42.050922 master-0 kubenswrapper[7454]: I0319 11:57:42.050892 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4"] Mar 19 11:57:42.056420 master-0 kubenswrapper[7454]: I0319 11:57:42.056365 7454 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-6j2q4"] Mar 19 11:57:42.084706 master-0 kubenswrapper[7454]: I0319 11:57:42.084648 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4"] Mar 19 11:57:42.084946 master-0 kubenswrapper[7454]: E0319 11:57:42.084932 7454 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="891094d1-558d-40b7-ad44-c3b2fef6f859" containerName="config-sync-controllers" Mar 19 11:57:42.084987 master-0 kubenswrapper[7454]: I0319 11:57:42.084951 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="891094d1-558d-40b7-ad44-c3b2fef6f859" containerName="config-sync-controllers" Mar 19 11:57:42.084987 master-0 kubenswrapper[7454]: E0319 11:57:42.084965 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="891094d1-558d-40b7-ad44-c3b2fef6f859" containerName="cluster-cloud-controller-manager" Mar 19 11:57:42.084987 master-0 kubenswrapper[7454]: I0319 11:57:42.084973 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="891094d1-558d-40b7-ad44-c3b2fef6f859" containerName="cluster-cloud-controller-manager" Mar 19 11:57:42.085067 master-0 kubenswrapper[7454]: E0319 11:57:42.085003 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="891094d1-558d-40b7-ad44-c3b2fef6f859" containerName="kube-rbac-proxy" Mar 19 11:57:42.085067 master-0 kubenswrapper[7454]: I0319 11:57:42.085012 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="891094d1-558d-40b7-ad44-c3b2fef6f859" containerName="kube-rbac-proxy" Mar 19 11:57:42.085149 master-0 kubenswrapper[7454]: I0319 11:57:42.085132 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="891094d1-558d-40b7-ad44-c3b2fef6f859" containerName="cluster-cloud-controller-manager" Mar 19 11:57:42.085192 master-0 kubenswrapper[7454]: I0319 11:57:42.085163 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="891094d1-558d-40b7-ad44-c3b2fef6f859" containerName="config-sync-controllers" Mar 19 11:57:42.085192 master-0 kubenswrapper[7454]: I0319 11:57:42.085179 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="891094d1-558d-40b7-ad44-c3b2fef6f859" containerName="kube-rbac-proxy" Mar 19 11:57:42.087915 master-0 kubenswrapper[7454]: I0319 11:57:42.086993 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" Mar 19 11:57:42.090136 master-0 kubenswrapper[7454]: I0319 11:57:42.089754 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 19 11:57:42.090136 master-0 kubenswrapper[7454]: I0319 11:57:42.089811 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-dr8qt" Mar 19 11:57:42.090136 master-0 kubenswrapper[7454]: I0319 11:57:42.089767 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 19 11:57:42.090136 master-0 kubenswrapper[7454]: I0319 11:57:42.089982 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 19 11:57:42.091873 master-0 kubenswrapper[7454]: I0319 11:57:42.090380 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 19 11:57:42.091873 master-0 kubenswrapper[7454]: I0319 11:57:42.090844 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 19 11:57:42.147895 master-0 kubenswrapper[7454]: I0319 11:57:42.147824 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee3529ac-6135-438b-9334-40c63c1fbd3d-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-84gh4\" (UID: \"ee3529ac-6135-438b-9334-40c63c1fbd3d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" Mar 19 11:57:42.148145 master-0 kubenswrapper[7454]: I0319 11:57:42.147972 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8hpg\" (UniqueName: \"kubernetes.io/projected/ee3529ac-6135-438b-9334-40c63c1fbd3d-kube-api-access-c8hpg\") pod \"cluster-cloud-controller-manager-operator-7dff898856-84gh4\" (UID: \"ee3529ac-6135-438b-9334-40c63c1fbd3d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" Mar 19 11:57:42.148145 master-0 kubenswrapper[7454]: I0319 11:57:42.148055 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee3529ac-6135-438b-9334-40c63c1fbd3d-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-84gh4\" (UID: \"ee3529ac-6135-438b-9334-40c63c1fbd3d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" Mar 19 11:57:42.148145 master-0 kubenswrapper[7454]: I0319 11:57:42.148102 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ee3529ac-6135-438b-9334-40c63c1fbd3d-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-84gh4\" (UID: \"ee3529ac-6135-438b-9334-40c63c1fbd3d\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" Mar 19 11:57:42.148145 master-0 kubenswrapper[7454]: I0319 11:57:42.148121 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/ee3529ac-6135-438b-9334-40c63c1fbd3d-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-84gh4\" (UID: \"ee3529ac-6135-438b-9334-40c63c1fbd3d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" Mar 19 11:57:42.250110 master-0 kubenswrapper[7454]: I0319 11:57:42.249950 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee3529ac-6135-438b-9334-40c63c1fbd3d-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-84gh4\" (UID: \"ee3529ac-6135-438b-9334-40c63c1fbd3d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" Mar 19 11:57:42.250110 master-0 kubenswrapper[7454]: I0319 11:57:42.250033 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8hpg\" (UniqueName: \"kubernetes.io/projected/ee3529ac-6135-438b-9334-40c63c1fbd3d-kube-api-access-c8hpg\") pod \"cluster-cloud-controller-manager-operator-7dff898856-84gh4\" (UID: \"ee3529ac-6135-438b-9334-40c63c1fbd3d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" Mar 19 11:57:42.250110 master-0 kubenswrapper[7454]: I0319 11:57:42.250068 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee3529ac-6135-438b-9334-40c63c1fbd3d-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-84gh4\" (UID: \"ee3529ac-6135-438b-9334-40c63c1fbd3d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" Mar 19 11:57:42.250110 master-0 kubenswrapper[7454]: I0319 11:57:42.250100 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ee3529ac-6135-438b-9334-40c63c1fbd3d-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-84gh4\" (UID: \"ee3529ac-6135-438b-9334-40c63c1fbd3d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" Mar 19 11:57:42.250454 master-0 kubenswrapper[7454]: I0319 11:57:42.250122 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/ee3529ac-6135-438b-9334-40c63c1fbd3d-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-84gh4\" (UID: \"ee3529ac-6135-438b-9334-40c63c1fbd3d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" Mar 19 11:57:42.250454 master-0 kubenswrapper[7454]: I0319 11:57:42.250276 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/ee3529ac-6135-438b-9334-40c63c1fbd3d-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-84gh4\" (UID: \"ee3529ac-6135-438b-9334-40c63c1fbd3d\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" Mar 19 11:57:42.251098 master-0 kubenswrapper[7454]: I0319 11:57:42.251066 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee3529ac-6135-438b-9334-40c63c1fbd3d-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-84gh4\" (UID: \"ee3529ac-6135-438b-9334-40c63c1fbd3d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" Mar 19 11:57:42.251491 master-0 kubenswrapper[7454]: I0319 11:57:42.251457 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ee3529ac-6135-438b-9334-40c63c1fbd3d-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-84gh4\" (UID: \"ee3529ac-6135-438b-9334-40c63c1fbd3d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" Mar 19 11:57:42.261785 master-0 kubenswrapper[7454]: I0319 11:57:42.261735 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee3529ac-6135-438b-9334-40c63c1fbd3d-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-84gh4\" (UID: \"ee3529ac-6135-438b-9334-40c63c1fbd3d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" Mar 19 11:57:42.271503 master-0 kubenswrapper[7454]: I0319 11:57:42.271442 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8hpg\" (UniqueName: \"kubernetes.io/projected/ee3529ac-6135-438b-9334-40c63c1fbd3d-kube-api-access-c8hpg\") pod \"cluster-cloud-controller-manager-operator-7dff898856-84gh4\" (UID: \"ee3529ac-6135-438b-9334-40c63c1fbd3d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" Mar 19 11:57:42.413251 master-0 kubenswrapper[7454]: I0319 11:57:42.413162 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" Mar 19 11:57:42.438861 master-0 kubenswrapper[7454]: W0319 11:57:42.438807 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee3529ac_6135_438b_9334_40c63c1fbd3d.slice/crio-b8ab4adb571de7e6d61b60e1752c759892824492154b5310933386ea2f807133 WatchSource:0}: Error finding container b8ab4adb571de7e6d61b60e1752c759892824492154b5310933386ea2f807133: Status 404 returned error can't find the container with id b8ab4adb571de7e6d61b60e1752c759892824492154b5310933386ea2f807133 Mar 19 11:57:42.623460 master-0 kubenswrapper[7454]: I0319 11:57:42.622951 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h"] Mar 19 11:57:42.624520 master-0 kubenswrapper[7454]: I0319 11:57:42.624471 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h" Mar 19 11:57:42.626766 master-0 kubenswrapper[7454]: I0319 11:57:42.626695 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 19 11:57:42.627111 master-0 kubenswrapper[7454]: I0319 11:57:42.627069 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-shkfs" Mar 19 11:57:42.641821 master-0 kubenswrapper[7454]: I0319 11:57:42.633437 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 19 11:57:42.649826 master-0 kubenswrapper[7454]: I0319 11:57:42.648401 7454 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="891094d1-558d-40b7-ad44-c3b2fef6f859" path="/var/lib/kubelet/pods/891094d1-558d-40b7-ad44-c3b2fef6f859/volumes" Mar 19 11:57:42.649826 master-0 kubenswrapper[7454]: I0319 11:57:42.649138 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-lpndz"] Mar 19 11:57:42.655693 master-0 kubenswrapper[7454]: I0319 11:57:42.650236 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h"] Mar 19 11:57:42.655693 master-0 kubenswrapper[7454]: I0319 11:57:42.650351 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-lpndz" Mar 19 11:57:42.655693 master-0 kubenswrapper[7454]: I0319 11:57:42.652244 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-xbkxv" Mar 19 11:57:42.655693 master-0 kubenswrapper[7454]: I0319 11:57:42.653658 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 19 11:57:42.655693 master-0 kubenswrapper[7454]: I0319 11:57:42.653812 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 19 11:57:42.656181 master-0 kubenswrapper[7454]: I0319 11:57:42.655829 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/de39c80c-acfa-4bc1-a844-95b170169b44-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-k464h\" (UID: \"de39c80c-acfa-4bc1-a844-95b170169b44\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h" Mar 19 11:57:42.656181 master-0 kubenswrapper[7454]: I0319 11:57:42.655889 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x2v6\" (UniqueName: \"kubernetes.io/projected/de39c80c-acfa-4bc1-a844-95b170169b44-kube-api-access-6x2v6\") pod \"openshift-state-metrics-5dc6c74576-k464h\" (UID: \"de39c80c-acfa-4bc1-a844-95b170169b44\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h" Mar 19 11:57:42.656181 master-0 kubenswrapper[7454]: I0319 11:57:42.655954 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/de39c80c-acfa-4bc1-a844-95b170169b44-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-k464h\" (UID: \"de39c80c-acfa-4bc1-a844-95b170169b44\") " 
pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h" Mar 19 11:57:42.656181 master-0 kubenswrapper[7454]: I0319 11:57:42.655993 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/de39c80c-acfa-4bc1-a844-95b170169b44-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-k464h\" (UID: \"de39c80c-acfa-4bc1-a844-95b170169b44\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h" Mar 19 11:57:42.704555 master-0 kubenswrapper[7454]: I0319 11:57:42.704504 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q"] Mar 19 11:57:42.715177 master-0 kubenswrapper[7454]: I0319 11:57:42.714416 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" Mar 19 11:57:42.717432 master-0 kubenswrapper[7454]: I0319 11:57:42.717394 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-48w96" Mar 19 11:57:42.721353 master-0 kubenswrapper[7454]: I0319 11:57:42.721298 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 19 11:57:42.722702 master-0 kubenswrapper[7454]: I0319 11:57:42.722178 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 19 11:57:42.722702 master-0 kubenswrapper[7454]: I0319 11:57:42.722625 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 19 11:57:42.732478 master-0 kubenswrapper[7454]: I0319 11:57:42.732411 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q"] Mar 19 11:57:42.757228 master-0 kubenswrapper[7454]: I0319 11:57:42.757137 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-tls\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 11:57:42.757228 master-0 kubenswrapper[7454]: I0319 11:57:42.757203 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/bb1000ab-4419-43ce-b1b7-8f43413e017f-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" Mar 19 11:57:42.757228 master-0 kubenswrapper[7454]: I0319 11:57:42.757250 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/de39c80c-acfa-4bc1-a844-95b170169b44-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-k464h\" (UID: \"de39c80c-acfa-4bc1-a844-95b170169b44\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h" Mar 19 11:57:42.758244 master-0 kubenswrapper[7454]: I0319 11:57:42.757282 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq9p4\" (UniqueName: 
\"kubernetes.io/projected/a9d191d1-631d-4091-af8b-382283c18a5a-kube-api-access-cq9p4\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 11:57:42.758244 master-0 kubenswrapper[7454]: I0319 11:57:42.757316 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bb1000ab-4419-43ce-b1b7-8f43413e017f-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" Mar 19 11:57:42.758244 master-0 kubenswrapper[7454]: I0319 11:57:42.757340 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-textfile\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 11:57:42.758244 master-0 kubenswrapper[7454]: I0319 11:57:42.757365 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x2v6\" (UniqueName: \"kubernetes.io/projected/de39c80c-acfa-4bc1-a844-95b170169b44-kube-api-access-6x2v6\") pod \"openshift-state-metrics-5dc6c74576-k464h\" (UID: \"de39c80c-acfa-4bc1-a844-95b170169b44\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h" Mar 19 11:57:42.758244 master-0 kubenswrapper[7454]: I0319 11:57:42.757390 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/bb1000ab-4419-43ce-b1b7-8f43413e017f-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" Mar 19 11:57:42.758244 master-0 kubenswrapper[7454]: I0319 11:57:42.757427 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/a9d191d1-631d-4091-af8b-382283c18a5a-root\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 11:57:42.758244 master-0 kubenswrapper[7454]: I0319 11:57:42.757446 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a9d191d1-631d-4091-af8b-382283c18a5a-metrics-client-ca\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 11:57:42.758244 master-0 kubenswrapper[7454]: I0319 11:57:42.757463 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 11:57:42.758244 master-0 kubenswrapper[7454]: I0319 11:57:42.757494 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/de39c80c-acfa-4bc1-a844-95b170169b44-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-k464h\" (UID: \"de39c80c-acfa-4bc1-a844-95b170169b44\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h" Mar 19 11:57:42.758244 master-0 kubenswrapper[7454]: I0319 11:57:42.757525 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/de39c80c-acfa-4bc1-a844-95b170169b44-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-k464h\" (UID: \"de39c80c-acfa-4bc1-a844-95b170169b44\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h" Mar 19 11:57:42.758244 master-0 kubenswrapper[7454]: I0319 11:57:42.757551 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/bb1000ab-4419-43ce-b1b7-8f43413e017f-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" Mar 19 11:57:42.758244 master-0 kubenswrapper[7454]: I0319 11:57:42.757568 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hk8l\" (UniqueName: \"kubernetes.io/projected/bb1000ab-4419-43ce-b1b7-8f43413e017f-kube-api-access-6hk8l\") pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" Mar 19 11:57:42.758244 master-0 kubenswrapper[7454]: I0319 11:57:42.757589 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a9d191d1-631d-4091-af8b-382283c18a5a-sys\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 11:57:42.758244 master-0 kubenswrapper[7454]: I0319 11:57:42.757609 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-wtmp\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 11:57:42.758244 master-0 kubenswrapper[7454]: I0319 11:57:42.757628 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bb1000ab-4419-43ce-b1b7-8f43413e017f-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" Mar 19 11:57:42.758244 master-0 kubenswrapper[7454]: E0319 11:57:42.757839 7454 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: secret "openshift-state-metrics-tls" not found Mar 19 11:57:42.758244 master-0 kubenswrapper[7454]: E0319 11:57:42.757903 7454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de39c80c-acfa-4bc1-a844-95b170169b44-openshift-state-metrics-tls podName:de39c80c-acfa-4bc1-a844-95b170169b44 nodeName:}" failed. 
No retries permitted until 2026-03-19 11:57:43.257879259 +0000 UTC m=+232.888345232 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/de39c80c-acfa-4bc1-a844-95b170169b44-openshift-state-metrics-tls") pod "openshift-state-metrics-5dc6c74576-k464h" (UID: "de39c80c-acfa-4bc1-a844-95b170169b44") : secret "openshift-state-metrics-tls" not found Mar 19 11:57:42.759394 master-0 kubenswrapper[7454]: I0319 11:57:42.759161 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/de39c80c-acfa-4bc1-a844-95b170169b44-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-k464h\" (UID: \"de39c80c-acfa-4bc1-a844-95b170169b44\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h" Mar 19 11:57:42.761555 master-0 kubenswrapper[7454]: I0319 11:57:42.761012 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/de39c80c-acfa-4bc1-a844-95b170169b44-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-k464h\" (UID: \"de39c80c-acfa-4bc1-a844-95b170169b44\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h" Mar 19 11:57:42.783674 master-0 kubenswrapper[7454]: I0319 11:57:42.782462 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x2v6\" (UniqueName: \"kubernetes.io/projected/de39c80c-acfa-4bc1-a844-95b170169b44-kube-api-access-6x2v6\") pod \"openshift-state-metrics-5dc6c74576-k464h\" (UID: \"de39c80c-acfa-4bc1-a844-95b170169b44\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h" Mar 19 11:57:42.867150 master-0 kubenswrapper[7454]: I0319 11:57:42.859682 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/a9d191d1-631d-4091-af8b-382283c18a5a-root\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 11:57:42.867150 master-0 kubenswrapper[7454]: I0319 11:57:42.859741 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a9d191d1-631d-4091-af8b-382283c18a5a-metrics-client-ca\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 11:57:42.867150 master-0 kubenswrapper[7454]: I0319 11:57:42.859821 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/a9d191d1-631d-4091-af8b-382283c18a5a-root\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 11:57:42.867150 master-0 kubenswrapper[7454]: I0319 11:57:42.859869 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 11:57:42.867150 master-0 kubenswrapper[7454]: I0319 11:57:42.859934 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/bb1000ab-4419-43ce-b1b7-8f43413e017f-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" Mar 19 11:57:42.867150 master-0 kubenswrapper[7454]: I0319 11:57:42.859966 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hk8l\" (UniqueName: \"kubernetes.io/projected/bb1000ab-4419-43ce-b1b7-8f43413e017f-kube-api-access-6hk8l\") pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" Mar 19 11:57:42.867150 master-0 kubenswrapper[7454]: I0319 11:57:42.859994 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a9d191d1-631d-4091-af8b-382283c18a5a-sys\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 11:57:42.867150 master-0 kubenswrapper[7454]: I0319 11:57:42.860018 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-wtmp\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 11:57:42.867150 master-0 kubenswrapper[7454]: I0319 11:57:42.860039 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bb1000ab-4419-43ce-b1b7-8f43413e017f-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" Mar 19 11:57:42.867150 master-0 kubenswrapper[7454]: I0319 11:57:42.860067 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-tls\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 11:57:42.867150 master-0 kubenswrapper[7454]: I0319 11:57:42.860092 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/bb1000ab-4419-43ce-b1b7-8f43413e017f-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" Mar 19 11:57:42.867150 master-0 kubenswrapper[7454]: I0319 11:57:42.860140 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq9p4\" (UniqueName: \"kubernetes.io/projected/a9d191d1-631d-4091-af8b-382283c18a5a-kube-api-access-cq9p4\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 11:57:42.867150 master-0 kubenswrapper[7454]: I0319 11:57:42.860168 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bb1000ab-4419-43ce-b1b7-8f43413e017f-kube-state-metrics-kube-rbac-proxy-config\") 
pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" Mar 19 11:57:42.867150 master-0 kubenswrapper[7454]: I0319 11:57:42.860191 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-textfile\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 11:57:42.867150 master-0 kubenswrapper[7454]: I0319 11:57:42.860218 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/bb1000ab-4419-43ce-b1b7-8f43413e017f-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" Mar 19 11:57:42.867150 master-0 kubenswrapper[7454]: I0319 11:57:42.860737 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/bb1000ab-4419-43ce-b1b7-8f43413e017f-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" Mar 19 11:57:42.867150 master-0 kubenswrapper[7454]: I0319 11:57:42.861410 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a9d191d1-631d-4091-af8b-382283c18a5a-metrics-client-ca\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 11:57:42.867150 master-0 kubenswrapper[7454]: I0319 11:57:42.862564 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bb1000ab-4419-43ce-b1b7-8f43413e017f-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" Mar 19 11:57:42.867150 master-0 kubenswrapper[7454]: I0319 11:57:42.863233 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/bb1000ab-4419-43ce-b1b7-8f43413e017f-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" Mar 19 11:57:42.867150 master-0 kubenswrapper[7454]: I0319 11:57:42.863550 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a9d191d1-631d-4091-af8b-382283c18a5a-sys\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 11:57:42.867150 master-0 kubenswrapper[7454]: I0319 11:57:42.863715 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-wtmp\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 11:57:42.867150 master-0 
kubenswrapper[7454]: I0319 11:57:42.865197 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 11:57:42.867933 master-0 kubenswrapper[7454]: I0319 11:57:42.867534 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-tls\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 11:57:42.867933 master-0 kubenswrapper[7454]: I0319 11:57:42.867808 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-textfile\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 11:57:42.871091 master-0 kubenswrapper[7454]: I0319 11:57:42.868400 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bb1000ab-4419-43ce-b1b7-8f43413e017f-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" Mar 19 11:57:42.874296 master-0 kubenswrapper[7454]: I0319 11:57:42.873332 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/bb1000ab-4419-43ce-b1b7-8f43413e017f-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" Mar 19 11:57:42.889581 master-0 kubenswrapper[7454]: I0319 11:57:42.889535 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hk8l\" (UniqueName: \"kubernetes.io/projected/bb1000ab-4419-43ce-b1b7-8f43413e017f-kube-api-access-6hk8l\") pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" Mar 19 11:57:42.904772 master-0 kubenswrapper[7454]: I0319 11:57:42.904724 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq9p4\" (UniqueName: \"kubernetes.io/projected/a9d191d1-631d-4091-af8b-382283c18a5a-kube-api-access-cq9p4\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 11:57:42.990487 master-0 kubenswrapper[7454]: I0319 11:57:42.990365 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" event={"ID":"ee3529ac-6135-438b-9334-40c63c1fbd3d","Type":"ContainerStarted","Data":"10c6568199a7e8563a8238a4394e2eb6a83f98ca431cdeed29a3dfc7601564fd"} Mar 19 11:57:42.990487 master-0 kubenswrapper[7454]: I0319 11:57:42.990426 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" 
event={"ID":"ee3529ac-6135-438b-9334-40c63c1fbd3d","Type":"ContainerStarted","Data":"b8ab4adb571de7e6d61b60e1752c759892824492154b5310933386ea2f807133"} Mar 19 11:57:43.001713 master-0 kubenswrapper[7454]: I0319 11:57:43.001635 7454 generic.go:334] "Generic (PLEG): container finished" podID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerID="2f120a0d94fdbfa9eb3c076343f202eb79687478095e8ae9cb88dc10339e167a" exitCode=0 Mar 19 11:57:43.001713 master-0 kubenswrapper[7454]: I0319 11:57:43.001680 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" event={"ID":"91112ce6-4f9d-44c1-a4e7-fea126554bcf","Type":"ContainerDied","Data":"2f120a0d94fdbfa9eb3c076343f202eb79687478095e8ae9cb88dc10339e167a"} Mar 19 11:57:43.061521 master-0 kubenswrapper[7454]: I0319 11:57:43.061480 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-lpndz" Mar 19 11:57:43.071248 master-0 kubenswrapper[7454]: I0319 11:57:43.071190 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" Mar 19 11:57:43.136969 master-0 kubenswrapper[7454]: W0319 11:57:43.136823 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9d191d1_631d_4091_af8b_382283c18a5a.slice/crio-330def8aa1845ebd7a95a673279619d604275f079a7efa3f16b2060b0fd2594e WatchSource:0}: Error finding container 330def8aa1845ebd7a95a673279619d604275f079a7efa3f16b2060b0fd2594e: Status 404 returned error can't find the container with id 330def8aa1845ebd7a95a673279619d604275f079a7efa3f16b2060b0fd2594e Mar 19 11:57:43.271493 master-0 kubenswrapper[7454]: I0319 11:57:43.271430 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/de39c80c-acfa-4bc1-a844-95b170169b44-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-k464h\" (UID: \"de39c80c-acfa-4bc1-a844-95b170169b44\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h" Mar 19 11:57:43.274545 master-0 kubenswrapper[7454]: I0319 11:57:43.274506 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/de39c80c-acfa-4bc1-a844-95b170169b44-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-k464h\" (UID: \"de39c80c-acfa-4bc1-a844-95b170169b44\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h" Mar 19 11:57:43.293893 master-0 kubenswrapper[7454]: I0319 11:57:43.293827 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h" Mar 19 11:57:43.956375 master-0 kubenswrapper[7454]: I0319 11:57:43.956290 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q"] Mar 19 11:57:43.964867 master-0 kubenswrapper[7454]: W0319 11:57:43.964786 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb1000ab_4419_43ce_b1b7_8f43413e017f.slice/crio-bf281270c03af27a5f2d97eebdf0d4e36fa1955f5f7ca7b9f757a4d7f448ea9a WatchSource:0}: Error finding container bf281270c03af27a5f2d97eebdf0d4e36fa1955f5f7ca7b9f757a4d7f448ea9a: Status 404 returned error can't find the container with id bf281270c03af27a5f2d97eebdf0d4e36fa1955f5f7ca7b9f757a4d7f448ea9a Mar 19 11:57:43.981623 master-0 kubenswrapper[7454]: I0319 11:57:43.981576 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h"] Mar 19 11:57:43.984429 master-0 kubenswrapper[7454]: W0319 11:57:43.984387 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde39c80c_acfa_4bc1_a844_95b170169b44.slice/crio-b7dd57861a640edcd653a07f56af27e128f51a36c5d7dfe7a1115c64bac8ba80 WatchSource:0}: Error finding container b7dd57861a640edcd653a07f56af27e128f51a36c5d7dfe7a1115c64bac8ba80: Status 404 returned error can't find the container with id b7dd57861a640edcd653a07f56af27e128f51a36c5d7dfe7a1115c64bac8ba80 Mar 19 11:57:44.028686 master-0 kubenswrapper[7454]: I0319 11:57:44.028228 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" event={"ID":"91112ce6-4f9d-44c1-a4e7-fea126554bcf","Type":"ContainerStarted","Data":"5b0f04d22c0c85eb93a91a7347f66800de8887e62876b70685d642e80dd0f769"} Mar 19 11:57:44.030957 master-0 kubenswrapper[7454]: I0319 11:57:44.030228 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h" event={"ID":"de39c80c-acfa-4bc1-a844-95b170169b44","Type":"ContainerStarted","Data":"b7dd57861a640edcd653a07f56af27e128f51a36c5d7dfe7a1115c64bac8ba80"} Mar 19 11:57:44.035861 master-0 kubenswrapper[7454]: I0319 11:57:44.035268 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" event={"ID":"ee3529ac-6135-438b-9334-40c63c1fbd3d","Type":"ContainerStarted","Data":"f632a40658dcd356ea3153f8a3d8ed9d2f72270b4d2f46d3bfaa313f6ca7532f"} Mar 19 11:57:44.035861 master-0 kubenswrapper[7454]: I0319 11:57:44.035310 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" event={"ID":"ee3529ac-6135-438b-9334-40c63c1fbd3d","Type":"ContainerStarted","Data":"296dc8986d8d88e53b561f3bac073cd3bc6b8803c01b285a45dd14b4fa44bec7"} Mar 19 11:57:44.040764 master-0 kubenswrapper[7454]: I0319 11:57:44.040730 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-lpndz" event={"ID":"a9d191d1-631d-4091-af8b-382283c18a5a","Type":"ContainerStarted","Data":"330def8aa1845ebd7a95a673279619d604275f079a7efa3f16b2060b0fd2594e"} Mar 19 11:57:44.043102 master-0 kubenswrapper[7454]: I0319 11:57:44.043030 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" event={"ID":"bb1000ab-4419-43ce-b1b7-8f43413e017f","Type":"ContainerStarted","Data":"bf281270c03af27a5f2d97eebdf0d4e36fa1955f5f7ca7b9f757a4d7f448ea9a"} Mar 19 11:57:44.073054 master-0 kubenswrapper[7454]: I0319 11:57:44.072814 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" podStartSLOduration=2.072777609 podStartE2EDuration="2.072777609s" podCreationTimestamp="2026-03-19 11:57:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 11:57:44.069597078 +0000 UTC m=+233.700063011" watchObservedRunningTime="2026-03-19 11:57:44.072777609 +0000 UTC m=+233.703243522" Mar 19 11:57:44.657062 master-0 kubenswrapper[7454]: I0319 11:57:44.656932 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 11:57:44.659658 master-0 kubenswrapper[7454]: I0319 11:57:44.659569 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:57:44.659658 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:57:44.659658 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:57:44.659658 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:57:44.659873 master-0 kubenswrapper[7454]: I0319 11:57:44.659696 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:57:45.055667 master-0 kubenswrapper[7454]: I0319 11:57:45.055620 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h" event={"ID":"de39c80c-acfa-4bc1-a844-95b170169b44","Type":"ContainerStarted","Data":"b11872f43b13f9891dd2df33711df1b5afaeefeec2b3787d7f00c1fd336b14ee"} Mar 19 11:57:45.055667 master-0 kubenswrapper[7454]: I0319 11:57:45.055663 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h" event={"ID":"de39c80c-acfa-4bc1-a844-95b170169b44","Type":"ContainerStarted","Data":"59fd522d5d3942d0c706ce5551f325a5809ac4bd329c60da420444aa82d27477"} Mar 19 11:57:45.665584 master-0 kubenswrapper[7454]: I0319 11:57:45.665204 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:57:45.665584 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:57:45.665584 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:57:45.665584 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:57:45.665854 master-0 kubenswrapper[7454]: I0319 11:57:45.665682 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 
11:57:46.063834 master-0 kubenswrapper[7454]: I0319 11:57:46.063690 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-lpndz" event={"ID":"a9d191d1-631d-4091-af8b-382283c18a5a","Type":"ContainerStarted","Data":"ca89a41464eb0e27fe90d37c782e7129d81c40bb812cea238d07969b1741e6d0"} Mar 19 11:57:46.643071 master-0 kubenswrapper[7454]: I0319 11:57:46.643028 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fbd5s" Mar 19 11:57:46.643071 master-0 kubenswrapper[7454]: I0319 11:57:46.643068 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fbd5s" Mar 19 11:57:46.659508 master-0 kubenswrapper[7454]: I0319 11:57:46.659441 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:57:46.659508 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:57:46.659508 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:57:46.659508 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:57:46.659791 master-0 kubenswrapper[7454]: I0319 11:57:46.659509 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:57:46.719411 master-0 kubenswrapper[7454]: I0319 11:57:46.719316 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fbd5s" Mar 19 11:57:47.071443 master-0 kubenswrapper[7454]: I0319 11:57:47.071399 7454 generic.go:334] "Generic (PLEG): container finished" podID="a9d191d1-631d-4091-af8b-382283c18a5a" containerID="ca89a41464eb0e27fe90d37c782e7129d81c40bb812cea238d07969b1741e6d0" exitCode=0 Mar 19 11:57:47.072164 master-0 kubenswrapper[7454]: I0319 11:57:47.072135 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-lpndz" event={"ID":"a9d191d1-631d-4091-af8b-382283c18a5a","Type":"ContainerDied","Data":"ca89a41464eb0e27fe90d37c782e7129d81c40bb812cea238d07969b1741e6d0"} Mar 19 11:57:47.107526 master-0 kubenswrapper[7454]: I0319 11:57:47.107468 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fbd5s" Mar 19 11:57:47.657471 master-0 kubenswrapper[7454]: I0319 11:57:47.657377 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 11:57:47.660536 master-0 kubenswrapper[7454]: I0319 11:57:47.660453 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:57:47.660536 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:57:47.660536 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:57:47.660536 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:57:47.660662 master-0 kubenswrapper[7454]: I0319 11:57:47.660604 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" 
podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:57:48.080080 master-0 kubenswrapper[7454]: I0319 11:57:48.080015 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-lpndz" event={"ID":"a9d191d1-631d-4091-af8b-382283c18a5a","Type":"ContainerStarted","Data":"c275058e64311fdd22e976cf21c4d89b0bc240296f7648a5549422e11855811a"} Mar 19 11:57:48.662290 master-0 kubenswrapper[7454]: I0319 11:57:48.662221 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:57:48.662290 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:57:48.662290 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:57:48.662290 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:57:48.663230 master-0 kubenswrapper[7454]: I0319 11:57:48.663180 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:57:49.090166 master-0 kubenswrapper[7454]: I0319 11:57:49.090090 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-lpndz" event={"ID":"a9d191d1-631d-4091-af8b-382283c18a5a","Type":"ContainerStarted","Data":"5d84a268436420f828d34ae9b9427f78e4ae7ef0044f60d8d26ee2b19a6b44e1"} Mar 19 11:57:49.534719 master-0 kubenswrapper[7454]: I0319 11:57:49.534626 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-lpndz" podStartSLOduration=5.092686894 podStartE2EDuration="7.53460615s" podCreationTimestamp="2026-03-19 11:57:42 +0000 UTC" firstStartedPulling="2026-03-19 11:57:43.16108621 +0000 UTC m=+232.791552123" lastFinishedPulling="2026-03-19 11:57:45.603005466 +0000 UTC m=+235.233471379" observedRunningTime="2026-03-19 11:57:49.534090573 +0000 UTC m=+239.164556506" watchObservedRunningTime="2026-03-19 11:57:49.53460615 +0000 UTC m=+239.165072063" Mar 19 11:57:49.567378 master-0 kubenswrapper[7454]: I0319 11:57:49.564317 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-pkgvq_d3017b5e-178e-49de-89d2-817a18398203/authentication-operator/0.log" Mar 19 11:57:49.580823 master-0 kubenswrapper[7454]: I0319 11:57:49.580329 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-pkgvq_d3017b5e-178e-49de-89d2-817a18398203/authentication-operator/1.log" Mar 19 11:57:49.594416 master-0 kubenswrapper[7454]: I0319 11:57:49.594372 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-7dcf5569b5-lkpgl_91112ce6-4f9d-44c1-a4e7-fea126554bcf/router/1.log" Mar 19 11:57:49.641094 master-0 kubenswrapper[7454]: I0319 11:57:49.641060 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-fdc5db968-8zh6r_979ba8cc-5a7b-4188-bf9e-c22d810888e9/fix-audit-permissions/0.log" Mar 19 11:57:49.659159 master-0 kubenswrapper[7454]: I0319 11:57:49.658871 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:57:49.659159 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:57:49.659159 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:57:49.659159 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:57:49.659159 master-0 kubenswrapper[7454]: I0319 11:57:49.658943 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:57:49.660216 master-0 kubenswrapper[7454]: I0319 11:57:49.660158 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-86889676f6-phlgd"] Mar 19 11:57:49.662512 master-0 kubenswrapper[7454]: I0319 11:57:49.661701 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-86889676f6-phlgd" Mar 19 11:57:49.666349 master-0 kubenswrapper[7454]: I0319 11:57:49.665460 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-hw4t4" Mar 19 11:57:49.666349 master-0 kubenswrapper[7454]: I0319 11:57:49.665898 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 19 11:57:49.666349 master-0 kubenswrapper[7454]: I0319 11:57:49.666091 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-ets52rpou52es" Mar 19 11:57:49.667007 master-0 kubenswrapper[7454]: I0319 11:57:49.666694 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 19 11:57:49.667300 master-0 kubenswrapper[7454]: I0319 11:57:49.667094 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 19 11:57:49.667557 master-0 kubenswrapper[7454]: I0319 11:57:49.667524 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 19 11:57:49.681697 master-0 kubenswrapper[7454]: I0319 11:57:49.681621 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-fdc5db968-8zh6r_979ba8cc-5a7b-4188-bf9e-c22d810888e9/oauth-apiserver/0.log" Mar 19 11:57:49.687919 master-0 kubenswrapper[7454]: I0319 11:57:49.686144 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-86889676f6-phlgd"] Mar 19 11:57:49.772141 master-0 kubenswrapper[7454]: I0319 11:57:49.772105 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6db3fcbe-0dbf-464f-944b-62427173c8d3-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd" Mar 19 11:57:49.772353 master-0 kubenswrapper[7454]: I0319 11:57:49.772339 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-client-ca-bundle\") pod \"metrics-server-86889676f6-phlgd\" (UID: 
\"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd" Mar 19 11:57:49.772485 master-0 kubenswrapper[7454]: I0319 11:57:49.772463 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/6db3fcbe-0dbf-464f-944b-62427173c8d3-audit-log\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd" Mar 19 11:57:49.773083 master-0 kubenswrapper[7454]: I0319 11:57:49.773067 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/6db3fcbe-0dbf-464f-944b-62427173c8d3-metrics-server-audit-profiles\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd" Mar 19 11:57:49.773337 master-0 kubenswrapper[7454]: I0319 11:57:49.773322 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-secret-metrics-server-tls\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd" Mar 19 11:57:49.773444 master-0 kubenswrapper[7454]: I0319 11:57:49.773430 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lllml\" (UniqueName: \"kubernetes.io/projected/6db3fcbe-0dbf-464f-944b-62427173c8d3-kube-api-access-lllml\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd" Mar 19 11:57:49.773560 master-0 kubenswrapper[7454]: I0319 11:57:49.773547 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-secret-metrics-client-certs\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd" Mar 19 11:57:49.866518 master-0 kubenswrapper[7454]: I0319 11:57:49.866485 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-fblgs_cf6b6560-1731-4fb1-b3c2-8257002842d6/kube-rbac-proxy/0.log" Mar 19 11:57:49.875258 master-0 kubenswrapper[7454]: I0319 11:57:49.875206 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/6db3fcbe-0dbf-464f-944b-62427173c8d3-audit-log\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd" Mar 19 11:57:49.875605 master-0 kubenswrapper[7454]: I0319 11:57:49.875581 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/6db3fcbe-0dbf-464f-944b-62427173c8d3-metrics-server-audit-profiles\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd" Mar 19 11:57:49.875708 master-0 kubenswrapper[7454]: I0319 
Mar 19 11:57:49.866518 master-0 kubenswrapper[7454]: I0319 11:57:49.866485 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-fblgs_cf6b6560-1731-4fb1-b3c2-8257002842d6/kube-rbac-proxy/0.log"
Mar 19 11:57:49.875258 master-0 kubenswrapper[7454]: I0319 11:57:49.875206 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/6db3fcbe-0dbf-464f-944b-62427173c8d3-audit-log\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd"
Mar 19 11:57:49.875605 master-0 kubenswrapper[7454]: I0319 11:57:49.875581 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/6db3fcbe-0dbf-464f-944b-62427173c8d3-metrics-server-audit-profiles\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd"
Mar 19 11:57:49.875708 master-0 kubenswrapper[7454]: I0319 11:57:49.875693 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-secret-metrics-server-tls\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd"
Mar 19 11:57:49.875847 master-0 kubenswrapper[7454]: I0319 11:57:49.875831 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lllml\" (UniqueName: \"kubernetes.io/projected/6db3fcbe-0dbf-464f-944b-62427173c8d3-kube-api-access-lllml\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd"
Mar 19 11:57:49.875977 master-0 kubenswrapper[7454]: I0319 11:57:49.875961 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-secret-metrics-client-certs\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd"
Mar 19 11:57:49.876093 master-0 kubenswrapper[7454]: I0319 11:57:49.876073 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6db3fcbe-0dbf-464f-944b-62427173c8d3-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd"
Mar 19 11:57:49.876206 master-0 kubenswrapper[7454]: I0319 11:57:49.876190 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-client-ca-bundle\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd"
Mar 19 11:57:49.878661 master-0 kubenswrapper[7454]: I0319 11:57:49.878638 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/6db3fcbe-0dbf-464f-944b-62427173c8d3-metrics-server-audit-profiles\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd"
Mar 19 11:57:49.881260 master-0 kubenswrapper[7454]: I0319 11:57:49.880285 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/6db3fcbe-0dbf-464f-944b-62427173c8d3-audit-log\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd"
Mar 19 11:57:49.881406 master-0 kubenswrapper[7454]: I0319 11:57:49.880781 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6db3fcbe-0dbf-464f-944b-62427173c8d3-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd"
Mar 19 11:57:49.887985 master-0 kubenswrapper[7454]: I0319 11:57:49.887939 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-client-ca-bundle\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd"
Mar 19 11:57:49.891216 master-0 kubenswrapper[7454]: I0319 11:57:49.891183 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-secret-metrics-server-tls\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd"
Mar 19 11:57:49.892476 master-0 kubenswrapper[7454]: I0319 11:57:49.892456 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-secret-metrics-client-certs\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd"
Mar 19 11:57:49.903999 master-0 kubenswrapper[7454]: I0319 11:57:49.903959 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lllml\" (UniqueName: \"kubernetes.io/projected/6db3fcbe-0dbf-464f-944b-62427173c8d3-kube-api-access-lllml\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd"
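The sequence above is the kubelet volume manager driving the new pod's volumes from desired state to actual state in three stages: VerifyControllerAttachedVolume (reconciler_common.go:245), MountVolume started (reconciler_common.go:218), and MountVolume.SetUp succeeded (operation_generator.go:637). A schematic sketch of that desired-vs-actual reconcile loop, with illustrative types rather than the kubelet's real ones:

package main

import "fmt"

// volume stands in for a desired-state entry keyed by UniqueName,
// e.g. "kubernetes.io/secret/<pod-uid>-secret-metrics-server-tls".
type volume struct{ uniqueName string }

func main() {
	desired := []volume{
		{"kubernetes.io/configmap/<pod-uid>-configmap-kubelet-serving-ca-bundle"},
		{"kubernetes.io/secret/<pod-uid>-secret-metrics-server-tls"},
		{"kubernetes.io/projected/<pod-uid>-kube-api-access-lllml"},
	}
	mounted := map[string]bool{} // actual state of the world

	for _, v := range desired {
		if mounted[v.uniqueName] {
			continue // already reconciled
		}
		// Stage 1: confirm the volume is attached to this node
		// (effectively a no-op for configmap/secret/projected volumes).
		fmt.Println("VerifyControllerAttachedVolume started for", v.uniqueName)
		// Stage 2: begin setting the volume up under the pod's volumes dir.
		fmt.Println("MountVolume started for", v.uniqueName)
		// Stage 3: record success in the actual state of the world.
		mounted[v.uniqueName] = true
		fmt.Println("MountVolume.SetUp succeeded for", v.uniqueName)
	}
}

The kube-api-access-lllml projected volume is the pod's service-account token mount, which is why it appears alongside the pod's own secrets and configmaps.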
Mar 19 11:57:50.018988 master-0 kubenswrapper[7454]: I0319 11:57:50.018948 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-86889676f6-phlgd"
Mar 19 11:57:50.062212 master-0 kubenswrapper[7454]: I0319 11:57:50.062182 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-fblgs_cf6b6560-1731-4fb1-b3c2-8257002842d6/cluster-autoscaler-operator/0.log"
Mar 19 11:57:50.105447 master-0 kubenswrapper[7454]: I0319 11:57:50.105391 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" event={"ID":"bb1000ab-4419-43ce-b1b7-8f43413e017f","Type":"ContainerStarted","Data":"356d58f3445643b887e02b3606eb82185fc7bf57de18df28b0c3120b22f182f0"}
Mar 19 11:57:50.105447 master-0 kubenswrapper[7454]: I0319 11:57:50.105449 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" event={"ID":"bb1000ab-4419-43ce-b1b7-8f43413e017f","Type":"ContainerStarted","Data":"56e2837677963b4cdec8dca00794cba1b9a59c1604ff7581da81e331f20a8d93"}
Mar 19 11:57:50.106061 master-0 kubenswrapper[7454]: I0319 11:57:50.105462 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" event={"ID":"bb1000ab-4419-43ce-b1b7-8f43413e017f","Type":"ContainerStarted","Data":"a54934aedff66dcdac4f667ef8dbb8881951c9d6e65f287c13a78b149e54bac2"}
Mar 19 11:57:50.127814 master-0 kubenswrapper[7454]: I0319 11:57:50.127725 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" podStartSLOduration=2.54438789 podStartE2EDuration="8.127701137s" podCreationTimestamp="2026-03-19 11:57:42 +0000 UTC" firstStartedPulling="2026-03-19 11:57:43.966996378 +0000 UTC m=+233.597462291" lastFinishedPulling="2026-03-19 11:57:49.550309625 +0000 UTC m=+239.180775538" observedRunningTime="2026-03-19 11:57:50.125672653 +0000 UTC m=+239.756138596" watchObservedRunningTime="2026-03-19 11:57:50.127701137 +0000 UTC m=+239.758167050"
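The podStartSLOduration arithmetic in the entry above is self-consistent if, as the numbers suggest, the tracker subtracts the image-pull window from the end-to-end startup time. Using the monotonic m=+ offsets: 239.180775538 − 233.597462291 = 5.583313247s spent pulling, and 8.127701137s − 5.583313247s = 2.544387890s, matching podStartSLOduration=2.54438789. A small check with the numbers copied from the entry (the exclusion of the pull window is an inference from the log, not a statement about the kubelet's exact code path):

package main

import "fmt"

func main() {
	// Monotonic offsets (the m=+... values) from the
	// kube-state-metrics-7bbc969446-bnf7q latency entry above.
	firstStartedPulling := 233.597462291
	lastFinishedPulling := 239.180775538
	podStartE2E := 8.127701137 // podStartE2EDuration in seconds

	pullWindow := lastFinishedPulling - firstStartedPulling
	fmt.Printf("pull window:  %.9fs\n", pullWindow)              // ~5.583313247s
	fmt.Printf("SLO duration: %.9fs\n", podStartE2E-pullWindow) // ~2.544387890s
}

The later entries check out the same way, e.g. openshift-state-metrics below: 11.159593601 − (241.650458290 − 233.969483578) = 3.478618889.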
Mar 19 11:57:50.258398 master-0 kubenswrapper[7454]: I0319 11:57:50.258295 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-ftml6_19de6601-10d4-4112-a21f-0398d2b160d1/cluster-baremetal-operator/0.log"
Mar 19 11:57:50.582906 master-0 kubenswrapper[7454]: W0319 11:57:50.582864 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6db3fcbe_0dbf_464f_944b_62427173c8d3.slice/crio-be807ecce9aec0f7633eaae2ed5203cb82f342ed739dc26f098d55766e987b78 WatchSource:0}: Error finding container be807ecce9aec0f7633eaae2ed5203cb82f342ed739dc26f098d55766e987b78: Status 404 returned error can't find the container with id be807ecce9aec0f7633eaae2ed5203cb82f342ed739dc26f098d55766e987b78
Mar 19 11:57:50.594043 master-0 kubenswrapper[7454]: I0319 11:57:50.593970 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-86889676f6-phlgd"]
Mar 19 11:57:50.660182 master-0 kubenswrapper[7454]: I0319 11:57:50.660140 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:57:50.660182 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:57:50.660182 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:57:50.660182 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:57:50.660564 master-0 kubenswrapper[7454]: I0319 11:57:50.660529 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:57:50.662088 master-0 kubenswrapper[7454]: I0319 11:57:50.661814 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-ftml6_19de6601-10d4-4112-a21f-0398d2b160d1/cluster-baremetal-operator/1.log"
Mar 19 11:57:51.111867 master-0 kubenswrapper[7454]: I0319 11:57:51.111817 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-86889676f6-phlgd" event={"ID":"6db3fcbe-0dbf-464f-944b-62427173c8d3","Type":"ContainerStarted","Data":"be807ecce9aec0f7633eaae2ed5203cb82f342ed739dc26f098d55766e987b78"}
Mar 19 11:57:51.269849 master-0 kubenswrapper[7454]: I0319 11:57:51.269674 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-ftml6_19de6601-10d4-4112-a21f-0398d2b160d1/baremetal-kube-rbac-proxy/0.log"
Mar 19 11:57:51.279170 master-0 kubenswrapper[7454]: I0319 11:57:51.279109 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-tql86_44469a78-9300-4260-89e9-ea939de1357b/control-plane-machine-set-operator/0.log"
Mar 19 11:57:51.287245 master-0 kubenswrapper[7454]: I0319 11:57:51.287202 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-75w5c_7b2ecb08-a0f9-4127-967c-7087dea4c0f6/kube-rbac-proxy/0.log"
Mar 19 11:57:51.459578 master-0 kubenswrapper[7454]: I0319 11:57:51.459210 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-75w5c_7b2ecb08-a0f9-4127-967c-7087dea4c0f6/machine-api-operator/0.log"
Mar 19 11:57:51.661822 master-0 kubenswrapper[7454]: I0319 11:57:51.659353 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:57:51.661822 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:57:51.661822 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:57:51.661822 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:57:51.661822 master-0 kubenswrapper[7454]: I0319 11:57:51.659437 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:57:51.663785 master-0 kubenswrapper[7454]: I0319 11:57:51.663145 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-sc4kz_9702fc8c-4fe0-413b-b2d4-db23021d42b8/etcd-operator/0.log"
Mar 19 11:57:51.860574 master-0 kubenswrapper[7454]: I0319 11:57:51.860537 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-sc4kz_9702fc8c-4fe0-413b-b2d4-db23021d42b8/etcd-operator/1.log"
Mar 19 11:57:52.057626 master-0 kubenswrapper[7454]: I0319 11:57:52.057544 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/setup/0.log"
Mar 19 11:57:52.257382 master-0 kubenswrapper[7454]: I0319 11:57:52.257351 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-ensure-env-vars/0.log"
Mar 19 11:57:52.459049 master-0 kubenswrapper[7454]: I0319 11:57:52.458862 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-resources-copy/0.log"
Mar 19 11:57:52.657209 master-0 kubenswrapper[7454]: I0319 11:57:52.656612 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcdctl/0.log"
Mar 19 11:57:52.660228 master-0 kubenswrapper[7454]: I0319 11:57:52.660139 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:57:52.660228 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:57:52.660228 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:57:52.660228 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:57:52.660387 master-0 kubenswrapper[7454]: I0319 11:57:52.660298 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:57:52.865620 master-0 kubenswrapper[7454]: I0319 11:57:52.863762 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd/0.log"
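Each of these "Finished parsing log file" paths follows the CRI pod-log layout, /var/log/pods/<namespace>_<pod-name>_<pod-UID>/<container>/<restart-count>.log, which is why the static etcd pod above lists its init containers (setup, etcd-ensure-env-vars, etcd-resources-copy) and regular containers each at restart count 0. A small parser for that layout (illustrative, not kubelet code):

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// parsePodLogPath splits a CRI pod log path such as
// /var/log/pods/openshift-etcd_etcd-master-0_<uid>/etcd/0.log
// into its namespace, pod name, pod UID, container, and restart count.
func parsePodLogPath(p string) (ns, pod, uid, container, restart string, err error) {
	rel, err := filepath.Rel("/var/log/pods", p)
	if err != nil {
		return
	}
	parts := strings.Split(filepath.ToSlash(rel), "/")
	if len(parts) != 3 {
		err = fmt.Errorf("unexpected layout: %s", p)
		return
	}
	meta := strings.SplitN(parts[0], "_", 3) // namespace_pod_uid
	if len(meta) != 3 {
		err = fmt.Errorf("unexpected pod dir: %s", parts[0])
		return
	}
	ns, pod, uid = meta[0], meta[1], meta[2]
	container = parts[1]
	restart = strings.TrimSuffix(parts[2], ".log")
	return
}

func main() {
	fmt.Println(parsePodLogPath(
		"/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd/0.log"))
}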
Mar 19 11:57:53.061822 master-0 kubenswrapper[7454]: I0319 11:57:53.061751 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log"
Mar 19 11:57:53.131342 master-0 kubenswrapper[7454]: I0319 11:57:53.131219 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h" event={"ID":"de39c80c-acfa-4bc1-a844-95b170169b44","Type":"ContainerStarted","Data":"26190c9997ba592ddc243f74fea4b00b1c6d706221e947883a60731c2a7ac2f9"}
Mar 19 11:57:53.159682 master-0 kubenswrapper[7454]: I0319 11:57:53.159608 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h" podStartSLOduration=3.478618889 podStartE2EDuration="11.159593601s" podCreationTimestamp="2026-03-19 11:57:42 +0000 UTC" firstStartedPulling="2026-03-19 11:57:44.339017665 +0000 UTC m=+233.969483578" lastFinishedPulling="2026-03-19 11:57:52.019992367 +0000 UTC m=+241.650458290" observedRunningTime="2026-03-19 11:57:53.157732062 +0000 UTC m=+242.788197985" watchObservedRunningTime="2026-03-19 11:57:53.159593601 +0000 UTC m=+242.790059514"
Mar 19 11:57:53.260425 master-0 kubenswrapper[7454]: I0319 11:57:53.258699 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-readyz/0.log"
Mar 19 11:57:53.459886 master-0 kubenswrapper[7454]: I0319 11:57:53.459771 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log"
Mar 19 11:57:53.659235 master-0 kubenswrapper[7454]: I0319 11:57:53.659107 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:57:53.659235 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:57:53.659235 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:57:53.659235 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:57:53.659235 master-0 kubenswrapper[7454]: I0319 11:57:53.659179 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:57:53.667176 master-0 kubenswrapper[7454]: I0319 11:57:53.666903 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_11f83dfb-da04-483f-b281-ebdb39f3ab27/installer/0.log"
Mar 19 11:57:53.862475 master-0 kubenswrapper[7454]: I0319 11:57:53.862422 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-qv4cg_1089ea24-add9-482e-9276-e6ded12052d7/kube-apiserver-operator/0.log"
Mar 19 11:57:54.061895 master-0 kubenswrapper[7454]: I0319 11:57:54.061746 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-qv4cg_1089ea24-add9-482e-9276-e6ded12052d7/kube-apiserver-operator/1.log"
Mar 19 11:57:54.140605 master-0 kubenswrapper[7454]: I0319 11:57:54.140499 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-86889676f6-phlgd" event={"ID":"6db3fcbe-0dbf-464f-944b-62427173c8d3","Type":"ContainerStarted","Data":"eeacdb60f8da61f85096f789c56cd94fccc18791a62d95df61660195a985a6a0"}
Mar 19 11:57:54.170749 master-0 kubenswrapper[7454]: I0319 11:57:54.170629 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-86889676f6-phlgd" podStartSLOduration=2.08649259 podStartE2EDuration="5.170603164s" podCreationTimestamp="2026-03-19 11:57:49 +0000 UTC" firstStartedPulling="2026-03-19 11:57:50.58514645 +0000 UTC m=+240.215612363" lastFinishedPulling="2026-03-19 11:57:53.669256994 +0000 UTC m=+243.299722937" observedRunningTime="2026-03-19 11:57:54.169448297 +0000 UTC m=+243.799914230" watchObservedRunningTime="2026-03-19 11:57:54.170603164 +0000 UTC m=+243.801069087"
Mar 19 11:57:54.258710 master-0 kubenswrapper[7454]: I0319 11:57:54.258646 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_49fac1b46a11e49501805e891baae4a9/setup/0.log"
Mar 19 11:57:54.462607 master-0 kubenswrapper[7454]: I0319 11:57:54.462456 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_49fac1b46a11e49501805e891baae4a9/kube-apiserver/0.log"
Mar 19 11:57:54.658344 master-0 kubenswrapper[7454]: I0319 11:57:54.658257 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_49fac1b46a11e49501805e891baae4a9/kube-apiserver-insecure-readyz/0.log"
Mar 19 11:57:54.659612 master-0 kubenswrapper[7454]: I0319 11:57:54.659578 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:57:54.659612 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:57:54.659612 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:57:54.659612 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:57:54.659867 master-0 kubenswrapper[7454]: I0319 11:57:54.659670 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:57:54.861659 master-0 kubenswrapper[7454]: I0319 11:57:54.861590 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_632bdf3b-0ba0-4874-a2ec-8396683c35c5/installer/0.log"
Mar 19 11:57:55.063531 master-0 kubenswrapper[7454]: I0319 11:57:55.063407 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_2e4442dc-19e2-42a3-b5d9-7af7765b1939/installer/0.log"
Mar 19 11:57:55.266050 master-0 kubenswrapper[7454]: I0319 11:57:55.266011 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-fjn4b_2151eb84-177e-459c-be71-f48465323ac2/kube-controller-manager-operator/0.log"
Mar 19 11:57:55.459570 master-0 kubenswrapper[7454]: I0319 11:57:55.459405 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-fjn4b_2151eb84-177e-459c-be71-f48465323ac2/kube-controller-manager-operator/1.log"
Mar 19 11:57:55.659814 master-0 kubenswrapper[7454]: I0319 11:57:55.659700 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:57:55.659814 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:57:55.659814 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:57:55.659814 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:57:55.660515 master-0 kubenswrapper[7454]: I0319 11:57:55.659849 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:57:55.665295 master-0 kubenswrapper[7454]: I0319 11:57:55.665240 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_46f265536aba6292ead501bc9b49f327/kube-controller-manager/2.log"
Mar 19 11:57:56.063030 master-0 kubenswrapper[7454]: I0319 11:57:56.062966 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_46f265536aba6292ead501bc9b49f327/kube-controller-manager/3.log"
Mar 19 11:57:56.263387 master-0 kubenswrapper[7454]: I0319 11:57:56.263303 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_46f265536aba6292ead501bc9b49f327/cluster-policy-controller/0.log"
Mar 19 11:57:56.462047 master-0 kubenswrapper[7454]: I0319 11:57:56.461879 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_c83737980b9ee109184b1d78e942cf36/kube-scheduler/0.log"
Mar 19 11:57:56.660295 master-0 kubenswrapper[7454]: I0319 11:57:56.660045 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:57:56.660295 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:57:56.660295 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:57:56.660295 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:57:56.660295 master-0 kubenswrapper[7454]: I0319 11:57:56.660099 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:57:56.667214 master-0 kubenswrapper[7454]: I0319 11:57:56.667150 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_c83737980b9ee109184b1d78e942cf36/kube-scheduler/1.log"
Mar 19 11:57:56.859426 master-0 kubenswrapper[7454]: I0319 11:57:56.859323 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_4b49f09f-2efa-4657-9f5a-fbddd42bee0d/installer/0.log"
Mar 19 11:57:57.060107 master-0 kubenswrapper[7454]: I0319 11:57:57.060071 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-ptqdh_06df1b1b-154e-46f9-aee0-79a137c6c928/kube-scheduler-operator-container/0.log"
"Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-ptqdh_06df1b1b-154e-46f9-aee0-79a137c6c928/kube-scheduler-operator-container/0.log" Mar 19 11:57:57.257646 master-0 kubenswrapper[7454]: I0319 11:57:57.257601 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-ptqdh_06df1b1b-154e-46f9-aee0-79a137c6c928/kube-scheduler-operator-container/1.log" Mar 19 11:57:57.458284 master-0 kubenswrapper[7454]: I0319 11:57:57.458230 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-fblgs_cf6b6560-1731-4fb1-b3c2-8257002842d6/kube-rbac-proxy/0.log" Mar 19 11:57:57.660345 master-0 kubenswrapper[7454]: I0319 11:57:57.660226 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:57:57.660345 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:57:57.660345 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:57:57.660345 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:57:57.661303 master-0 kubenswrapper[7454]: I0319 11:57:57.661264 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:57:57.664439 master-0 kubenswrapper[7454]: I0319 11:57:57.664405 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-fblgs_cf6b6560-1731-4fb1-b3c2-8257002842d6/cluster-autoscaler-operator/0.log" Mar 19 11:57:57.858945 master-0 kubenswrapper[7454]: I0319 11:57:57.858895 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-ftml6_19de6601-10d4-4112-a21f-0398d2b160d1/cluster-baremetal-operator/0.log" Mar 19 11:57:58.258656 master-0 kubenswrapper[7454]: I0319 11:57:58.258595 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-ftml6_19de6601-10d4-4112-a21f-0398d2b160d1/cluster-baremetal-operator/1.log" Mar 19 11:57:58.459353 master-0 kubenswrapper[7454]: I0319 11:57:58.459311 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-ftml6_19de6601-10d4-4112-a21f-0398d2b160d1/baremetal-kube-rbac-proxy/0.log" Mar 19 11:57:58.658653 master-0 kubenswrapper[7454]: I0319 11:57:58.658554 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-tql86_44469a78-9300-4260-89e9-ea939de1357b/control-plane-machine-set-operator/0.log" Mar 19 11:57:58.660122 master-0 kubenswrapper[7454]: I0319 11:57:58.660087 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:57:58.660122 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:57:58.660122 master-0 kubenswrapper[7454]: [+]process-running ok 
Mar 19 11:57:58.859947 master-0 kubenswrapper[7454]: I0319 11:57:58.859894 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-75w5c_7b2ecb08-a0f9-4127-967c-7087dea4c0f6/kube-rbac-proxy/0.log"
Mar 19 11:57:59.061135 master-0 kubenswrapper[7454]: I0319 11:57:59.061075 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-75w5c_7b2ecb08-a0f9-4127-967c-7087dea4c0f6/machine-api-operator/0.log"
Mar 19 11:57:59.264754 master-0 kubenswrapper[7454]: I0319 11:57:59.264694 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-d65958b8-mjs7x_f08c5930-44f0-48e4-80dd-2563f2733b2f/openshift-apiserver-operator/0.log"
Mar 19 11:57:59.460964 master-0 kubenswrapper[7454]: I0319 11:57:59.459138 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-d65958b8-mjs7x_f08c5930-44f0-48e4-80dd-2563f2733b2f/openshift-apiserver-operator/1.log"
Mar 19 11:57:59.658772 master-0 kubenswrapper[7454]: I0319 11:57:59.658729 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-897cc986b-vpg2l_13503fef-09b2-4dbe-9537-a5b361e7b591/fix-audit-permissions/0.log"
Mar 19 11:57:59.660065 master-0 kubenswrapper[7454]: I0319 11:57:59.660017 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:57:59.660065 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:57:59.660065 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:57:59.660065 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:57:59.660230 master-0 kubenswrapper[7454]: I0319 11:57:59.660102 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:57:59.859704 master-0 kubenswrapper[7454]: I0319 11:57:59.859617 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-897cc986b-vpg2l_13503fef-09b2-4dbe-9537-a5b361e7b591/openshift-apiserver/0.log"
Mar 19 11:58:00.058358 master-0 kubenswrapper[7454]: I0319 11:58:00.058305 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-897cc986b-vpg2l_13503fef-09b2-4dbe-9537-a5b361e7b591/openshift-apiserver-check-endpoints/0.log"
Mar 19 11:58:00.263630 master-0 kubenswrapper[7454]: I0319 11:58:00.263551 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-sc4kz_9702fc8c-4fe0-413b-b2d4-db23021d42b8/etcd-operator/0.log"
Mar 19 11:58:00.456939 master-0 kubenswrapper[7454]: I0319 11:58:00.456876 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-sc4kz_9702fc8c-4fe0-413b-b2d4-db23021d42b8/etcd-operator/1.log"
Mar 19 11:58:00.659779 master-0 kubenswrapper[7454]: I0319 11:58:00.659665 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:58:00.659779 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:58:00.659779 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:58:00.659779 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:58:00.659779 master-0 kubenswrapper[7454]: I0319 11:58:00.659714 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:58:00.663307 master-0 kubenswrapper[7454]: I0319 11:58:00.663277 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-68f85b4d6c-2trz4_bdcdb23d-ef1f-45e2-b9ac-7abf707637b6/catalog-operator/0.log"
Mar 19 11:58:00.862767 master-0 kubenswrapper[7454]: I0319 11:58:00.862701 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-5c9796789-8cldl_87a3f546-e1c1-42a1-b80e-d45b6d5c0a04/olm-operator/0.log"
Mar 19 11:58:01.057900 master-0 kubenswrapper[7454]: I0319 11:58:01.057841 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-6j2nj_beb562de-402b-4d9f-b5ed-090b60847a95/kube-rbac-proxy/0.log"
Mar 19 11:58:01.260366 master-0 kubenswrapper[7454]: I0319 11:58:01.260311 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-6j2nj_beb562de-402b-4d9f-b5ed-090b60847a95/package-server-manager/0.log"
Mar 19 11:58:01.462779 master-0 kubenswrapper[7454]: I0319 11:58:01.462653 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-77d68bd5f8-w9hmb_be4349fa-5c67-4135-80a7-b8a694553662/packageserver/0.log"
Mar 19 11:58:01.659310 master-0 kubenswrapper[7454]: I0319 11:58:01.659240 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:58:01.659310 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:58:01.659310 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:58:01.659310 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:58:01.659704 master-0 kubenswrapper[7454]: I0319 11:58:01.659340 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:58:02.659176 master-0 kubenswrapper[7454]: I0319 11:58:02.659109 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:58:02.659176 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:58:02.659176 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:58:02.659176 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:58:02.659176 master-0 kubenswrapper[7454]: I0319 11:58:02.659179 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:58:03.660144 master-0 kubenswrapper[7454]: I0319 11:58:03.660048 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:58:03.660144 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:58:03.660144 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:58:03.660144 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:58:03.660903 master-0 kubenswrapper[7454]: I0319 11:58:03.660160 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:58:04.658902 master-0 kubenswrapper[7454]: I0319 11:58:04.658835 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:58:04.658902 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:58:04.658902 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:58:04.658902 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:58:04.659227 master-0 kubenswrapper[7454]: I0319 11:58:04.658933 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:58:05.659563 master-0 kubenswrapper[7454]: I0319 11:58:05.659504 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:58:05.659563 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:58:05.659563 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:58:05.659563 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:58:05.659563 master-0 kubenswrapper[7454]: I0319 11:58:05.659567 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:58:06.661873 master-0 kubenswrapper[7454]: I0319 11:58:06.659416 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:58:06.661873 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:58:06.661873 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:58:06.661873 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:58:06.661873 master-0 kubenswrapper[7454]: I0319 11:58:06.659496 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:06.661873 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:06.661873 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:06.661873 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:06.661873 master-0 kubenswrapper[7454]: I0319 11:58:06.659496 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:07.659747 master-0 kubenswrapper[7454]: I0319 11:58:07.659658 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:07.659747 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:07.659747 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:07.659747 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:07.659747 master-0 kubenswrapper[7454]: I0319 11:58:07.659732 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:08.659830 master-0 kubenswrapper[7454]: I0319 11:58:08.659758 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:08.659830 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:08.659830 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:08.659830 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:08.660487 master-0 kubenswrapper[7454]: I0319 11:58:08.659852 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:09.659580 master-0 kubenswrapper[7454]: I0319 11:58:09.659503 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:09.659580 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:09.659580 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:09.659580 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:09.660233 master-0 kubenswrapper[7454]: I0319 11:58:09.659583 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:10.019382 master-0 kubenswrapper[7454]: I0319 11:58:10.019284 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-86889676f6-phlgd" Mar 19 11:58:10.019382 master-0 kubenswrapper[7454]: 
Mar 19 11:58:10.660246 master-0 kubenswrapper[7454]: I0319 11:58:10.660110 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:58:10.660246 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:58:10.660246 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:58:10.660246 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:58:10.660246 master-0 kubenswrapper[7454]: I0319 11:58:10.660168 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:58:11.660048 master-0 kubenswrapper[7454]: I0319 11:58:11.659943 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:58:11.660048 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:58:11.660048 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:58:11.660048 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:58:11.660048 master-0 kubenswrapper[7454]: I0319 11:58:11.660023 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:58:12.658637 master-0 kubenswrapper[7454]: I0319 11:58:12.658579 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:58:12.658637 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:58:12.658637 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:58:12.658637 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:58:12.659006 master-0 kubenswrapper[7454]: I0319 11:58:12.658647 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 11:58:13.659599 master-0 kubenswrapper[7454]: I0319 11:58:13.659531 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 11:58:13.659599 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 11:58:13.659599 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 11:58:13.659599 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 11:58:13.660257 master-0 kubenswrapper[7454]: I0319 11:58:13.659607 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:14.659418 master-0 kubenswrapper[7454]: I0319 11:58:14.659349 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:14.659418 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:14.659418 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:14.659418 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:14.660397 master-0 kubenswrapper[7454]: I0319 11:58:14.660358 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:15.660078 master-0 kubenswrapper[7454]: I0319 11:58:15.659980 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:15.660078 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:15.660078 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:15.660078 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:15.660843 master-0 kubenswrapper[7454]: I0319 11:58:15.660089 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:16.660270 master-0 kubenswrapper[7454]: I0319 11:58:16.660194 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:16.660270 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:16.660270 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:16.660270 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:16.660270 master-0 kubenswrapper[7454]: I0319 11:58:16.660255 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:17.659955 master-0 kubenswrapper[7454]: I0319 11:58:17.659875 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:17.659955 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:17.659955 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:17.659955 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:17.661345 master-0 kubenswrapper[7454]: I0319 11:58:17.661290 7454 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:18.659717 master-0 kubenswrapper[7454]: I0319 11:58:18.659633 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:18.659717 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:18.659717 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:18.659717 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:18.660113 master-0 kubenswrapper[7454]: I0319 11:58:18.659718 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:19.660324 master-0 kubenswrapper[7454]: I0319 11:58:19.660238 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:19.660324 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:19.660324 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:19.660324 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:19.661353 master-0 kubenswrapper[7454]: I0319 11:58:19.660380 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:20.659577 master-0 kubenswrapper[7454]: I0319 11:58:20.659487 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:20.659577 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:20.659577 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:20.659577 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:20.659577 master-0 kubenswrapper[7454]: I0319 11:58:20.659570 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:21.659819 master-0 kubenswrapper[7454]: I0319 11:58:21.659721 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:21.659819 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:21.659819 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:21.659819 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:21.660431 master-0 kubenswrapper[7454]: I0319 11:58:21.659872 7454 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:22.658938 master-0 kubenswrapper[7454]: I0319 11:58:22.658853 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:22.658938 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:22.658938 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:22.658938 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:22.659295 master-0 kubenswrapper[7454]: I0319 11:58:22.658954 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:23.660313 master-0 kubenswrapper[7454]: I0319 11:58:23.660205 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:23.660313 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:23.660313 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:23.660313 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:23.661718 master-0 kubenswrapper[7454]: I0319 11:58:23.660962 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:24.659698 master-0 kubenswrapper[7454]: I0319 11:58:24.659633 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:24.659698 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:24.659698 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:24.659698 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:24.660020 master-0 kubenswrapper[7454]: I0319 11:58:24.659699 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:25.660087 master-0 kubenswrapper[7454]: I0319 11:58:25.659967 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:25.660087 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:25.660087 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:25.660087 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:25.660087 master-0 kubenswrapper[7454]: I0319 11:58:25.660074 7454 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:26.659880 master-0 kubenswrapper[7454]: I0319 11:58:26.659789 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:26.659880 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:26.659880 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:26.659880 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:26.660450 master-0 kubenswrapper[7454]: I0319 11:58:26.659880 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:27.658816 master-0 kubenswrapper[7454]: I0319 11:58:27.658714 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:27.658816 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:27.658816 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:27.658816 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:27.658816 master-0 kubenswrapper[7454]: I0319 11:58:27.658807 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:28.659509 master-0 kubenswrapper[7454]: I0319 11:58:28.659419 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:28.659509 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:28.659509 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:28.659509 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:28.660323 master-0 kubenswrapper[7454]: I0319 11:58:28.659514 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:29.659240 master-0 kubenswrapper[7454]: I0319 11:58:29.659134 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:29.659240 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:29.659240 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:29.659240 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:29.659240 master-0 kubenswrapper[7454]: I0319 11:58:29.659213 7454 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:30.025336 master-0 kubenswrapper[7454]: I0319 11:58:30.025275 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-86889676f6-phlgd" Mar 19 11:58:30.030326 master-0 kubenswrapper[7454]: I0319 11:58:30.030254 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-86889676f6-phlgd" Mar 19 11:58:30.660122 master-0 kubenswrapper[7454]: I0319 11:58:30.660057 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:30.660122 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:30.660122 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:30.660122 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:30.660756 master-0 kubenswrapper[7454]: I0319 11:58:30.660143 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:31.658861 master-0 kubenswrapper[7454]: I0319 11:58:31.658688 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:31.658861 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:31.658861 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:31.658861 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:31.658861 master-0 kubenswrapper[7454]: I0319 11:58:31.658845 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:32.659484 master-0 kubenswrapper[7454]: I0319 11:58:32.659424 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:32.659484 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:32.659484 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:32.659484 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:32.660112 master-0 kubenswrapper[7454]: I0319 11:58:32.659503 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:33.659923 master-0 kubenswrapper[7454]: I0319 11:58:33.659854 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:33.659923 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:33.659923 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:33.659923 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:33.659923 master-0 kubenswrapper[7454]: I0319 11:58:33.659916 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:34.659404 master-0 kubenswrapper[7454]: I0319 11:58:34.659351 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:34.659404 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:34.659404 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:34.659404 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:34.659404 master-0 kubenswrapper[7454]: I0319 11:58:34.659402 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:35.659503 master-0 kubenswrapper[7454]: I0319 11:58:35.659395 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:35.659503 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:35.659503 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:35.659503 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:35.660704 master-0 kubenswrapper[7454]: I0319 11:58:35.659500 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:36.659386 master-0 kubenswrapper[7454]: I0319 11:58:36.659273 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:36.659386 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:36.659386 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:36.659386 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:36.659975 master-0 kubenswrapper[7454]: I0319 11:58:36.659378 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:37.659956 master-0 kubenswrapper[7454]: I0319 11:58:37.659898 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:37.659956 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:37.659956 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:37.659956 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:37.660539 master-0 kubenswrapper[7454]: I0319 11:58:37.659994 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:38.658866 master-0 kubenswrapper[7454]: I0319 11:58:38.658825 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:38.658866 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:38.658866 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:38.658866 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:38.659266 master-0 kubenswrapper[7454]: I0319 11:58:38.659240 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:39.660776 master-0 kubenswrapper[7454]: I0319 11:58:39.660644 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:39.660776 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:39.660776 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:39.660776 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:39.661683 master-0 kubenswrapper[7454]: I0319 11:58:39.660790 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:40.659493 master-0 kubenswrapper[7454]: I0319 11:58:40.659401 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:40.659493 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:40.659493 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:40.659493 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:40.659493 master-0 kubenswrapper[7454]: I0319 11:58:40.659489 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:41.659092 master-0 kubenswrapper[7454]: I0319 11:58:41.658975 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:41.659092 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:41.659092 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:41.659092 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:41.659092 master-0 kubenswrapper[7454]: I0319 11:58:41.659084 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:42.659576 master-0 kubenswrapper[7454]: I0319 11:58:42.659505 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:42.659576 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:42.659576 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:42.659576 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:42.660667 master-0 kubenswrapper[7454]: I0319 11:58:42.659594 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:43.659225 master-0 kubenswrapper[7454]: I0319 11:58:43.659083 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:43.659225 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:43.659225 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:43.659225 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:43.659225 master-0 kubenswrapper[7454]: I0319 11:58:43.659184 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:44.660242 master-0 kubenswrapper[7454]: I0319 11:58:44.660155 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:44.660242 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:44.660242 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:44.660242 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:44.660957 master-0 kubenswrapper[7454]: I0319 11:58:44.660256 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:45.660941 master-0 kubenswrapper[7454]: I0319 11:58:45.660792 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: 
Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:45.660941 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:45.660941 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:45.660941 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:45.660941 master-0 kubenswrapper[7454]: I0319 11:58:45.660915 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:46.659964 master-0 kubenswrapper[7454]: I0319 11:58:46.659849 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:46.659964 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:46.659964 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:46.659964 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:46.659964 master-0 kubenswrapper[7454]: I0319 11:58:46.659917 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:47.660222 master-0 kubenswrapper[7454]: I0319 11:58:47.660146 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:47.660222 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:47.660222 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:47.660222 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:47.660927 master-0 kubenswrapper[7454]: I0319 11:58:47.660246 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:48.660320 master-0 kubenswrapper[7454]: I0319 11:58:48.660232 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:48.660320 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:48.660320 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:48.660320 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:48.660984 master-0 kubenswrapper[7454]: I0319 11:58:48.660335 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:49.658840 master-0 kubenswrapper[7454]: I0319 11:58:49.658766 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router 
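[Annotation, not part of the journal: the "[+]... ok" / "[-]... failed: reason withheld" lines above are the aggregated-healthz format used by Kubernetes components. Each registered check reports individually, any failing check makes the endpoint return HTTP 500 (the "statuscode: 500" the kubelet logs), and failure reasons are withheld unless verbose output is requested. Below is a minimal, illustrative Go sketch of that pattern; the check names are taken from the log, but the code itself and the port (1936, the router's conventional stats port) are assumptions, not the router's actual implementation.]

package main

import (
	"fmt"
	"net/http"
)

// check pairs a healthz check name with a predicate.
type check struct {
	name string
	ok   func() bool
}

// healthz reports each check on its own line and returns 500 if any fail,
// mimicking the output format seen in the probe log entries above.
func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		body, failed := "", false
		for _, c := range checks {
			if c.ok() {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			} else {
				// Reasons are withheld unless verbose output is requested.
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
				failed = true
			}
		}
		if failed {
			body += "healthz check failed\n"
			w.WriteHeader(http.StatusInternalServerError) // the kubelet sees statuscode: 500
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	// Hardcoded results reproduce the exact body logged by patch_prober above.
	http.ListenAndServe(":1936", healthz([]check{
		{"backend-http", func() bool { return false }},
		{"has-synced", func() bool { return false }},
		{"process-running", func() bool { return true }},
	}))
}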
Mar 19 11:58:50.650359 master-0 kubenswrapper[7454]: I0319 11:58:50.650305 7454 kubelet.go:1505] "Image garbage collection succeeded"
[... one more identical router startup-probe failure pair at 11:58:50.660 omitted ...]
Mar 19 11:58:50.738830 master-0 kubenswrapper[7454]: I0319 11:58:50.738751 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-w8jqs"]
Mar 19 11:58:50.739864 master-0 kubenswrapper[7454]: I0319 11:58:50.739838 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-w8jqs"
Mar 19 11:58:50.742845 master-0 kubenswrapper[7454]: I0319 11:58:50.741856 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-6flh6"
Mar 19 11:58:50.746195 master-0 kubenswrapper[7454]: I0319 11:58:50.746163 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Mar 19 11:58:50.746953 master-0 kubenswrapper[7454]: I0319 11:58:50.746468 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Mar 19 11:58:50.746953 master-0 kubenswrapper[7454]: I0319 11:58:50.746627 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Mar 19 11:58:50.750039 master-0 kubenswrapper[7454]: I0319 11:58:50.749930 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-w8jqs"]
Mar 19 11:58:50.911917 master-0 kubenswrapper[7454]: I0319 11:58:50.911326 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9hck\" (UniqueName: \"kubernetes.io/projected/36e5fec9-7fb5-4460-8bb4-4b9e36fae978-kube-api-access-z9hck\") pod \"ingress-canary-w8jqs\" (UID: \"36e5fec9-7fb5-4460-8bb4-4b9e36fae978\") " pod="openshift-ingress-canary/ingress-canary-w8jqs"
Mar 19 11:58:50.911917 master-0 kubenswrapper[7454]: I0319 11:58:50.911625 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/36e5fec9-7fb5-4460-8bb4-4b9e36fae978-cert\") pod \"ingress-canary-w8jqs\" (UID: \"36e5fec9-7fb5-4460-8bb4-4b9e36fae978\") " pod="openshift-ingress-canary/ingress-canary-w8jqs"
Mar 19 11:58:51.012690 master-0 kubenswrapper[7454]: I0319 11:58:51.012592 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9hck\" (UniqueName: \"kubernetes.io/projected/36e5fec9-7fb5-4460-8bb4-4b9e36fae978-kube-api-access-z9hck\") pod \"ingress-canary-w8jqs\" (UID: \"36e5fec9-7fb5-4460-8bb4-4b9e36fae978\") " pod="openshift-ingress-canary/ingress-canary-w8jqs"
Mar 19 11:58:51.012690 master-0 kubenswrapper[7454]: I0319 11:58:51.012701 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/36e5fec9-7fb5-4460-8bb4-4b9e36fae978-cert\") pod \"ingress-canary-w8jqs\" (UID: \"36e5fec9-7fb5-4460-8bb4-4b9e36fae978\") " pod="openshift-ingress-canary/ingress-canary-w8jqs"
Mar 19 11:58:51.016173 master-0 kubenswrapper[7454]: I0319 11:58:51.016120 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/36e5fec9-7fb5-4460-8bb4-4b9e36fae978-cert\") pod \"ingress-canary-w8jqs\" (UID: \"36e5fec9-7fb5-4460-8bb4-4b9e36fae978\") " pod="openshift-ingress-canary/ingress-canary-w8jqs"
Mar 19 11:58:51.027112 master-0 kubenswrapper[7454]: I0319 11:58:51.027066 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9hck\" (UniqueName: \"kubernetes.io/projected/36e5fec9-7fb5-4460-8bb4-4b9e36fae978-kube-api-access-z9hck\") pod \"ingress-canary-w8jqs\" (UID: \"36e5fec9-7fb5-4460-8bb4-4b9e36fae978\") " pod="openshift-ingress-canary/ingress-canary-w8jqs"
Mar 19 11:58:51.078826 master-0 kubenswrapper[7454]: I0319 11:58:51.078739 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-w8jqs"
Mar 19 11:58:51.487488 master-0 kubenswrapper[7454]: I0319 11:58:51.487407 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-w8jqs"]
Mar 19 11:58:51.493448 master-0 kubenswrapper[7454]: W0319 11:58:51.493306 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36e5fec9_7fb5_4460_8bb4_4b9e36fae978.slice/crio-9a72b8977a8a7f6da552724471a9890da5b8ee5f4a6fe88fb55492ca16eb4221 WatchSource:0}: Error finding container 9a72b8977a8a7f6da552724471a9890da5b8ee5f4a6fe88fb55492ca16eb4221: Status 404 returned error can't find the container with id 9a72b8977a8a7f6da552724471a9890da5b8ee5f4a6fe88fb55492ca16eb4221
Mar 19 11:58:51.517210 master-0 kubenswrapper[7454]: I0319 11:58:51.517151 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-btppx_b80027fd-7b39-477a-a337-ff9bb08e7eeb/ingress-operator/1.log"
Mar 19 11:58:51.518165 master-0 kubenswrapper[7454]: I0319 11:58:51.518128 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-btppx_b80027fd-7b39-477a-a337-ff9bb08e7eeb/ingress-operator/0.log"
Mar 19 11:58:51.518240 master-0 kubenswrapper[7454]: I0319 11:58:51.518188 7454 generic.go:334] "Generic (PLEG): container finished" podID="b80027fd-7b39-477a-a337-ff9bb08e7eeb" containerID="ff92d05d103782a47d08e29aa2fb79e226a87a90f33dcfc9e8b5555e427f0ce4" exitCode=1
Mar 19 11:58:51.518298 master-0 kubenswrapper[7454]: I0319 11:58:51.518267 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" event={"ID":"b80027fd-7b39-477a-a337-ff9bb08e7eeb","Type":"ContainerDied","Data":"ff92d05d103782a47d08e29aa2fb79e226a87a90f33dcfc9e8b5555e427f0ce4"}
Mar 19 11:58:51.518350 master-0 kubenswrapper[7454]: I0319 11:58:51.518326 7454 scope.go:117] "RemoveContainer" containerID="85ef4c835912214d79ee0e2491e95c939671fab04307a1604919b04165567448"
Mar 19 11:58:51.518719 master-0 kubenswrapper[7454]: I0319 11:58:51.518694 7454 scope.go:117] "RemoveContainer" containerID="ff92d05d103782a47d08e29aa2fb79e226a87a90f33dcfc9e8b5555e427f0ce4"
Mar 19 11:58:51.520436 master-0 kubenswrapper[7454]: E0319 11:58:51.518952 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-btppx_openshift-ingress-operator(b80027fd-7b39-477a-a337-ff9bb08e7eeb)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" podUID="b80027fd-7b39-477a-a337-ff9bb08e7eeb"
Mar 19 11:58:51.525451 master-0 kubenswrapper[7454]: I0319 11:58:51.525330 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-w8jqs" event={"ID":"36e5fec9-7fb5-4460-8bb4-4b9e36fae978","Type":"ContainerStarted","Data":"9a72b8977a8a7f6da552724471a9890da5b8ee5f4a6fe88fb55492ca16eb4221"}
[... identical router startup-probe failure pair at 11:58:51.658 omitted ...]
Mar 19 11:58:52.533101 master-0 kubenswrapper[7454]: I0319 11:58:52.533050 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-btppx_b80027fd-7b39-477a-a337-ff9bb08e7eeb/ingress-operator/1.log"
Mar 19 11:58:52.535343 master-0 kubenswrapper[7454]: I0319 11:58:52.535301 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-w8jqs" event={"ID":"36e5fec9-7fb5-4460-8bb4-4b9e36fae978","Type":"ContainerStarted","Data":"d0449db4cbd41085e8091bf8eac4331d55c7112053c31bb399dcb2b92759fc8e"}
[... identical router startup-probe failure pairs continue at one-second intervals from 11:58:52.659 through 11:58:54 ...]
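[Annotation, not part of the journal: the pod_workers.go error above shows the first CrashLoopBackOff step for the ingress-operator container, "back-off 10s". The kubelet's documented restart back-off starts at 10s, doubles after each subsequent crash, and caps at 5m, resetting once the container stays up long enough. A minimal sketch of that schedule (illustrative, not kubelet source):]

package main

import (
	"fmt"
	"time"
)

func main() {
	// Initial delay matches the "back-off 10s" seen in the log entry above.
	delay := 10 * time.Second
	const maxDelay = 5 * time.Minute // kubelet's documented back-off cap

	// Print the delay applied before each of the first few restarts.
	for crash := 1; crash <= 7; crash++ {
		fmt.Printf("crash %d: wait %v before next restart\n", crash, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	// Output: 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s
}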
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:54.830020 master-0 kubenswrapper[7454]: I0319 11:58:54.829969 7454 scope.go:117] "RemoveContainer" containerID="5130296ba65834ed8eebf5136547f5b58340e0b2714dd3dba811f10381f648f5" Mar 19 11:58:55.660083 master-0 kubenswrapper[7454]: I0319 11:58:55.659988 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:55.660083 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:55.660083 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:55.660083 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:55.660524 master-0 kubenswrapper[7454]: I0319 11:58:55.660082 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:56.660122 master-0 kubenswrapper[7454]: I0319 11:58:56.660061 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:56.660122 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:56.660122 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:56.660122 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:56.660883 master-0 kubenswrapper[7454]: I0319 11:58:56.660134 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:57.659471 master-0 kubenswrapper[7454]: I0319 11:58:57.659376 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:57.659471 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:57.659471 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:57.659471 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:57.659845 master-0 kubenswrapper[7454]: I0319 11:58:57.659482 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:58.659783 master-0 kubenswrapper[7454]: I0319 11:58:58.659701 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:58.659783 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:58.659783 master-0 kubenswrapper[7454]: 
[+]process-running ok Mar 19 11:58:58.659783 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:58.659783 master-0 kubenswrapper[7454]: I0319 11:58:58.659764 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:58:59.659670 master-0 kubenswrapper[7454]: I0319 11:58:59.659606 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:58:59.659670 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:58:59.659670 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:58:59.659670 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:58:59.660426 master-0 kubenswrapper[7454]: I0319 11:58:59.659701 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:00.658767 master-0 kubenswrapper[7454]: I0319 11:59:00.658688 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:00.658767 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:00.658767 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:00.658767 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:00.659242 master-0 kubenswrapper[7454]: I0319 11:59:00.658789 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:01.659089 master-0 kubenswrapper[7454]: I0319 11:59:01.659007 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:01.659089 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:01.659089 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:01.659089 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:01.659089 master-0 kubenswrapper[7454]: I0319 11:59:01.659070 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:02.659353 master-0 kubenswrapper[7454]: I0319 11:59:02.659293 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:02.659353 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:02.659353 master-0 
Mar 19 11:59:05.633596 master-0 kubenswrapper[7454]: I0319 11:59:05.633559 7454 scope.go:117] "RemoveContainer" containerID="ff92d05d103782a47d08e29aa2fb79e226a87a90f33dcfc9e8b5555e427f0ce4"
[... identical router startup-probe failure pair at 11:59:05.659 omitted ...]
Mar 19 11:59:05.668613 master-0 kubenswrapper[7454]: I0319 11:59:05.668534 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-w8jqs" podStartSLOduration=15.668520722 podStartE2EDuration="15.668520722s" podCreationTimestamp="2026-03-19 11:58:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 11:58:52.558400111 +0000 UTC m=+302.188866054" watchObservedRunningTime="2026-03-19 11:59:05.668520722 +0000 UTC m=+315.298986635"
Mar 19 11:59:06.631684 master-0 kubenswrapper[7454]: I0319 11:59:06.631630 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-btppx_b80027fd-7b39-477a-a337-ff9bb08e7eeb/ingress-operator/1.log"
Mar 19 11:59:06.632232 master-0 kubenswrapper[7454]: I0319 11:59:06.632184 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" event={"ID":"b80027fd-7b39-477a-a337-ff9bb08e7eeb","Type":"ContainerStarted","Data":"e8132683509c67a65f018a1049a40400831c5e5aafa7f685a1489681ff42e257"}
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:09.660841 master-0 kubenswrapper[7454]: I0319 11:59:09.660738 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:09.660841 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:09.660841 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:09.660841 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:09.661906 master-0 kubenswrapper[7454]: I0319 11:59:09.660851 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:10.660868 master-0 kubenswrapper[7454]: I0319 11:59:10.660789 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:10.660868 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:10.660868 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:10.660868 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:10.661843 master-0 kubenswrapper[7454]: I0319 11:59:10.660871 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:11.659304 master-0 kubenswrapper[7454]: I0319 11:59:11.659241 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:11.659304 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:11.659304 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:11.659304 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:11.659620 master-0 kubenswrapper[7454]: I0319 11:59:11.659308 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:12.660075 master-0 kubenswrapper[7454]: I0319 11:59:12.659982 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:12.660075 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:12.660075 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:12.660075 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:12.660780 master-0 kubenswrapper[7454]: I0319 11:59:12.660095 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" 
podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:13.659990 master-0 kubenswrapper[7454]: I0319 11:59:13.659903 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:13.659990 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:13.659990 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:13.659990 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:13.660725 master-0 kubenswrapper[7454]: I0319 11:59:13.660000 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:14.660931 master-0 kubenswrapper[7454]: I0319 11:59:14.660856 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:14.660931 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:14.660931 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:14.660931 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:14.661918 master-0 kubenswrapper[7454]: I0319 11:59:14.660950 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:15.660117 master-0 kubenswrapper[7454]: I0319 11:59:15.660027 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:15.660117 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:15.660117 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:15.660117 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:15.660463 master-0 kubenswrapper[7454]: I0319 11:59:15.660148 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:16.659631 master-0 kubenswrapper[7454]: I0319 11:59:16.659580 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:16.659631 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:16.659631 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:16.659631 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:16.660333 master-0 kubenswrapper[7454]: I0319 11:59:16.659654 7454 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:17.659489 master-0 kubenswrapper[7454]: I0319 11:59:17.659403 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:17.659489 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:17.659489 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:17.659489 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:17.660529 master-0 kubenswrapper[7454]: I0319 11:59:17.659494 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:18.659610 master-0 kubenswrapper[7454]: I0319 11:59:18.659548 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:18.659610 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:18.659610 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:18.659610 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:18.660643 master-0 kubenswrapper[7454]: I0319 11:59:18.659623 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:19.660933 master-0 kubenswrapper[7454]: I0319 11:59:19.660866 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:19.660933 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:19.660933 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:19.660933 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:19.661507 master-0 kubenswrapper[7454]: I0319 11:59:19.660964 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:20.658824 master-0 kubenswrapper[7454]: I0319 11:59:20.658729 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:20.658824 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:20.658824 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:20.658824 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:20.659163 master-0 kubenswrapper[7454]: I0319 11:59:20.658824 7454 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:21.658899 master-0 kubenswrapper[7454]: I0319 11:59:21.658775 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:21.658899 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:21.658899 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:21.658899 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:21.659978 master-0 kubenswrapper[7454]: I0319 11:59:21.658908 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:22.660792 master-0 kubenswrapper[7454]: I0319 11:59:22.660723 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:22.660792 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:22.660792 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:22.660792 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:22.661855 master-0 kubenswrapper[7454]: I0319 11:59:22.660802 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:23.660401 master-0 kubenswrapper[7454]: I0319 11:59:23.660317 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:23.660401 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:23.660401 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:23.660401 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:23.660756 master-0 kubenswrapper[7454]: I0319 11:59:23.660429 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:24.660009 master-0 kubenswrapper[7454]: I0319 11:59:24.659893 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:24.660009 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:24.660009 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:24.660009 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:24.660605 master-0 kubenswrapper[7454]: I0319 11:59:24.660051 7454 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:25.659742 master-0 kubenswrapper[7454]: I0319 11:59:25.659655 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:25.659742 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:25.659742 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:25.659742 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:25.660382 master-0 kubenswrapper[7454]: I0319 11:59:25.659774 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:26.661484 master-0 kubenswrapper[7454]: I0319 11:59:26.661420 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:26.661484 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:26.661484 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:26.661484 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:26.662847 master-0 kubenswrapper[7454]: I0319 11:59:26.661492 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:27.660527 master-0 kubenswrapper[7454]: I0319 11:59:27.660430 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:27.660527 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:27.660527 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:27.660527 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:27.661032 master-0 kubenswrapper[7454]: I0319 11:59:27.660579 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:28.659516 master-0 kubenswrapper[7454]: I0319 11:59:28.659452 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:28.659516 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:28.659516 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:28.659516 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:28.660547 master-0 kubenswrapper[7454]: I0319 11:59:28.659538 7454 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:29.658846 master-0 kubenswrapper[7454]: I0319 11:59:29.658743 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:29.658846 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:29.658846 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:29.658846 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:29.659237 master-0 kubenswrapper[7454]: I0319 11:59:29.658852 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:30.660471 master-0 kubenswrapper[7454]: I0319 11:59:30.660370 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:30.660471 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:30.660471 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:30.660471 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:30.660471 master-0 kubenswrapper[7454]: I0319 11:59:30.660465 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:31.660919 master-0 kubenswrapper[7454]: I0319 11:59:31.660844 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:31.660919 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:31.660919 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:31.660919 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:31.661774 master-0 kubenswrapper[7454]: I0319 11:59:31.660947 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:32.659925 master-0 kubenswrapper[7454]: I0319 11:59:32.659876 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:32.659925 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:32.659925 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:32.659925 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:32.659925 master-0 kubenswrapper[7454]: I0319 11:59:32.659934 7454 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:33.660629 master-0 kubenswrapper[7454]: I0319 11:59:33.660559 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:33.660629 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:33.660629 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:33.660629 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:33.662271 master-0 kubenswrapper[7454]: I0319 11:59:33.660648 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:34.661382 master-0 kubenswrapper[7454]: I0319 11:59:34.661279 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:34.661382 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:34.661382 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:34.661382 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:34.661382 master-0 kubenswrapper[7454]: I0319 11:59:34.661369 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:35.659329 master-0 kubenswrapper[7454]: I0319 11:59:35.659262 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:35.659329 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:35.659329 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:35.659329 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:35.659604 master-0 kubenswrapper[7454]: I0319 11:59:35.659333 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:36.657965 master-0 kubenswrapper[7454]: I0319 11:59:36.657918 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:36.657965 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:36.657965 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:36.657965 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:36.658670 master-0 kubenswrapper[7454]: I0319 11:59:36.658644 
7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:37.662408 master-0 kubenswrapper[7454]: I0319 11:59:37.662315 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:37.662408 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:37.662408 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:37.662408 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:37.663309 master-0 kubenswrapper[7454]: I0319 11:59:37.662424 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:38.659819 master-0 kubenswrapper[7454]: I0319 11:59:38.659757 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:38.659819 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:38.659819 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:38.659819 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:38.660266 master-0 kubenswrapper[7454]: I0319 11:59:38.659889 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:39.765900 master-0 kubenswrapper[7454]: I0319 11:59:39.765855 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:39.765900 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:39.765900 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:39.765900 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:39.766615 master-0 kubenswrapper[7454]: I0319 11:59:39.766581 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:40.659544 master-0 kubenswrapper[7454]: I0319 11:59:40.659362 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:40.659544 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:40.659544 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:40.659544 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:40.659544 master-0 kubenswrapper[7454]: I0319 
11:59:40.659471 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:41.660030 master-0 kubenswrapper[7454]: I0319 11:59:41.659934 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:41.660030 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:41.660030 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:41.660030 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:41.660660 master-0 kubenswrapper[7454]: I0319 11:59:41.660034 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:42.659895 master-0 kubenswrapper[7454]: I0319 11:59:42.659787 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:42.659895 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:42.659895 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:42.659895 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:42.660767 master-0 kubenswrapper[7454]: I0319 11:59:42.659906 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:43.658910 master-0 kubenswrapper[7454]: I0319 11:59:43.658844 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 11:59:43.658910 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 11:59:43.658910 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 11:59:43.658910 master-0 kubenswrapper[7454]: healthz check failed Mar 19 11:59:43.658910 master-0 kubenswrapper[7454]: I0319 11:59:43.658945 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 11:59:43.659346 master-0 kubenswrapper[7454]: I0319 11:59:43.659004 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 11:59:43.659700 master-0 kubenswrapper[7454]: I0319 11:59:43.659663 7454 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"5b0f04d22c0c85eb93a91a7347f66800de8887e62876b70685d642e80dd0f769"} pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" containerMessage="Container router failed startup probe, will be restarted" Mar 19 
11:59:43.659773 master-0 kubenswrapper[7454]: I0319 11:59:43.659704 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" containerID="cri-o://5b0f04d22c0c85eb93a91a7347f66800de8887e62876b70685d642e80dd0f769" gracePeriod=3600 Mar 19 12:00:30.248593 master-0 kubenswrapper[7454]: I0319 12:00:30.248492 7454 generic.go:334] "Generic (PLEG): container finished" podID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerID="5b0f04d22c0c85eb93a91a7347f66800de8887e62876b70685d642e80dd0f769" exitCode=0 Mar 19 12:00:30.248593 master-0 kubenswrapper[7454]: I0319 12:00:30.248547 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" event={"ID":"91112ce6-4f9d-44c1-a4e7-fea126554bcf","Type":"ContainerDied","Data":"5b0f04d22c0c85eb93a91a7347f66800de8887e62876b70685d642e80dd0f769"} Mar 19 12:00:30.248593 master-0 kubenswrapper[7454]: I0319 12:00:30.248600 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" event={"ID":"91112ce6-4f9d-44c1-a4e7-fea126554bcf","Type":"ContainerStarted","Data":"fc66004bdf7840ad3f084c0dfa71eeb2520e8e4a081e3e6ac34bc77b6fbd71ea"} Mar 19 12:00:30.249297 master-0 kubenswrapper[7454]: I0319 12:00:30.248622 7454 scope.go:117] "RemoveContainer" containerID="2f120a0d94fdbfa9eb3c076343f202eb79687478095e8ae9cb88dc10339e167a" Mar 19 12:00:30.658067 master-0 kubenswrapper[7454]: I0319 12:00:30.657074 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 12:00:30.660414 master-0 kubenswrapper[7454]: I0319 12:00:30.660358 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:00:30.660414 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:00:30.660414 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:00:30.660414 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:00:30.660656 master-0 kubenswrapper[7454]: I0319 12:00:30.660434 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:00:31.659546 master-0 kubenswrapper[7454]: I0319 12:00:31.659465 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:00:31.659546 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:00:31.659546 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:00:31.659546 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:00:31.660269 master-0 kubenswrapper[7454]: I0319 12:00:31.659559 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:00:32.658469 master-0 kubenswrapper[7454]: I0319 
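The records above show the kubelet's HTTP startup probe loop: patch_prober.go logs the router's healthz sub-checks ([-]backend-http, [-]has-synced, [+]process-running) and prober.go records the failure because the endpoint returned HTTP 500. Once the probe's failure threshold is exceeded, the kubelet kills the container while honoring the pod's 3600s termination grace period (gracePeriod=3600, which is why the container killed at 11:59:43 only reports ContainerDied at 12:00:30) and starts a replacement. Below is a minimal Go sketch of the same kind of check; the host, port, and path (127.0.0.1:1936/healthz/ready, a common default for the OpenShift router) are assumptions, not values taken from this log.

package main

// probe_sketch.go: a rough stand-in for the kubelet HTTP probe behind the
// "Probe failed ... statuscode: 500" records above; not the kubelet's code.

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: time.Second} // probes use a short per-attempt timeout
	// Assumed endpoint: the default OpenShift router is commonly probed on port 1936.
	resp, err := client.Get("http://127.0.0.1:1936/healthz/ready")
	if err != nil {
		fmt.Println("probe error:", err)
		return
	}
	defer resp.Body.Close()
	// The kubelet logs only the start of the response body on failure.
	body, _ := io.ReadAll(io.LimitReader(resp.Body, 1024))
	// Kubelet's success rule for HTTP probes: 2xx and 3xx pass, everything else fails.
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		fmt.Printf("probe succeeded: %d\n", resp.StatusCode)
	} else {
		fmt.Printf("probe failed with statuscode: %d\nstart-of-body: %s\n", resp.StatusCode, body)
	}
}

Run against a healthy router this takes the success path; against the state captured here it would print the 500 and the same [-]backend-http / [-]has-synced body that patch_prober logs.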
[... identical Startup probe failures for the replacement router container, same healthz output as the 11:59:17 cycle above, repeated once per second from 12:00:30 through 12:00:36 ...]
Mar 19 12:00:37.657161 master-0 kubenswrapper[7454]: I0319 12:00:37.657092 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl"
[... identical Startup probe failures repeated once per second from 12:00:37 through 12:01:05 ...]
Mar 19 12:01:06.499787 master-0 kubenswrapper[7454]: I0319 12:01:06.499721 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-btppx_b80027fd-7b39-477a-a337-ff9bb08e7eeb/ingress-operator/2.log"
Mar 19 12:01:06.500515 master-0 kubenswrapper[7454]: I0319 12:01:06.500483 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-btppx_b80027fd-7b39-477a-a337-ff9bb08e7eeb/ingress-operator/1.log"
Mar 19 12:01:06.501104 master-0 kubenswrapper[7454]: I0319 12:01:06.501054 7454 generic.go:334] "Generic (PLEG): container finished" podID="b80027fd-7b39-477a-a337-ff9bb08e7eeb" containerID="e8132683509c67a65f018a1049a40400831c5e5aafa7f685a1489681ff42e257" exitCode=1
Mar 19 12:01:06.501209 master-0 kubenswrapper[7454]: I0319 12:01:06.501105 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" event={"ID":"b80027fd-7b39-477a-a337-ff9bb08e7eeb","Type":"ContainerDied","Data":"e8132683509c67a65f018a1049a40400831c5e5aafa7f685a1489681ff42e257"}
Mar 19 12:01:06.501209 master-0 kubenswrapper[7454]: I0319 12:01:06.501154 7454 scope.go:117] "RemoveContainer" containerID="ff92d05d103782a47d08e29aa2fb79e226a87a90f33dcfc9e8b5555e427f0ce4"
Mar 19 12:01:06.502074 master-0 kubenswrapper[7454]: I0319 12:01:06.502018 7454 scope.go:117] "RemoveContainer" containerID="e8132683509c67a65f018a1049a40400831c5e5aafa7f685a1489681ff42e257"
Mar 19 12:01:06.503267 master-0 kubenswrapper[7454]: E0319 12:01:06.502573 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-btppx_openshift-ingress-operator(b80027fd-7b39-477a-a337-ff9bb08e7eeb)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" podUID="b80027fd-7b39-477a-a337-ff9bb08e7eeb"
Mar 19 12:01:07.513058 master-0 kubenswrapper[7454]: I0319 12:01:07.513009 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-btppx_b80027fd-7b39-477a-a337-ff9bb08e7eeb/ingress-operator/2.log"
[... identical Startup probe failures for router-default-7dcf5569b5-lkpgl repeated once per second from 12:01:06 through 12:01:19 ...]
Mar 19 12:01:20.636259 master-0 kubenswrapper[7454]: I0319 12:01:20.636183 7454 scope.go:117] "RemoveContainer" containerID="e8132683509c67a65f018a1049a40400831c5e5aafa7f685a1489681ff42e257"
Mar 19 12:01:20.636564 master-0 kubenswrapper[7454]: E0319 12:01:20.636412 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-btppx_openshift-ingress-operator(b80027fd-7b39-477a-a337-ff9bb08e7eeb)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" podUID="b80027fd-7b39-477a-a337-ff9bb08e7eeb"
Mar 19 12:01:20.660775 master-0 kubenswrapper[7454]: I0319 12:01:20.660699 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:01:20.660775 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:01:20.660775 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:01:20.660775 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:01:20.661599 master-0 kubenswrapper[7454]: I0319 12:01:20.660783 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:01:21.659116 master-0 kubenswrapper[7454]: I0319 12:01:21.659039 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:01:21.659116 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:01:21.659116 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:01:21.659116 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:01:21.659116 master-0 kubenswrapper[7454]: I0319 12:01:21.659097 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:01:22.659927 master-0 kubenswrapper[7454]: I0319 12:01:22.659859 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:01:22.659927 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:01:22.659927 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:01:22.659927 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:01:22.660726 master-0 kubenswrapper[7454]: I0319 12:01:22.659956 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:01:23.666689 master-0 kubenswrapper[7454]: I0319 12:01:23.666626 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:01:23.666689 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:01:23.666689 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:01:23.666689 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:01:23.667283 master-0 kubenswrapper[7454]: I0319 12:01:23.666717 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:01:24.658979 master-0 kubenswrapper[7454]: I0319 12:01:24.658917 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:01:24.658979 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:01:24.658979 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:01:24.658979 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:01:24.658979 master-0 kubenswrapper[7454]: I0319 12:01:24.658975 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:01:25.660079 master-0 kubenswrapper[7454]: I0319 12:01:25.659992 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:01:25.660079 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:01:25.660079 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:01:25.660079 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:01:25.660852 master-0 kubenswrapper[7454]: I0319 12:01:25.660108 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:01:26.659243 master-0 kubenswrapper[7454]: I0319 12:01:26.659096 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:01:26.659243 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:01:26.659243 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:01:26.659243 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:01:26.659243 master-0 kubenswrapper[7454]: I0319 12:01:26.659207 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:01:27.659945 master-0 kubenswrapper[7454]: I0319 12:01:27.659881 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:01:27.659945 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:01:27.659945 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:01:27.659945 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:01:27.659945 master-0 kubenswrapper[7454]: I0319 12:01:27.659943 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:01:28.658733 master-0 kubenswrapper[7454]: I0319 12:01:28.658667 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:01:28.658733 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:01:28.658733 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:01:28.658733 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:01:28.658733 master-0 kubenswrapper[7454]: I0319 12:01:28.658731 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:01:29.659052 master-0 kubenswrapper[7454]: I0319 12:01:29.658950 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:01:29.659052 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:01:29.659052 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:01:29.659052 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:01:29.659759 master-0 kubenswrapper[7454]: I0319 12:01:29.659106 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:01:30.660739 master-0 kubenswrapper[7454]: I0319 12:01:30.660656 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:01:30.660739 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:01:30.660739 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:01:30.660739 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:01:30.661447 master-0 kubenswrapper[7454]: I0319 12:01:30.660757 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:01:31.660316 master-0 kubenswrapper[7454]: I0319 12:01:31.660204 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:01:31.660316 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:01:31.660316 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:01:31.660316 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:01:31.660316 master-0 kubenswrapper[7454]: I0319 12:01:31.660302 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:01:32.633753 master-0 kubenswrapper[7454]: I0319 12:01:32.633708 7454 scope.go:117] "RemoveContainer" containerID="e8132683509c67a65f018a1049a40400831c5e5aafa7f685a1489681ff42e257" Mar 
19 12:01:32.659766 master-0 kubenswrapper[7454]: I0319 12:01:32.659584 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:01:32.659766 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:01:32.659766 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:01:32.659766 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:01:32.659766 master-0 kubenswrapper[7454]: I0319 12:01:32.659633 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:01:33.659406 master-0 kubenswrapper[7454]: I0319 12:01:33.659311 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:01:33.659406 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:01:33.659406 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:01:33.659406 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:01:33.660151 master-0 kubenswrapper[7454]: I0319 12:01:33.659412 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:01:33.700351 master-0 kubenswrapper[7454]: I0319 12:01:33.700282 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-btppx_b80027fd-7b39-477a-a337-ff9bb08e7eeb/ingress-operator/2.log" Mar 19 12:01:33.700896 master-0 kubenswrapper[7454]: I0319 12:01:33.700864 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" event={"ID":"b80027fd-7b39-477a-a337-ff9bb08e7eeb","Type":"ContainerStarted","Data":"0618d6d0445d7e095cd15b094fe882be49fcec49db027db4fe7de076025a2a7e"} Mar 19 12:01:34.659678 master-0 kubenswrapper[7454]: I0319 12:01:34.659595 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:01:34.659678 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:01:34.659678 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:01:34.659678 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:01:34.659678 master-0 kubenswrapper[7454]: I0319 12:01:34.659669 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:01:35.658821 master-0 kubenswrapper[7454]: I0319 12:01:35.658741 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 19 12:01:35.658821 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:01:35.658821 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:01:35.658821 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:01:35.659112 master-0 kubenswrapper[7454]: I0319 12:01:35.658857 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:01:36.351053 master-0 kubenswrapper[7454]: I0319 12:01:36.350977 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-2xxmn"] Mar 19 12:01:36.351894 master-0 kubenswrapper[7454]: I0319 12:01:36.351854 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-2xxmn" Mar 19 12:01:36.353780 master-0 kubenswrapper[7454]: I0319 12:01:36.353738 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-f7j74" Mar 19 12:01:36.353876 master-0 kubenswrapper[7454]: I0319 12:01:36.353857 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Mar 19 12:01:36.431113 master-0 kubenswrapper[7454]: I0319 12:01:36.431043 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6bj8\" (UniqueName: \"kubernetes.io/projected/29e11ec4-f565-4b35-8f1e-0dddb8473b05-kube-api-access-h6bj8\") pod \"cni-sysctl-allowlist-ds-2xxmn\" (UID: \"29e11ec4-f565-4b35-8f1e-0dddb8473b05\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2xxmn" Mar 19 12:01:36.431351 master-0 kubenswrapper[7454]: I0319 12:01:36.431124 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/29e11ec4-f565-4b35-8f1e-0dddb8473b05-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-2xxmn\" (UID: \"29e11ec4-f565-4b35-8f1e-0dddb8473b05\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2xxmn" Mar 19 12:01:36.431351 master-0 kubenswrapper[7454]: I0319 12:01:36.431166 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/29e11ec4-f565-4b35-8f1e-0dddb8473b05-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-2xxmn\" (UID: \"29e11ec4-f565-4b35-8f1e-0dddb8473b05\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2xxmn" Mar 19 12:01:36.431351 master-0 kubenswrapper[7454]: I0319 12:01:36.431189 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/29e11ec4-f565-4b35-8f1e-0dddb8473b05-ready\") pod \"cni-sysctl-allowlist-ds-2xxmn\" (UID: \"29e11ec4-f565-4b35-8f1e-0dddb8473b05\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2xxmn" Mar 19 12:01:36.532351 master-0 kubenswrapper[7454]: I0319 12:01:36.532280 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/29e11ec4-f565-4b35-8f1e-0dddb8473b05-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-2xxmn\" (UID: \"29e11ec4-f565-4b35-8f1e-0dddb8473b05\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2xxmn" Mar 19 12:01:36.532351 master-0 
kubenswrapper[7454]: I0319 12:01:36.532357 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/29e11ec4-f565-4b35-8f1e-0dddb8473b05-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-2xxmn\" (UID: \"29e11ec4-f565-4b35-8f1e-0dddb8473b05\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2xxmn" Mar 19 12:01:36.532351 master-0 kubenswrapper[7454]: I0319 12:01:36.532361 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/29e11ec4-f565-4b35-8f1e-0dddb8473b05-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-2xxmn\" (UID: \"29e11ec4-f565-4b35-8f1e-0dddb8473b05\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2xxmn" Mar 19 12:01:36.532751 master-0 kubenswrapper[7454]: I0319 12:01:36.532497 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/29e11ec4-f565-4b35-8f1e-0dddb8473b05-ready\") pod \"cni-sysctl-allowlist-ds-2xxmn\" (UID: \"29e11ec4-f565-4b35-8f1e-0dddb8473b05\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2xxmn" Mar 19 12:01:36.532751 master-0 kubenswrapper[7454]: I0319 12:01:36.532576 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6bj8\" (UniqueName: \"kubernetes.io/projected/29e11ec4-f565-4b35-8f1e-0dddb8473b05-kube-api-access-h6bj8\") pod \"cni-sysctl-allowlist-ds-2xxmn\" (UID: \"29e11ec4-f565-4b35-8f1e-0dddb8473b05\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2xxmn" Mar 19 12:01:36.533028 master-0 kubenswrapper[7454]: I0319 12:01:36.532998 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/29e11ec4-f565-4b35-8f1e-0dddb8473b05-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-2xxmn\" (UID: \"29e11ec4-f565-4b35-8f1e-0dddb8473b05\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2xxmn" Mar 19 12:01:36.533250 master-0 kubenswrapper[7454]: I0319 12:01:36.533197 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/29e11ec4-f565-4b35-8f1e-0dddb8473b05-ready\") pod \"cni-sysctl-allowlist-ds-2xxmn\" (UID: \"29e11ec4-f565-4b35-8f1e-0dddb8473b05\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2xxmn" Mar 19 12:01:36.547579 master-0 kubenswrapper[7454]: I0319 12:01:36.547435 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6bj8\" (UniqueName: \"kubernetes.io/projected/29e11ec4-f565-4b35-8f1e-0dddb8473b05-kube-api-access-h6bj8\") pod \"cni-sysctl-allowlist-ds-2xxmn\" (UID: \"29e11ec4-f565-4b35-8f1e-0dddb8473b05\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2xxmn" Mar 19 12:01:36.660546 master-0 kubenswrapper[7454]: I0319 12:01:36.660396 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:01:36.660546 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:01:36.660546 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:01:36.660546 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:01:36.660546 master-0 kubenswrapper[7454]: I0319 12:01:36.660492 7454 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:01:36.671001 master-0 kubenswrapper[7454]: I0319 12:01:36.670414 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-2xxmn" Mar 19 12:01:36.722076 master-0 kubenswrapper[7454]: I0319 12:01:36.721998 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-2xxmn" event={"ID":"29e11ec4-f565-4b35-8f1e-0dddb8473b05","Type":"ContainerStarted","Data":"7e727db50b790d5f4dade75045fa787af146864b65041c118cacc4ddf2f13bcc"} Mar 19 12:01:37.659499 master-0 kubenswrapper[7454]: I0319 12:01:37.659439 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:01:37.659499 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:01:37.659499 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:01:37.659499 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:01:37.660292 master-0 kubenswrapper[7454]: I0319 12:01:37.659515 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:01:37.729548 master-0 kubenswrapper[7454]: I0319 12:01:37.729492 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-2xxmn" event={"ID":"29e11ec4-f565-4b35-8f1e-0dddb8473b05","Type":"ContainerStarted","Data":"94601aeb96019578fbdad1319a920055ffb76524014eeb01d01d90f9cc97bee4"} Mar 19 12:01:37.729823 master-0 kubenswrapper[7454]: I0319 12:01:37.729772 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-2xxmn" Mar 19 12:01:37.753385 master-0 kubenswrapper[7454]: I0319 12:01:37.753337 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-2xxmn" Mar 19 12:01:37.753647 master-0 kubenswrapper[7454]: I0319 12:01:37.753595 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-2xxmn" podStartSLOduration=1.753579841 podStartE2EDuration="1.753579841s" podCreationTimestamp="2026-03-19 12:01:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:01:37.753337643 +0000 UTC m=+467.383803556" watchObservedRunningTime="2026-03-19 12:01:37.753579841 +0000 UTC m=+467.384045744" Mar 19 12:01:38.337113 master-0 kubenswrapper[7454]: I0319 12:01:38.337046 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-2xxmn"] Mar 19 12:01:38.659605 master-0 kubenswrapper[7454]: I0319 12:01:38.659392 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:01:38.659605 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 
12:01:38.659605 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:01:38.659605 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:01:38.659605 master-0 kubenswrapper[7454]: I0319 12:01:38.659526 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:01:39.660016 master-0 kubenswrapper[7454]: I0319 12:01:39.659973 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:01:39.660016 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:01:39.660016 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:01:39.660016 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:01:39.660692 master-0 kubenswrapper[7454]: I0319 12:01:39.660666 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:01:39.743072 master-0 kubenswrapper[7454]: I0319 12:01:39.743009 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-2xxmn" podUID="29e11ec4-f565-4b35-8f1e-0dddb8473b05" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://94601aeb96019578fbdad1319a920055ffb76524014eeb01d01d90f9cc97bee4" gracePeriod=30 Mar 19 12:01:40.659495 master-0 kubenswrapper[7454]: I0319 12:01:40.659438 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:01:40.659495 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:01:40.659495 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:01:40.659495 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:01:40.660623 master-0 kubenswrapper[7454]: I0319 12:01:40.659520 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:01:41.659355 master-0 kubenswrapper[7454]: I0319 12:01:41.659297 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:01:41.659355 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:01:41.659355 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:01:41.659355 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:01:41.659711 master-0 kubenswrapper[7454]: I0319 12:01:41.659366 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 
12:01:42.658478 master-0 kubenswrapper[7454]: I0319 12:01:42.658436 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:01:42.658478 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:01:42.658478 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:01:42.658478 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:01:42.659037 master-0 kubenswrapper[7454]: I0319 12:01:42.658495 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:01:43.660355 master-0 kubenswrapper[7454]: I0319 12:01:43.660279 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:01:43.660355 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:01:43.660355 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:01:43.660355 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:01:43.661000 master-0 kubenswrapper[7454]: I0319 12:01:43.660374 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:01:44.659096 master-0 kubenswrapper[7454]: I0319 12:01:44.659040 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:01:44.659096 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:01:44.659096 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:01:44.659096 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:01:44.659096 master-0 kubenswrapper[7454]: I0319 12:01:44.659099 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:01:45.658691 master-0 kubenswrapper[7454]: I0319 12:01:45.658630 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:01:45.658691 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:01:45.658691 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:01:45.658691 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:01:45.658691 master-0 kubenswrapper[7454]: I0319 12:01:45.658689 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" 
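
The long run of entries above is kubelet's startup prober re-checking the router's health endpoint once a second and logging the failing sub-checks it gets back ([-]backend-http, [-]has-synced). As a rough illustration of the probe semantics at work — a minimal sketch only, with a hypothetical local stand-in for the router's healthz URL (the real target is the router pod's health port) — kubelet counts any HTTP status in [200, 400) as success and logs only the start of the response body, which is what the start-of-body= fields above reflect:

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // probeOnce issues a single HTTP GET the way kubelet's HTTP prober does:
    // any status code in [200, 400) counts as success, everything else fails.
    func probeOnce(url string) (ok bool, status int, body string, err error) {
        client := &http.Client{Timeout: time.Second}
        resp, err := client.Get(url)
        if err != nil {
            return false, 0, "", err
        }
        defer resp.Body.Close()
        // Read only a prefix of the body, mirroring the truncated
        // "start-of-body" that kubelet puts in its log message.
        b, _ := io.ReadAll(io.LimitReader(resp.Body, 1024))
        return resp.StatusCode >= 200 && resp.StatusCode < 400, resp.StatusCode, string(b), nil
    }

    func main() {
        // Hypothetical stand-in endpoint; not taken from the journal above.
        url := "http://127.0.0.1:1936/healthz/ready"
        for {
            ok, status, body, err := probeOnce(url)
            if err != nil {
                fmt.Println("probe error:", err)
            } else if !ok {
                fmt.Printf("HTTP probe failed with statuscode: %d\nstart-of-body=%s\n", status, body)
            } else {
                fmt.Println("probe ok")
                return
            }
            time.Sleep(time.Second) // matches the once-per-second cadence in the journal
        }
    }

Once the endpoint starts returning a 2xx status the loop exits — the point at which the repeated failure entries in the journal would stop.
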
Mar 19 12:01:46.136024 master-0 kubenswrapper[7454]: I0319 12:01:46.135960 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-58c9f8fc64-dqgd9"] Mar 19 12:01:46.137416 master-0 kubenswrapper[7454]: I0319 12:01:46.137379 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-58c9f8fc64-dqgd9" Mar 19 12:01:46.139787 master-0 kubenswrapper[7454]: I0319 12:01:46.139732 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-77kwj" Mar 19 12:01:46.147998 master-0 kubenswrapper[7454]: I0319 12:01:46.147951 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-58c9f8fc64-dqgd9"] Mar 19 12:01:46.167884 master-0 kubenswrapper[7454]: I0319 12:01:46.163675 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8n22\" (UniqueName: \"kubernetes.io/projected/1c2a33ba-76d0-4b81-a41d-9da16fd46209-kube-api-access-k8n22\") pod \"multus-admission-controller-58c9f8fc64-dqgd9\" (UID: \"1c2a33ba-76d0-4b81-a41d-9da16fd46209\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-dqgd9" Mar 19 12:01:46.167884 master-0 kubenswrapper[7454]: I0319 12:01:46.163787 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1c2a33ba-76d0-4b81-a41d-9da16fd46209-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-dqgd9\" (UID: \"1c2a33ba-76d0-4b81-a41d-9da16fd46209\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-dqgd9" Mar 19 12:01:46.264443 master-0 kubenswrapper[7454]: I0319 12:01:46.264379 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1c2a33ba-76d0-4b81-a41d-9da16fd46209-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-dqgd9\" (UID: \"1c2a33ba-76d0-4b81-a41d-9da16fd46209\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-dqgd9" Mar 19 12:01:46.264665 master-0 kubenswrapper[7454]: I0319 12:01:46.264455 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8n22\" (UniqueName: \"kubernetes.io/projected/1c2a33ba-76d0-4b81-a41d-9da16fd46209-kube-api-access-k8n22\") pod \"multus-admission-controller-58c9f8fc64-dqgd9\" (UID: \"1c2a33ba-76d0-4b81-a41d-9da16fd46209\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-dqgd9" Mar 19 12:01:46.269616 master-0 kubenswrapper[7454]: I0319 12:01:46.269548 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1c2a33ba-76d0-4b81-a41d-9da16fd46209-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-dqgd9\" (UID: \"1c2a33ba-76d0-4b81-a41d-9da16fd46209\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-dqgd9" Mar 19 12:01:46.291573 master-0 kubenswrapper[7454]: I0319 12:01:46.291519 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8n22\" (UniqueName: \"kubernetes.io/projected/1c2a33ba-76d0-4b81-a41d-9da16fd46209-kube-api-access-k8n22\") pod \"multus-admission-controller-58c9f8fc64-dqgd9\" (UID: \"1c2a33ba-76d0-4b81-a41d-9da16fd46209\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-dqgd9" Mar 19 12:01:46.454887 master-0 kubenswrapper[7454]: I0319 
12:01:46.454735 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-58c9f8fc64-dqgd9" Mar 19 12:01:46.659836 master-0 kubenswrapper[7454]: I0319 12:01:46.659778 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:01:46.659836 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:01:46.659836 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:01:46.659836 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:01:46.660481 master-0 kubenswrapper[7454]: I0319 12:01:46.660449 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:01:46.673745 master-0 kubenswrapper[7454]: E0319 12:01:46.673681 7454 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="94601aeb96019578fbdad1319a920055ffb76524014eeb01d01d90f9cc97bee4" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 19 12:01:46.677898 master-0 kubenswrapper[7454]: E0319 12:01:46.677833 7454 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="94601aeb96019578fbdad1319a920055ffb76524014eeb01d01d90f9cc97bee4" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 19 12:01:46.683151 master-0 kubenswrapper[7454]: E0319 12:01:46.683091 7454 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="94601aeb96019578fbdad1319a920055ffb76524014eeb01d01d90f9cc97bee4" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 19 12:01:46.683248 master-0 kubenswrapper[7454]: E0319 12:01:46.683169 7454 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-2xxmn" podUID="29e11ec4-f565-4b35-8f1e-0dddb8473b05" containerName="kube-multus-additional-cni-plugins" Mar 19 12:01:46.849147 master-0 kubenswrapper[7454]: I0319 12:01:46.849088 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-58c9f8fc64-dqgd9"] Mar 19 12:01:47.659348 master-0 kubenswrapper[7454]: I0319 12:01:47.659273 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:01:47.659348 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:01:47.659348 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:01:47.659348 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:01:47.659658 master-0 kubenswrapper[7454]: I0319 12:01:47.659360 7454 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:01:47.786691 master-0 kubenswrapper[7454]: I0319 12:01:47.786622 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-58c9f8fc64-dqgd9" event={"ID":"1c2a33ba-76d0-4b81-a41d-9da16fd46209","Type":"ContainerStarted","Data":"60b8b75c220d8a01957c8a03b599e038d714955e763a8b75ff7ecba2a91d234a"} Mar 19 12:01:47.786691 master-0 kubenswrapper[7454]: I0319 12:01:47.786674 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-58c9f8fc64-dqgd9" event={"ID":"1c2a33ba-76d0-4b81-a41d-9da16fd46209","Type":"ContainerStarted","Data":"4f3a6761e4d7558088acbd783b7aae43598ed70e3e26813213cf59996cdc0e7c"} Mar 19 12:01:47.786691 master-0 kubenswrapper[7454]: I0319 12:01:47.786683 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-58c9f8fc64-dqgd9" event={"ID":"1c2a33ba-76d0-4b81-a41d-9da16fd46209","Type":"ContainerStarted","Data":"c364dba2c743db6a6431b4c04a672e744dc16c7056590a2f4b28394bd78f6fc7"} Mar 19 12:01:47.803811 master-0 kubenswrapper[7454]: I0319 12:01:47.803701 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-58c9f8fc64-dqgd9" podStartSLOduration=1.803676198 podStartE2EDuration="1.803676198s" podCreationTimestamp="2026-03-19 12:01:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:01:47.800835435 +0000 UTC m=+477.431301348" watchObservedRunningTime="2026-03-19 12:01:47.803676198 +0000 UTC m=+477.434142131" Mar 19 12:01:47.836307 master-0 kubenswrapper[7454]: I0319 12:01:47.836201 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg"] Mar 19 12:01:47.836534 master-0 kubenswrapper[7454]: I0319 12:01:47.836426 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg" podUID="806a4c30-7b93-4430-86da-f9e1f4f2d206" containerName="multus-admission-controller" containerID="cri-o://32946350fbb40f17e1bf84fa3bef60ee89587d671dd1dca0cb3ac265a9a51704" gracePeriod=30 Mar 19 12:01:47.836726 master-0 kubenswrapper[7454]: I0319 12:01:47.836680 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg" podUID="806a4c30-7b93-4430-86da-f9e1f4f2d206" containerName="kube-rbac-proxy" containerID="cri-o://5af120083ccfa19775f3cfbcd29e655aebb641b4ecf435859e1f29291e7340f7" gracePeriod=30 Mar 19 12:01:48.659382 master-0 kubenswrapper[7454]: I0319 12:01:48.659331 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:01:48.659382 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:01:48.659382 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:01:48.659382 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:01:48.659699 master-0 kubenswrapper[7454]: I0319 12:01:48.659399 7454 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:01:48.799406 master-0 kubenswrapper[7454]: I0319 12:01:48.799349 7454 generic.go:334] "Generic (PLEG): container finished" podID="806a4c30-7b93-4430-86da-f9e1f4f2d206" containerID="5af120083ccfa19775f3cfbcd29e655aebb641b4ecf435859e1f29291e7340f7" exitCode=0 Mar 19 12:01:48.799928 master-0 kubenswrapper[7454]: I0319 12:01:48.799425 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg" event={"ID":"806a4c30-7b93-4430-86da-f9e1f4f2d206","Type":"ContainerDied","Data":"5af120083ccfa19775f3cfbcd29e655aebb641b4ecf435859e1f29291e7340f7"} Mar 19 12:01:49.658909 master-0 kubenswrapper[7454]: I0319 12:01:49.658851 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:01:49.658909 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:01:49.658909 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:01:49.658909 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:01:49.659228 master-0 kubenswrapper[7454]: I0319 12:01:49.658934 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:01:50.658673 master-0 kubenswrapper[7454]: I0319 12:01:50.658617 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:01:50.658673 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:01:50.658673 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:01:50.658673 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:01:50.659299 master-0 kubenswrapper[7454]: I0319 12:01:50.658684 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:01:51.659388 master-0 kubenswrapper[7454]: I0319 12:01:51.659312 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:01:51.659388 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:01:51.659388 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:01:51.659388 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:01:51.659388 master-0 kubenswrapper[7454]: I0319 12:01:51.659383 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:01:52.437535 master-0 kubenswrapper[7454]: 
I0319 12:01:52.437465 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"]
Mar 19 12:01:52.438333 master-0 kubenswrapper[7454]: I0319 12:01:52.438298 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 19 12:01:52.443666 master-0 kubenswrapper[7454]: I0319 12:01:52.443596 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kqbhm"
Mar 19 12:01:52.443904 master-0 kubenswrapper[7454]: I0319 12:01:52.443857 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Mar 19 12:01:52.452029 master-0 kubenswrapper[7454]: I0319 12:01:52.451971 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"]
Mar 19 12:01:52.546947 master-0 kubenswrapper[7454]: I0319 12:01:52.546884 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b425669d-6f80-4a2b-b2f2-5c6766654c6c-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"b425669d-6f80-4a2b-b2f2-5c6766654c6c\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 19 12:01:52.546947 master-0 kubenswrapper[7454]: I0319 12:01:52.546946 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b425669d-6f80-4a2b-b2f2-5c6766654c6c-var-lock\") pod \"installer-3-master-0\" (UID: \"b425669d-6f80-4a2b-b2f2-5c6766654c6c\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 19 12:01:52.547183 master-0 kubenswrapper[7454]: I0319 12:01:52.547138 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b425669d-6f80-4a2b-b2f2-5c6766654c6c-kube-api-access\") pod \"installer-3-master-0\" (UID: \"b425669d-6f80-4a2b-b2f2-5c6766654c6c\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 19 12:01:52.648811 master-0 kubenswrapper[7454]: I0319 12:01:52.648748 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b425669d-6f80-4a2b-b2f2-5c6766654c6c-var-lock\") pod \"installer-3-master-0\" (UID: \"b425669d-6f80-4a2b-b2f2-5c6766654c6c\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 19 12:01:52.649023 master-0 kubenswrapper[7454]: I0319 12:01:52.648869 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b425669d-6f80-4a2b-b2f2-5c6766654c6c-var-lock\") pod \"installer-3-master-0\" (UID: \"b425669d-6f80-4a2b-b2f2-5c6766654c6c\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 19 12:01:52.649023 master-0 kubenswrapper[7454]: I0319 12:01:52.648909 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b425669d-6f80-4a2b-b2f2-5c6766654c6c-kube-api-access\") pod \"installer-3-master-0\" (UID: \"b425669d-6f80-4a2b-b2f2-5c6766654c6c\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 19 12:01:52.649276 master-0 kubenswrapper[7454]: I0319 12:01:52.649244 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b425669d-6f80-4a2b-b2f2-5c6766654c6c-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"b425669d-6f80-4a2b-b2f2-5c6766654c6c\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 19 12:01:52.649337 master-0 kubenswrapper[7454]: I0319 12:01:52.649324 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b425669d-6f80-4a2b-b2f2-5c6766654c6c-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"b425669d-6f80-4a2b-b2f2-5c6766654c6c\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 19 12:01:52.659603 master-0 kubenswrapper[7454]: I0319 12:01:52.659557 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:01:52.659603 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:01:52.659603 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:01:52.659603 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:01:52.660142 master-0 kubenswrapper[7454]: I0319 12:01:52.659606 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:01:52.663243 master-0 kubenswrapper[7454]: I0319 12:01:52.663218 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b425669d-6f80-4a2b-b2f2-5c6766654c6c-kube-api-access\") pod \"installer-3-master-0\" (UID: \"b425669d-6f80-4a2b-b2f2-5c6766654c6c\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 19 12:01:52.763836 master-0 kubenswrapper[7454]: I0319 12:01:52.763743 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 19 12:01:53.178341 master-0 kubenswrapper[7454]: I0319 12:01:53.176866 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"]
Mar 19 12:01:53.658503 master-0 kubenswrapper[7454]: I0319 12:01:53.658413 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:01:53.658503 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:01:53.658503 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:01:53.658503 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:01:53.658503 master-0 kubenswrapper[7454]: I0319 12:01:53.658484 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:01:53.834035 master-0 kubenswrapper[7454]: I0319 12:01:53.833850 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"b425669d-6f80-4a2b-b2f2-5c6766654c6c","Type":"ContainerStarted","Data":"4f12a6a6377eb63e234161ff939d40e45bfc8d6ae4fa1554dca2cf62421fb52b"}
Mar 19 12:01:53.834035 master-0 kubenswrapper[7454]: I0319 12:01:53.833908 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"b425669d-6f80-4a2b-b2f2-5c6766654c6c","Type":"ContainerStarted","Data":"bd02abc3df1ea2ca997096da3d27136acff3102126289b07e7fa867e530a0c53"}
Mar 19 12:01:53.871545 master-0 kubenswrapper[7454]: I0319 12:01:53.871444 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-master-0" podStartSLOduration=1.871415865 podStartE2EDuration="1.871415865s" podCreationTimestamp="2026-03-19 12:01:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:01:53.867979623 +0000 UTC m=+483.498445586" watchObservedRunningTime="2026-03-19 12:01:53.871415865 +0000 UTC m=+483.501881778"
Mar 19 12:01:54.659609 master-0 kubenswrapper[7454]: I0319 12:01:54.659493 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:01:54.659609 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:01:54.659609 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:01:54.659609 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:01:54.659609 master-0 kubenswrapper[7454]: I0319 12:01:54.659552 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:01:55.663200 master-0 kubenswrapper[7454]: I0319 12:01:55.663110 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:01:55.663200 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:01:55.663200 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:01:55.663200 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:01:55.663820 master-0 kubenswrapper[7454]: I0319 12:01:55.663197 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:01:56.659852 master-0 kubenswrapper[7454]: I0319 12:01:56.659604 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:01:56.659852 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:01:56.659852 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:01:56.659852 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:01:56.659852 master-0 kubenswrapper[7454]: I0319 12:01:56.659681 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:01:56.672926 master-0 kubenswrapper[7454]: E0319 12:01:56.672844 7454 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="94601aeb96019578fbdad1319a920055ffb76524014eeb01d01d90f9cc97bee4" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 19 12:01:56.675579 master-0 kubenswrapper[7454]: E0319 12:01:56.675465 7454 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="94601aeb96019578fbdad1319a920055ffb76524014eeb01d01d90f9cc97bee4" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 19 12:01:56.679319 master-0 kubenswrapper[7454]: E0319 12:01:56.679254 7454 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="94601aeb96019578fbdad1319a920055ffb76524014eeb01d01d90f9cc97bee4" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 19 12:01:56.679403 master-0 kubenswrapper[7454]: E0319 12:01:56.679334 7454 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-2xxmn" podUID="29e11ec4-f565-4b35-8f1e-0dddb8473b05" containerName="kube-multus-additional-cni-plugins"
Mar 19 12:01:57.151363 master-0 kubenswrapper[7454]: I0319 12:01:57.151288 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"]
Mar 19 12:01:57.152247 master-0 kubenswrapper[7454]: I0319 12:01:57.152219 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Mar 19 12:01:57.154121 master-0 kubenswrapper[7454]: I0319 12:01:57.154068 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-dqvjj"
Mar 19 12:01:57.154924 master-0 kubenswrapper[7454]: I0319 12:01:57.154894 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Mar 19 12:01:57.170278 master-0 kubenswrapper[7454]: I0319 12:01:57.170236 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"]
Mar 19 12:01:57.222981 master-0 kubenswrapper[7454]: I0319 12:01:57.222913 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/12d71593-ee54-4321-bc0f-a24261873bd1-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"12d71593-ee54-4321-bc0f-a24261873bd1\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 19 12:01:57.222981 master-0 kubenswrapper[7454]: I0319 12:01:57.222961 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/12d71593-ee54-4321-bc0f-a24261873bd1-var-lock\") pod \"installer-4-master-0\" (UID: \"12d71593-ee54-4321-bc0f-a24261873bd1\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 19 12:01:57.222981 master-0 kubenswrapper[7454]: I0319 12:01:57.222996 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12d71593-ee54-4321-bc0f-a24261873bd1-kube-api-access\") pod \"installer-4-master-0\" (UID: \"12d71593-ee54-4321-bc0f-a24261873bd1\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 19 12:01:57.325036 master-0 kubenswrapper[7454]: I0319 12:01:57.324950 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/12d71593-ee54-4321-bc0f-a24261873bd1-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"12d71593-ee54-4321-bc0f-a24261873bd1\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 19 12:01:57.325036 master-0 kubenswrapper[7454]: I0319 12:01:57.325023 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/12d71593-ee54-4321-bc0f-a24261873bd1-var-lock\") pod \"installer-4-master-0\" (UID: \"12d71593-ee54-4321-bc0f-a24261873bd1\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 19 12:01:57.325300 master-0 kubenswrapper[7454]: I0319 12:01:57.325062 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12d71593-ee54-4321-bc0f-a24261873bd1-kube-api-access\") pod \"installer-4-master-0\" (UID: \"12d71593-ee54-4321-bc0f-a24261873bd1\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 19 12:01:57.325300 master-0 kubenswrapper[7454]: I0319 12:01:57.325248 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/12d71593-ee54-4321-bc0f-a24261873bd1-var-lock\") pod \"installer-4-master-0\" (UID: \"12d71593-ee54-4321-bc0f-a24261873bd1\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 19 12:01:57.325418 master-0 kubenswrapper[7454]: I0319 12:01:57.325344 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/12d71593-ee54-4321-bc0f-a24261873bd1-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"12d71593-ee54-4321-bc0f-a24261873bd1\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 19 12:01:57.341973 master-0 kubenswrapper[7454]: I0319 12:01:57.341920 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12d71593-ee54-4321-bc0f-a24261873bd1-kube-api-access\") pod \"installer-4-master-0\" (UID: \"12d71593-ee54-4321-bc0f-a24261873bd1\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 19 12:01:57.469867 master-0 kubenswrapper[7454]: I0319 12:01:57.469751 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Mar 19 12:01:57.660468 master-0 kubenswrapper[7454]: I0319 12:01:57.660404 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:01:57.660468 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:01:57.660468 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:01:57.660468 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:01:57.660831 master-0 kubenswrapper[7454]: I0319 12:01:57.660480 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:01:57.897417 master-0 kubenswrapper[7454]: I0319 12:01:57.897367 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"]
Mar 19 12:01:58.659909 master-0 kubenswrapper[7454]: I0319 12:01:58.659844 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:01:58.659909 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:01:58.659909 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:01:58.659909 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:01:58.660287 master-0 kubenswrapper[7454]: I0319 12:01:58.659967 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:01:58.868505 master-0 kubenswrapper[7454]: I0319 12:01:58.868458 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"12d71593-ee54-4321-bc0f-a24261873bd1","Type":"ContainerStarted","Data":"bce063a1f339b0aa356b146565a1aad286cac9d49e6c2b9606f7a6d9709c3159"}
Mar 19 12:01:58.868910 master-0 kubenswrapper[7454]: I0319 12:01:58.868887 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"12d71593-ee54-4321-bc0f-a24261873bd1","Type":"ContainerStarted","Data":"ed283c061d1fd79e9b8f04b4ebc51756f0469a7d30532249627ffce7936f190b"}
Mar 19 12:01:58.889706 master-0 kubenswrapper[7454]: I0319 12:01:58.889574 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-4-master-0" podStartSLOduration=1.889547874 podStartE2EDuration="1.889547874s" podCreationTimestamp="2026-03-19 12:01:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:01:58.883780077 +0000 UTC m=+488.514246010" watchObservedRunningTime="2026-03-19 12:01:58.889547874 +0000 UTC m=+488.520013787"
Mar 19 12:01:59.659321 master-0 kubenswrapper[7454]: I0319 12:01:59.659281 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:01:59.659321 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:01:59.659321 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:01:59.659321 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:01:59.660064 master-0 kubenswrapper[7454]: I0319 12:01:59.659971 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:02:00.659102 master-0 kubenswrapper[7454]: I0319 12:02:00.659048 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:02:00.659102 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:02:00.659102 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:02:00.659102 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:02:00.659708 master-0 kubenswrapper[7454]: I0319 12:02:00.659119 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:02:01.658677 master-0 kubenswrapper[7454]: I0319 12:02:01.658625 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:02:01.658677 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:02:01.658677 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:02:01.658677 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:02:01.658998 master-0 kubenswrapper[7454]: I0319 12:02:01.658698 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:02:02.658816 master-0 kubenswrapper[7454]: I0319 12:02:02.658758 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:02:02.658816 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:02:02.658816 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:02:02.658816 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:02:02.659364 master-0 kubenswrapper[7454]: I0319 12:02:02.658843 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:02:02.941081 master-0 kubenswrapper[7454]: I0319 12:02:02.938934 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-2-master-0"]
Mar 19 12:02:02.941081 master-0 kubenswrapper[7454]: I0319 12:02:02.940543 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Mar 19 12:02:02.943876 master-0 kubenswrapper[7454]: I0319 12:02:02.943783 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt"
Mar 19 12:02:02.947874 master-0 kubenswrapper[7454]: I0319 12:02:02.944252 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-p5mjz"
Mar 19 12:02:02.957863 master-0 kubenswrapper[7454]: I0319 12:02:02.955741 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"]
Mar 19 12:02:03.006909 master-0 kubenswrapper[7454]: I0319 12:02:03.006769 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b48817c-05cd-430b-9b1f-9cc037f1ca77-kube-api-access\") pod \"installer-2-master-0\" (UID: \"8b48817c-05cd-430b-9b1f-9cc037f1ca77\") " pod="openshift-etcd/installer-2-master-0"
Mar 19 12:02:03.007114 master-0 kubenswrapper[7454]: I0319 12:02:03.006961 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8b48817c-05cd-430b-9b1f-9cc037f1ca77-var-lock\") pod \"installer-2-master-0\" (UID: \"8b48817c-05cd-430b-9b1f-9cc037f1ca77\") " pod="openshift-etcd/installer-2-master-0"
Mar 19 12:02:03.007114 master-0 kubenswrapper[7454]: I0319 12:02:03.006996 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b48817c-05cd-430b-9b1f-9cc037f1ca77-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"8b48817c-05cd-430b-9b1f-9cc037f1ca77\") " pod="openshift-etcd/installer-2-master-0"
Mar 19 12:02:03.108524 master-0 kubenswrapper[7454]: I0319 12:02:03.108372 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8b48817c-05cd-430b-9b1f-9cc037f1ca77-var-lock\") pod \"installer-2-master-0\" (UID: \"8b48817c-05cd-430b-9b1f-9cc037f1ca77\") " pod="openshift-etcd/installer-2-master-0"
Mar 19 12:02:03.108524 master-0 kubenswrapper[7454]: I0319 12:02:03.108532 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b48817c-05cd-430b-9b1f-9cc037f1ca77-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"8b48817c-05cd-430b-9b1f-9cc037f1ca77\") " pod="openshift-etcd/installer-2-master-0"
Mar 19 12:02:03.108965 master-0 kubenswrapper[7454]: I0319 12:02:03.108482 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8b48817c-05cd-430b-9b1f-9cc037f1ca77-var-lock\") pod \"installer-2-master-0\" (UID: \"8b48817c-05cd-430b-9b1f-9cc037f1ca77\") " pod="openshift-etcd/installer-2-master-0"
Mar 19 12:02:03.108965 master-0 kubenswrapper[7454]: I0319 12:02:03.108675 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b48817c-05cd-430b-9b1f-9cc037f1ca77-kube-api-access\") pod \"installer-2-master-0\" (UID: \"8b48817c-05cd-430b-9b1f-9cc037f1ca77\") " pod="openshift-etcd/installer-2-master-0"
Mar 19 12:02:03.108965 master-0 kubenswrapper[7454]: I0319 12:02:03.108681 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b48817c-05cd-430b-9b1f-9cc037f1ca77-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"8b48817c-05cd-430b-9b1f-9cc037f1ca77\") " pod="openshift-etcd/installer-2-master-0"
Mar 19 12:02:03.133137 master-0 kubenswrapper[7454]: I0319 12:02:03.133079 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b48817c-05cd-430b-9b1f-9cc037f1ca77-kube-api-access\") pod \"installer-2-master-0\" (UID: \"8b48817c-05cd-430b-9b1f-9cc037f1ca77\") " pod="openshift-etcd/installer-2-master-0"
Mar 19 12:02:03.277681 master-0 kubenswrapper[7454]: I0319 12:02:03.277597 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Mar 19 12:02:03.659346 master-0 kubenswrapper[7454]: I0319 12:02:03.659237 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:02:03.659346 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:02:03.659346 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:02:03.659346 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:02:03.659346 master-0 kubenswrapper[7454]: I0319 12:02:03.659302 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:02:03.710323 master-0 kubenswrapper[7454]: I0319 12:02:03.710286 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"]
Mar 19 12:02:03.899038 master-0 kubenswrapper[7454]: I0319 12:02:03.898995 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"8b48817c-05cd-430b-9b1f-9cc037f1ca77","Type":"ContainerStarted","Data":"7220eeff67efce450283cc72bc4e2acf7316ae81a06fc10749f8bb6f974b934b"}
Mar 19 12:02:04.658886 master-0 kubenswrapper[7454]: I0319 12:02:04.658831 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:02:04.658886 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:02:04.658886 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:02:04.658886 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:02:04.659299 master-0 kubenswrapper[7454]: I0319 12:02:04.658905 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:02:04.908642 master-0 kubenswrapper[7454]: I0319 12:02:04.908570 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"8b48817c-05cd-430b-9b1f-9cc037f1ca77","Type":"ContainerStarted","Data":"4ffdbe686ec312f51e0f69bfddfcf8ddbe9d68d7435e9ea8d330dd01862adb85"}
Mar 19 12:02:04.932198 master-0 kubenswrapper[7454]: I0319 12:02:04.932032 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-2-master-0" podStartSLOduration=2.9320106089999998 podStartE2EDuration="2.932010609s" podCreationTimestamp="2026-03-19 12:02:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:02:04.92860897 +0000 UTC m=+494.559074893" watchObservedRunningTime="2026-03-19 12:02:04.932010609 +0000 UTC m=+494.562476532"
Mar 19 12:02:05.658395 master-0 kubenswrapper[7454]: I0319 12:02:05.658331 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:02:05.658395 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:02:05.658395 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:02:05.658395 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:02:05.658395 master-0 kubenswrapper[7454]: I0319 12:02:05.658389 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:02:06.660020 master-0 kubenswrapper[7454]: I0319 12:02:06.659946 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:02:06.660020 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:02:06.660020 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:02:06.660020 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:02:06.660899 master-0 kubenswrapper[7454]: I0319 12:02:06.660850 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:02:06.673166 master-0 kubenswrapper[7454]: E0319 12:02:06.673105 7454 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="94601aeb96019578fbdad1319a920055ffb76524014eeb01d01d90f9cc97bee4" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 19 12:02:06.674549 master-0 kubenswrapper[7454]: E0319 12:02:06.674469 7454 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="94601aeb96019578fbdad1319a920055ffb76524014eeb01d01d90f9cc97bee4" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 19 12:02:06.676443 master-0 kubenswrapper[7454]: E0319 12:02:06.676190 7454 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="94601aeb96019578fbdad1319a920055ffb76524014eeb01d01d90f9cc97bee4" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 19 12:02:06.676443 master-0 kubenswrapper[7454]: E0319 12:02:06.676326 7454 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-2xxmn" podUID="29e11ec4-f565-4b35-8f1e-0dddb8473b05" containerName="kube-multus-additional-cni-plugins"
Mar 19 12:02:07.659066 master-0 kubenswrapper[7454]: I0319 12:02:07.659006 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:02:07.659066 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:02:07.659066 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:02:07.659066 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:02:07.659484 master-0 kubenswrapper[7454]: I0319 12:02:07.659079 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:02:08.659174 master-0 kubenswrapper[7454]: I0319 12:02:08.659092 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:02:08.659174 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:02:08.659174 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:02:08.659174 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:02:08.659720 master-0 kubenswrapper[7454]: I0319 12:02:08.659180 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:02:09.659333 master-0 kubenswrapper[7454]: I0319 12:02:09.659288 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:02:09.659333 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:02:09.659333 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:02:09.659333 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:02:09.660023 master-0 kubenswrapper[7454]: I0319 12:02:09.659991 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:02:09.872266 master-0 kubenswrapper[7454]: I0319 12:02:09.872212 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-2xxmn_29e11ec4-f565-4b35-8f1e-0dddb8473b05/kube-multus-additional-cni-plugins/0.log"
Mar 19 12:02:09.872509 master-0 kubenswrapper[7454]: I0319 12:02:09.872302 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-2xxmn"
Mar 19 12:02:09.937601 master-0 kubenswrapper[7454]: I0319 12:02:09.937444 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-2xxmn_29e11ec4-f565-4b35-8f1e-0dddb8473b05/kube-multus-additional-cni-plugins/0.log"
Mar 19 12:02:09.937601 master-0 kubenswrapper[7454]: I0319 12:02:09.937515 7454 generic.go:334] "Generic (PLEG): container finished" podID="29e11ec4-f565-4b35-8f1e-0dddb8473b05" containerID="94601aeb96019578fbdad1319a920055ffb76524014eeb01d01d90f9cc97bee4" exitCode=137
Mar 19 12:02:09.937601 master-0 kubenswrapper[7454]: I0319 12:02:09.937547 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-2xxmn" event={"ID":"29e11ec4-f565-4b35-8f1e-0dddb8473b05","Type":"ContainerDied","Data":"94601aeb96019578fbdad1319a920055ffb76524014eeb01d01d90f9cc97bee4"}
Mar 19 12:02:09.937601 master-0 kubenswrapper[7454]: I0319 12:02:09.937597 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-2xxmn" event={"ID":"29e11ec4-f565-4b35-8f1e-0dddb8473b05","Type":"ContainerDied","Data":"7e727db50b790d5f4dade75045fa787af146864b65041c118cacc4ddf2f13bcc"}
Mar 19 12:02:09.937961 master-0 kubenswrapper[7454]: I0319 12:02:09.937615 7454 scope.go:117] "RemoveContainer" containerID="94601aeb96019578fbdad1319a920055ffb76524014eeb01d01d90f9cc97bee4"
Mar 19 12:02:09.937961 master-0 kubenswrapper[7454]: I0319 12:02:09.937615 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-2xxmn"
Mar 19 12:02:09.957099 master-0 kubenswrapper[7454]: I0319 12:02:09.955094 7454 scope.go:117] "RemoveContainer" containerID="94601aeb96019578fbdad1319a920055ffb76524014eeb01d01d90f9cc97bee4"
Mar 19 12:02:09.957099 master-0 kubenswrapper[7454]: E0319 12:02:09.955488 7454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94601aeb96019578fbdad1319a920055ffb76524014eeb01d01d90f9cc97bee4\": container with ID starting with 94601aeb96019578fbdad1319a920055ffb76524014eeb01d01d90f9cc97bee4 not found: ID does not exist" containerID="94601aeb96019578fbdad1319a920055ffb76524014eeb01d01d90f9cc97bee4"
Mar 19 12:02:09.957099 master-0 kubenswrapper[7454]: I0319 12:02:09.955536 7454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94601aeb96019578fbdad1319a920055ffb76524014eeb01d01d90f9cc97bee4"} err="failed to get container status \"94601aeb96019578fbdad1319a920055ffb76524014eeb01d01d90f9cc97bee4\": rpc error: code = NotFound desc = could not find container \"94601aeb96019578fbdad1319a920055ffb76524014eeb01d01d90f9cc97bee4\": container with ID starting with 94601aeb96019578fbdad1319a920055ffb76524014eeb01d01d90f9cc97bee4 not found: ID does not exist"
Mar 19 12:02:10.001472 master-0 kubenswrapper[7454]: I0319 12:02:10.001408 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6bj8\" (UniqueName: \"kubernetes.io/projected/29e11ec4-f565-4b35-8f1e-0dddb8473b05-kube-api-access-h6bj8\") pod \"29e11ec4-f565-4b35-8f1e-0dddb8473b05\" (UID: \"29e11ec4-f565-4b35-8f1e-0dddb8473b05\") "
Mar 19 12:02:10.001709 master-0 kubenswrapper[7454]: I0319 12:02:10.001515 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/29e11ec4-f565-4b35-8f1e-0dddb8473b05-cni-sysctl-allowlist\") pod \"29e11ec4-f565-4b35-8f1e-0dddb8473b05\" (UID: \"29e11ec4-f565-4b35-8f1e-0dddb8473b05\") "
Mar 19 12:02:10.001709 master-0 kubenswrapper[7454]: I0319 12:02:10.001553 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/29e11ec4-f565-4b35-8f1e-0dddb8473b05-tuning-conf-dir\") pod \"29e11ec4-f565-4b35-8f1e-0dddb8473b05\" (UID: \"29e11ec4-f565-4b35-8f1e-0dddb8473b05\") "
Mar 19 12:02:10.001709 master-0 kubenswrapper[7454]: I0319 12:02:10.001583 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/29e11ec4-f565-4b35-8f1e-0dddb8473b05-ready\") pod \"29e11ec4-f565-4b35-8f1e-0dddb8473b05\" (UID: \"29e11ec4-f565-4b35-8f1e-0dddb8473b05\") "
Mar 19 12:02:10.002095 master-0 kubenswrapper[7454]: I0319 12:02:10.002037 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29e11ec4-f565-4b35-8f1e-0dddb8473b05-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "29e11ec4-f565-4b35-8f1e-0dddb8473b05" (UID: "29e11ec4-f565-4b35-8f1e-0dddb8473b05"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 12:02:10.002501 master-0 kubenswrapper[7454]: I0319 12:02:10.002277 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29e11ec4-f565-4b35-8f1e-0dddb8473b05-ready" (OuterVolumeSpecName: "ready") pod "29e11ec4-f565-4b35-8f1e-0dddb8473b05" (UID: "29e11ec4-f565-4b35-8f1e-0dddb8473b05"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 19 12:02:10.002501 master-0 kubenswrapper[7454]: I0319 12:02:10.002408 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29e11ec4-f565-4b35-8f1e-0dddb8473b05-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "29e11ec4-f565-4b35-8f1e-0dddb8473b05" (UID: "29e11ec4-f565-4b35-8f1e-0dddb8473b05"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 12:02:10.007783 master-0 kubenswrapper[7454]: I0319 12:02:10.007155 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29e11ec4-f565-4b35-8f1e-0dddb8473b05-kube-api-access-h6bj8" (OuterVolumeSpecName: "kube-api-access-h6bj8") pod "29e11ec4-f565-4b35-8f1e-0dddb8473b05" (UID: "29e11ec4-f565-4b35-8f1e-0dddb8473b05"). InnerVolumeSpecName "kube-api-access-h6bj8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 12:02:10.103146 master-0 kubenswrapper[7454]: I0319 12:02:10.103073 7454 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/29e11ec4-f565-4b35-8f1e-0dddb8473b05-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\""
Mar 19 12:02:10.103146 master-0 kubenswrapper[7454]: I0319 12:02:10.103117 7454 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/29e11ec4-f565-4b35-8f1e-0dddb8473b05-tuning-conf-dir\") on node \"master-0\" DevicePath \"\""
Mar 19 12:02:10.103146 master-0 kubenswrapper[7454]: I0319 12:02:10.103130 7454 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/29e11ec4-f565-4b35-8f1e-0dddb8473b05-ready\") on node \"master-0\" DevicePath \"\""
Mar 19 12:02:10.103146 master-0 kubenswrapper[7454]: I0319 12:02:10.103142 7454 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6bj8\" (UniqueName: \"kubernetes.io/projected/29e11ec4-f565-4b35-8f1e-0dddb8473b05-kube-api-access-h6bj8\") on node \"master-0\" DevicePath \"\""
Mar 19 12:02:10.276615 master-0 kubenswrapper[7454]: I0319 12:02:10.276558 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-2xxmn"]
Mar 19 12:02:10.281012 master-0 kubenswrapper[7454]: I0319 12:02:10.280953 7454 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-2xxmn"]
Mar 19 12:02:10.685626 master-0 kubenswrapper[7454]: I0319 12:02:10.685494 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:02:10.685626 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:02:10.685626 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:02:10.685626 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:02:10.685626 master-0 kubenswrapper[7454]: I0319 12:02:10.685581 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:02:10.698064 master-0 kubenswrapper[7454]: I0319 12:02:10.697995 7454 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29e11ec4-f565-4b35-8f1e-0dddb8473b05" path="/var/lib/kubelet/pods/29e11ec4-f565-4b35-8f1e-0dddb8473b05/volumes"
Mar 19 12:02:11.659843 master-0 kubenswrapper[7454]: I0319 12:02:11.659773 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:02:11.659843 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:02:11.659843 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:02:11.659843 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:02:11.660193 master-0 kubenswrapper[7454]: I0319 12:02:11.659851 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:02:12.658610 master-0 kubenswrapper[7454]: I0319 12:02:12.658524 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:02:12.658610 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:02:12.658610 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:02:12.658610 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:02:12.658610 master-0 kubenswrapper[7454]: I0319 12:02:12.658602 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:02:13.662354 master-0 kubenswrapper[7454]: I0319 12:02:13.662290 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:02:13.662354 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:02:13.662354 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:02:13.662354 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:02:13.663416 master-0 kubenswrapper[7454]: I0319 12:02:13.662376 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:02:14.658638 master-0 kubenswrapper[7454]: I0319 12:02:14.658578 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:02:14.658638 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:02:14.658638 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:02:14.658638 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:02:14.659263 master-0 kubenswrapper[7454]: I0319 12:02:14.659220 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:02:15.658885 master-0 kubenswrapper[7454]: I0319 12:02:15.658826 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:02:15.658885 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:02:15.658885 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:02:15.658885 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:02:15.659487 master-0 kubenswrapper[7454]: I0319 12:02:15.658897 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:02:16.659925 master-0 kubenswrapper[7454]: I0319 12:02:16.659825 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:02:16.659925 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:02:16.659925 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:02:16.659925 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:02:16.659925 master-0 kubenswrapper[7454]: I0319 12:02:16.659885 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:02:17.659275 master-0 kubenswrapper[7454]: I0319 12:02:17.659202 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:02:17.659275 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:02:17.659275 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:02:17.659275 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:02:17.659275 master-0 kubenswrapper[7454]: I0319 12:02:17.659272 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:02:18.010244 master-0 kubenswrapper[7454]: I0319 12:02:18.010184 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5dbbb8b86f-fz8cg_806a4c30-7b93-4430-86da-f9e1f4f2d206/multus-admission-controller/0.log"
Mar 19 12:02:18.010754 master-0 kubenswrapper[7454]: I0319 12:02:18.010263 7454 generic.go:334] "Generic (PLEG): container finished" podID="806a4c30-7b93-4430-86da-f9e1f4f2d206" containerID="32946350fbb40f17e1bf84fa3bef60ee89587d671dd1dca0cb3ac265a9a51704" exitCode=137
Mar 19 12:02:18.010754 master-0 kubenswrapper[7454]: I0319 12:02:18.010308 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg" event={"ID":"806a4c30-7b93-4430-86da-f9e1f4f2d206","Type":"ContainerDied","Data":"32946350fbb40f17e1bf84fa3bef60ee89587d671dd1dca0cb3ac265a9a51704"}
Mar 19 12:02:18.660261 master-0 kubenswrapper[7454]: I0319 12:02:18.660151 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:02:18.660261 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:02:18.660261 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:02:18.660261 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:02:18.660536 master-0 kubenswrapper[7454]: I0319 12:02:18.660295 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:02:18.665366 master-0 kubenswrapper[7454]: I0319 12:02:18.665315 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5dbbb8b86f-fz8cg_806a4c30-7b93-4430-86da-f9e1f4f2d206/multus-admission-controller/0.log"
Mar 19 12:02:18.665501 master-0 kubenswrapper[7454]: I0319 12:02:18.665407 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg"
Mar 19 12:02:18.814926 master-0 kubenswrapper[7454]: I0319 12:02:18.814855 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfl29\" (UniqueName: \"kubernetes.io/projected/806a4c30-7b93-4430-86da-f9e1f4f2d206-kube-api-access-dfl29\") pod \"806a4c30-7b93-4430-86da-f9e1f4f2d206\" (UID: \"806a4c30-7b93-4430-86da-f9e1f4f2d206\") "
Mar 19 12:02:18.815161 master-0 kubenswrapper[7454]: I0319 12:02:18.815058 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs\") pod \"806a4c30-7b93-4430-86da-f9e1f4f2d206\" (UID: \"806a4c30-7b93-4430-86da-f9e1f4f2d206\") "
Mar 19 12:02:18.818249 master-0 kubenswrapper[7454]: I0319 12:02:18.818197 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/806a4c30-7b93-4430-86da-f9e1f4f2d206-kube-api-access-dfl29" (OuterVolumeSpecName: "kube-api-access-dfl29") pod "806a4c30-7b93-4430-86da-f9e1f4f2d206" (UID: "806a4c30-7b93-4430-86da-f9e1f4f2d206"). InnerVolumeSpecName "kube-api-access-dfl29". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 12:02:18.818746 master-0 kubenswrapper[7454]: I0319 12:02:18.818706 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "806a4c30-7b93-4430-86da-f9e1f4f2d206" (UID: "806a4c30-7b93-4430-86da-f9e1f4f2d206"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 12:02:18.917018 master-0 kubenswrapper[7454]: I0319 12:02:18.916910 7454 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/806a4c30-7b93-4430-86da-f9e1f4f2d206-webhook-certs\") on node \"master-0\" DevicePath \"\""
Mar 19 12:02:18.917018 master-0 kubenswrapper[7454]: I0319 12:02:18.916951 7454 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfl29\" (UniqueName: \"kubernetes.io/projected/806a4c30-7b93-4430-86da-f9e1f4f2d206-kube-api-access-dfl29\") on node \"master-0\" DevicePath \"\""
Mar 19 12:02:19.018823 master-0 kubenswrapper[7454]: I0319 12:02:19.018764 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5dbbb8b86f-fz8cg_806a4c30-7b93-4430-86da-f9e1f4f2d206/multus-admission-controller/0.log"
Mar 19 12:02:19.019365 master-0 kubenswrapper[7454]: I0319 12:02:19.018877 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg" event={"ID":"806a4c30-7b93-4430-86da-f9e1f4f2d206","Type":"ContainerDied","Data":"eb304defbff285339483036ba9b4adeeac46981b039317b57ed5349a2d1f0ae3"}
Mar 19 12:02:19.019365 master-0 kubenswrapper[7454]: I0319 12:02:19.018924 7454 scope.go:117] "RemoveContainer" containerID="5af120083ccfa19775f3cfbcd29e655aebb641b4ecf435859e1f29291e7340f7"
Mar 19 12:02:19.019365 master-0 kubenswrapper[7454]: I0319 12:02:19.018933 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg"
Mar 19 12:02:19.033881 master-0 kubenswrapper[7454]: I0319 12:02:19.033720 7454 scope.go:117] "RemoveContainer" containerID="32946350fbb40f17e1bf84fa3bef60ee89587d671dd1dca0cb3ac265a9a51704"
Mar 19 12:02:19.073498 master-0 kubenswrapper[7454]: I0319 12:02:19.073438 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg"]
Mar 19 12:02:19.078476 master-0 kubenswrapper[7454]: I0319 12:02:19.078329 7454 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-fz8cg"]
Mar 19 12:02:19.658645 master-0 kubenswrapper[7454]: I0319 12:02:19.658578 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:02:19.658645 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:02:19.658645 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:02:19.658645 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:02:19.658645 master-0 kubenswrapper[7454]: I0319 12:02:19.658639 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:02:20.642930 master-0 kubenswrapper[7454]: I0319 12:02:20.642864 7454 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="806a4c30-7b93-4430-86da-f9e1f4f2d206" path="/var/lib/kubelet/pods/806a4c30-7b93-4430-86da-f9e1f4f2d206/volumes"
Mar 19 12:02:20.658993 master-0 kubenswrapper[7454]: I0319 12:02:20.658920 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:02:20.658993 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:02:20.658993 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:02:20.658993 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:02:20.659472 master-0 kubenswrapper[7454]: I0319 12:02:20.658996 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:02:21.659925 master-0 kubenswrapper[7454]: I0319 12:02:21.659849 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:02:21.659925 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:02:21.659925 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:02:21.659925 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:02:21.660462 master-0 kubenswrapper[7454]: I0319 12:02:21.659933 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:02:22.658724 master-0 kubenswrapper[7454]: I0319 12:02:22.658633 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:02:22.658724 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:02:22.658724 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:02:22.658724 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:02:22.659098 master-0 kubenswrapper[7454]: I0319 12:02:22.658728 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:02:23.660154 master-0 kubenswrapper[7454]: I0319 12:02:23.660068 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:02:23.660154 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:02:23.660154 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:02:23.660154 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:02:23.661194 master-0 kubenswrapper[7454]: I0319 12:02:23.660188 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:02:24.659965 master-0 kubenswrapper[7454]: I0319 12:02:24.659862 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:02:24.659965 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:02:24.659965 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:02:24.659965 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:02:24.661312 master-0 kubenswrapper[7454]: I0319 12:02:24.659967 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:02:25.659935 master-0 kubenswrapper[7454]: I0319 12:02:25.659863 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:02:25.659935 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:02:25.659935 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:02:25.659935 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:02:25.660264 master-0 kubenswrapper[7454]: I0319 12:02:25.659966 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:02:26.145499 master-0 kubenswrapper[7454]: I0319 12:02:26.145431 7454 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Mar 19 12:02:26.146116 master-0 kubenswrapper[7454]: I0319 12:02:26.145766 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller" containerID="cri-o://37d48d754396bd1bc2527510939d9c6fc67f46e6581c207179d0cd71cd638e7d" gracePeriod=30
Mar 19 12:02:26.146116 master-0 kubenswrapper[7454]: I0319 12:02:26.145857 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" containerID="cri-o://d27eb2ebaac292762ef9b813ca8ecd2562af748c7508755cdddd461f45526f66" gracePeriod=30
Mar 19 12:02:26.146641 master-0 kubenswrapper[7454]: I0319 12:02:26.146452 7454 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 19 12:02:26.146762 master-0 kubenswrapper[7454]: E0319 12:02:26.146730 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller"
Mar 19 12:02:26.146762 master-0 kubenswrapper[7454]: I0319 12:02:26.146755 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller"
Mar 19 12:02:26.146934 master-0 kubenswrapper[7454]: E0319 12:02:26.146770 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 19 12:02:26.146934 master-0 kubenswrapper[7454]: I0319 12:02:26.146778 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 19 12:02:26.146934 master-0 kubenswrapper[7454]: E0319 12:02:26.146789 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="806a4c30-7b93-4430-86da-f9e1f4f2d206" containerName="multus-admission-controller"
Mar 19 12:02:26.146934 master-0 kubenswrapper[7454]: I0319 12:02:26.146817 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="806a4c30-7b93-4430-86da-f9e1f4f2d206" containerName="multus-admission-controller"
Mar 19 12:02:26.146934 master-0 kubenswrapper[7454]: E0319 12:02:26.146827 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 19 12:02:26.146934 master-0 kubenswrapper[7454]: I0319 12:02:26.146834 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 19 12:02:26.146934 master-0 kubenswrapper[7454]: E0319 12:02:26.146851 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 19 12:02:26.146934 master-0 kubenswrapper[7454]: I0319 12:02:26.146859 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 19 12:02:26.146934 master-0 kubenswrapper[7454]: E0319 12:02:26.146870 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29e11ec4-f565-4b35-8f1e-0dddb8473b05" containerName="kube-multus-additional-cni-plugins"
Mar 19 12:02:26.146934 master-0 kubenswrapper[7454]: I0319 12:02:26.146879 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="29e11ec4-f565-4b35-8f1e-0dddb8473b05" containerName="kube-multus-additional-cni-plugins"
Mar 19 12:02:26.146934 master-0 kubenswrapper[7454]: E0319 12:02:26.146896 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="806a4c30-7b93-4430-86da-f9e1f4f2d206" containerName="kube-rbac-proxy"
Mar 19 12:02:26.146934 master-0 kubenswrapper[7454]: I0319 12:02:26.146906 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="806a4c30-7b93-4430-86da-f9e1f4f2d206" containerName="kube-rbac-proxy"
Mar 19 12:02:26.147324 master-0 kubenswrapper[7454]: I0319 12:02:26.147045 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 19 12:02:26.147324 master-0 kubenswrapper[7454]: I0319 12:02:26.147060 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller"
Mar 19 12:02:26.147324 master-0 kubenswrapper[7454]: I0319 12:02:26.147086 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 19 12:02:26.147324 master-0 kubenswrapper[7454]: I0319 12:02:26.147099 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="806a4c30-7b93-4430-86da-f9e1f4f2d206" containerName="multus-admission-controller"
Mar 19 12:02:26.147324 master-0 kubenswrapper[7454]: I0319 12:02:26.147110 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 19 12:02:26.147324 master-0 kubenswrapper[7454]: I0319
12:02:26.147118 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="806a4c30-7b93-4430-86da-f9e1f4f2d206" containerName="kube-rbac-proxy" Mar 19 12:02:26.147324 master-0 kubenswrapper[7454]: I0319 12:02:26.147131 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="29e11ec4-f565-4b35-8f1e-0dddb8473b05" containerName="kube-multus-additional-cni-plugins" Mar 19 12:02:26.147324 master-0 kubenswrapper[7454]: E0319 12:02:26.147269 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 19 12:02:26.147324 master-0 kubenswrapper[7454]: I0319 12:02:26.147280 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 19 12:02:26.147565 master-0 kubenswrapper[7454]: I0319 12:02:26.147398 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 19 12:02:26.148463 master-0 kubenswrapper[7454]: I0319 12:02:26.148433 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:02:26.308320 master-0 kubenswrapper[7454]: I0319 12:02:26.308287 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 12:02:26.322317 master-0 kubenswrapper[7454]: I0319 12:02:26.322235 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") " Mar 19 12:02:26.322317 master-0 kubenswrapper[7454]: I0319 12:02:26.322321 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") " Mar 19 12:02:26.322604 master-0 kubenswrapper[7454]: I0319 12:02:26.322355 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") " Mar 19 12:02:26.322604 master-0 kubenswrapper[7454]: I0319 12:02:26.322436 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets" (OuterVolumeSpecName: "secrets") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "secrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:02:26.322604 master-0 kubenswrapper[7454]: I0319 12:02:26.322451 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") " Mar 19 12:02:26.322604 master-0 kubenswrapper[7454]: I0319 12:02:26.322501 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") " Mar 19 12:02:26.322828 master-0 kubenswrapper[7454]: I0319 12:02:26.322502 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:02:26.322828 master-0 kubenswrapper[7454]: I0319 12:02:26.322520 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config" (OuterVolumeSpecName: "config") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:02:26.322828 master-0 kubenswrapper[7454]: I0319 12:02:26.322542 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:02:26.322828 master-0 kubenswrapper[7454]: I0319 12:02:26.322569 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs" (OuterVolumeSpecName: "logs") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:02:26.322828 master-0 kubenswrapper[7454]: I0319 12:02:26.322687 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ed7034eee202d25f8fdd5bf58084d919-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"ed7034eee202d25f8fdd5bf58084d919\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:02:26.322828 master-0 kubenswrapper[7454]: I0319 12:02:26.322725 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ed7034eee202d25f8fdd5bf58084d919-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"ed7034eee202d25f8fdd5bf58084d919\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:02:26.326282 master-0 kubenswrapper[7454]: I0319 12:02:26.324396 7454 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") on node \"master-0\" DevicePath \"\"" Mar 19 12:02:26.326282 master-0 kubenswrapper[7454]: I0319 12:02:26.324419 7454 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") on node \"master-0\" DevicePath \"\"" Mar 19 12:02:26.326282 master-0 kubenswrapper[7454]: I0319 12:02:26.324429 7454 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") on node \"master-0\" DevicePath \"\"" Mar 19 12:02:26.326282 master-0 kubenswrapper[7454]: I0319 12:02:26.324440 7454 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") on node \"master-0\" DevicePath \"\"" Mar 19 12:02:26.326282 master-0 kubenswrapper[7454]: I0319 12:02:26.324453 7454 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\"" Mar 19 12:02:26.332722 master-0 kubenswrapper[7454]: I0319 12:02:26.332668 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 19 12:02:26.359251 master-0 kubenswrapper[7454]: I0319 12:02:26.359182 7454 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="785c8576-b9c1-4385-a130-48598c8e3a64" Mar 19 12:02:26.425431 master-0 kubenswrapper[7454]: I0319 12:02:26.425099 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ed7034eee202d25f8fdd5bf58084d919-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"ed7034eee202d25f8fdd5bf58084d919\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:02:26.425431 master-0 kubenswrapper[7454]: I0319 12:02:26.425142 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ed7034eee202d25f8fdd5bf58084d919-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"ed7034eee202d25f8fdd5bf58084d919\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:02:26.425431 master-0 kubenswrapper[7454]: I0319 12:02:26.425404 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ed7034eee202d25f8fdd5bf58084d919-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"ed7034eee202d25f8fdd5bf58084d919\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:02:26.425431 master-0 kubenswrapper[7454]: I0319 12:02:26.425397 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ed7034eee202d25f8fdd5bf58084d919-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"ed7034eee202d25f8fdd5bf58084d919\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:02:26.633304 master-0 kubenswrapper[7454]: I0319 12:02:26.633235 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:02:26.641723 master-0 kubenswrapper[7454]: I0319 12:02:26.641675 7454 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46f265536aba6292ead501bc9b49f327" path="/var/lib/kubelet/pods/46f265536aba6292ead501bc9b49f327/volumes" Mar 19 12:02:26.642188 master-0 kubenswrapper[7454]: I0319 12:02:26.642166 7454 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="" Mar 19 12:02:26.655620 master-0 kubenswrapper[7454]: W0319 12:02:26.655578 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded7034eee202d25f8fdd5bf58084d919.slice/crio-1c1162902cf97a8b88fecc587e4927a4fa7874565b759344e7b07063df911ac6 WatchSource:0}: Error finding container 1c1162902cf97a8b88fecc587e4927a4fa7874565b759344e7b07063df911ac6: Status 404 returned error can't find the container with id 1c1162902cf97a8b88fecc587e4927a4fa7874565b759344e7b07063df911ac6 Mar 19 12:02:26.659197 master-0 kubenswrapper[7454]: I0319 12:02:26.659160 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:02:26.659197 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:02:26.659197 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:02:26.659197 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:02:26.659612 master-0 kubenswrapper[7454]: I0319 12:02:26.659209 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:02:26.668853 master-0 kubenswrapper[7454]: I0319 12:02:26.668755 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 19 12:02:26.668853 master-0 kubenswrapper[7454]: I0319 12:02:26.668788 7454 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="785c8576-b9c1-4385-a130-48598c8e3a64" Mar 19 12:02:26.676887 master-0 kubenswrapper[7454]: I0319 12:02:26.676759 7454 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 19 12:02:26.677120 master-0 kubenswrapper[7454]: I0319 12:02:26.677091 7454 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="785c8576-b9c1-4385-a130-48598c8e3a64" Mar 19 12:02:27.078391 master-0 kubenswrapper[7454]: I0319 12:02:27.078060 7454 generic.go:334] "Generic (PLEG): container finished" podID="b425669d-6f80-4a2b-b2f2-5c6766654c6c" containerID="4f12a6a6377eb63e234161ff939d40e45bfc8d6ae4fa1554dca2cf62421fb52b" exitCode=0 Mar 19 12:02:27.078391 master-0 kubenswrapper[7454]: I0319 12:02:27.078162 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"b425669d-6f80-4a2b-b2f2-5c6766654c6c","Type":"ContainerDied","Data":"4f12a6a6377eb63e234161ff939d40e45bfc8d6ae4fa1554dca2cf62421fb52b"} Mar 19 12:02:27.081398 master-0 kubenswrapper[7454]: I0319 12:02:27.081362 7454 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="d27eb2ebaac292762ef9b813ca8ecd2562af748c7508755cdddd461f45526f66" exitCode=0 Mar 19 12:02:27.081398 master-0 kubenswrapper[7454]: I0319 12:02:27.081385 7454 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="37d48d754396bd1bc2527510939d9c6fc67f46e6581c207179d0cd71cd638e7d" exitCode=0 Mar 19 12:02:27.081520 master-0 kubenswrapper[7454]: I0319 12:02:27.081440 7454 scope.go:117] "RemoveContainer" containerID="d27eb2ebaac292762ef9b813ca8ecd2562af748c7508755cdddd461f45526f66" Mar 19 12:02:27.081568 master-0 kubenswrapper[7454]: I0319 12:02:27.081542 7454 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 12:02:27.088036 master-0 kubenswrapper[7454]: I0319 12:02:27.084503 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"ed7034eee202d25f8fdd5bf58084d919","Type":"ContainerStarted","Data":"1c1162902cf97a8b88fecc587e4927a4fa7874565b759344e7b07063df911ac6"} Mar 19 12:02:27.122346 master-0 kubenswrapper[7454]: I0319 12:02:27.122288 7454 scope.go:117] "RemoveContainer" containerID="84d3d820222fb615bbd44a77bcef4de4c96b78f545a4bc5490f5fa77f0e958e7" Mar 19 12:02:27.143210 master-0 kubenswrapper[7454]: I0319 12:02:27.143148 7454 scope.go:117] "RemoveContainer" containerID="37d48d754396bd1bc2527510939d9c6fc67f46e6581c207179d0cd71cd638e7d" Mar 19 12:02:27.162832 master-0 kubenswrapper[7454]: I0319 12:02:27.159028 7454 scope.go:117] "RemoveContainer" containerID="d27eb2ebaac292762ef9b813ca8ecd2562af748c7508755cdddd461f45526f66" Mar 19 12:02:27.162832 master-0 kubenswrapper[7454]: E0319 12:02:27.160409 7454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d27eb2ebaac292762ef9b813ca8ecd2562af748c7508755cdddd461f45526f66\": container with ID starting with d27eb2ebaac292762ef9b813ca8ecd2562af748c7508755cdddd461f45526f66 not found: ID does not exist" containerID="d27eb2ebaac292762ef9b813ca8ecd2562af748c7508755cdddd461f45526f66" Mar 19 12:02:27.162832 master-0 kubenswrapper[7454]: I0319 12:02:27.160445 7454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d27eb2ebaac292762ef9b813ca8ecd2562af748c7508755cdddd461f45526f66"} err="failed to get container status \"d27eb2ebaac292762ef9b813ca8ecd2562af748c7508755cdddd461f45526f66\": rpc error: code = NotFound desc = could not find container \"d27eb2ebaac292762ef9b813ca8ecd2562af748c7508755cdddd461f45526f66\": container with ID starting with d27eb2ebaac292762ef9b813ca8ecd2562af748c7508755cdddd461f45526f66 not found: ID does not exist" Mar 19 12:02:27.162832 master-0 kubenswrapper[7454]: I0319 12:02:27.160475 7454 scope.go:117] "RemoveContainer" containerID="84d3d820222fb615bbd44a77bcef4de4c96b78f545a4bc5490f5fa77f0e958e7" Mar 19 12:02:27.162832 master-0 kubenswrapper[7454]: E0319 12:02:27.160990 7454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84d3d820222fb615bbd44a77bcef4de4c96b78f545a4bc5490f5fa77f0e958e7\": container with ID starting with 84d3d820222fb615bbd44a77bcef4de4c96b78f545a4bc5490f5fa77f0e958e7 not found: ID does not exist" containerID="84d3d820222fb615bbd44a77bcef4de4c96b78f545a4bc5490f5fa77f0e958e7" Mar 19 12:02:27.162832 master-0 kubenswrapper[7454]: I0319 12:02:27.161033 7454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84d3d820222fb615bbd44a77bcef4de4c96b78f545a4bc5490f5fa77f0e958e7"} err="failed to get container status \"84d3d820222fb615bbd44a77bcef4de4c96b78f545a4bc5490f5fa77f0e958e7\": rpc error: code = NotFound desc = could not find container \"84d3d820222fb615bbd44a77bcef4de4c96b78f545a4bc5490f5fa77f0e958e7\": container with ID starting with 84d3d820222fb615bbd44a77bcef4de4c96b78f545a4bc5490f5fa77f0e958e7 not found: ID does not exist" Mar 19 12:02:27.162832 master-0 kubenswrapper[7454]: I0319 12:02:27.161063 7454 scope.go:117] "RemoveContainer" 
containerID="37d48d754396bd1bc2527510939d9c6fc67f46e6581c207179d0cd71cd638e7d" Mar 19 12:02:27.162832 master-0 kubenswrapper[7454]: E0319 12:02:27.161928 7454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37d48d754396bd1bc2527510939d9c6fc67f46e6581c207179d0cd71cd638e7d\": container with ID starting with 37d48d754396bd1bc2527510939d9c6fc67f46e6581c207179d0cd71cd638e7d not found: ID does not exist" containerID="37d48d754396bd1bc2527510939d9c6fc67f46e6581c207179d0cd71cd638e7d" Mar 19 12:02:27.162832 master-0 kubenswrapper[7454]: I0319 12:02:27.161981 7454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37d48d754396bd1bc2527510939d9c6fc67f46e6581c207179d0cd71cd638e7d"} err="failed to get container status \"37d48d754396bd1bc2527510939d9c6fc67f46e6581c207179d0cd71cd638e7d\": rpc error: code = NotFound desc = could not find container \"37d48d754396bd1bc2527510939d9c6fc67f46e6581c207179d0cd71cd638e7d\": container with ID starting with 37d48d754396bd1bc2527510939d9c6fc67f46e6581c207179d0cd71cd638e7d not found: ID does not exist" Mar 19 12:02:27.162832 master-0 kubenswrapper[7454]: I0319 12:02:27.162005 7454 scope.go:117] "RemoveContainer" containerID="d27eb2ebaac292762ef9b813ca8ecd2562af748c7508755cdddd461f45526f66" Mar 19 12:02:27.162832 master-0 kubenswrapper[7454]: I0319 12:02:27.162651 7454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d27eb2ebaac292762ef9b813ca8ecd2562af748c7508755cdddd461f45526f66"} err="failed to get container status \"d27eb2ebaac292762ef9b813ca8ecd2562af748c7508755cdddd461f45526f66\": rpc error: code = NotFound desc = could not find container \"d27eb2ebaac292762ef9b813ca8ecd2562af748c7508755cdddd461f45526f66\": container with ID starting with d27eb2ebaac292762ef9b813ca8ecd2562af748c7508755cdddd461f45526f66 not found: ID does not exist" Mar 19 12:02:27.162832 master-0 kubenswrapper[7454]: I0319 12:02:27.162684 7454 scope.go:117] "RemoveContainer" containerID="84d3d820222fb615bbd44a77bcef4de4c96b78f545a4bc5490f5fa77f0e958e7" Mar 19 12:02:27.164184 master-0 kubenswrapper[7454]: I0319 12:02:27.164085 7454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84d3d820222fb615bbd44a77bcef4de4c96b78f545a4bc5490f5fa77f0e958e7"} err="failed to get container status \"84d3d820222fb615bbd44a77bcef4de4c96b78f545a4bc5490f5fa77f0e958e7\": rpc error: code = NotFound desc = could not find container \"84d3d820222fb615bbd44a77bcef4de4c96b78f545a4bc5490f5fa77f0e958e7\": container with ID starting with 84d3d820222fb615bbd44a77bcef4de4c96b78f545a4bc5490f5fa77f0e958e7 not found: ID does not exist" Mar 19 12:02:27.164184 master-0 kubenswrapper[7454]: I0319 12:02:27.164106 7454 scope.go:117] "RemoveContainer" containerID="37d48d754396bd1bc2527510939d9c6fc67f46e6581c207179d0cd71cd638e7d" Mar 19 12:02:27.165868 master-0 kubenswrapper[7454]: I0319 12:02:27.164636 7454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37d48d754396bd1bc2527510939d9c6fc67f46e6581c207179d0cd71cd638e7d"} err="failed to get container status \"37d48d754396bd1bc2527510939d9c6fc67f46e6581c207179d0cd71cd638e7d\": rpc error: code = NotFound desc = could not find container \"37d48d754396bd1bc2527510939d9c6fc67f46e6581c207179d0cd71cd638e7d\": container with ID starting with 37d48d754396bd1bc2527510939d9c6fc67f46e6581c207179d0cd71cd638e7d not found: ID does not exist" Mar 
19 12:02:27.663034 master-0 kubenswrapper[7454]: I0319 12:02:27.662960 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:02:27.663034 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:02:27.663034 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:02:27.663034 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:02:27.663375 master-0 kubenswrapper[7454]: I0319 12:02:27.663039 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:02:28.099545 master-0 kubenswrapper[7454]: I0319 12:02:28.099485 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"ed7034eee202d25f8fdd5bf58084d919","Type":"ContainerStarted","Data":"7ba9fe238d802cb5b3d8a7a91252294e09ef5a02de2e8f653eef99bd12ecd678"} Mar 19 12:02:28.099545 master-0 kubenswrapper[7454]: I0319 12:02:28.099524 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"ed7034eee202d25f8fdd5bf58084d919","Type":"ContainerStarted","Data":"6d38396688a212d80e4b9440cc838a81e9ba0076c58cc35f80f3248581700f34"} Mar 19 12:02:28.099545 master-0 kubenswrapper[7454]: I0319 12:02:28.099533 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"ed7034eee202d25f8fdd5bf58084d919","Type":"ContainerStarted","Data":"d47b78b4162ef738abc79ae7fccddf86e10f2a7b582e6e8119dc73b890a42578"} Mar 19 12:02:28.099545 master-0 kubenswrapper[7454]: I0319 12:02:28.099542 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"ed7034eee202d25f8fdd5bf58084d919","Type":"ContainerStarted","Data":"190a2ede2af79ab256016ad5364d037b5e12b69b5a7a2227b7287826e6597c14"} Mar 19 12:02:28.131200 master-0 kubenswrapper[7454]: I0319 12:02:28.131087 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.131055157 podStartE2EDuration="2.131055157s" podCreationTimestamp="2026-03-19 12:02:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:02:28.130481668 +0000 UTC m=+517.760947611" watchObservedRunningTime="2026-03-19 12:02:28.131055157 +0000 UTC m=+517.761521070" Mar 19 12:02:28.415532 master-0 kubenswrapper[7454]: I0319 12:02:28.415205 7454 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 19 12:02:28.455764 master-0 kubenswrapper[7454]: I0319 12:02:28.455131 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b425669d-6f80-4a2b-b2f2-5c6766654c6c-kubelet-dir\") pod \"b425669d-6f80-4a2b-b2f2-5c6766654c6c\" (UID: \"b425669d-6f80-4a2b-b2f2-5c6766654c6c\") " Mar 19 12:02:28.455764 master-0 kubenswrapper[7454]: I0319 12:02:28.455181 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b425669d-6f80-4a2b-b2f2-5c6766654c6c-kube-api-access\") pod \"b425669d-6f80-4a2b-b2f2-5c6766654c6c\" (UID: \"b425669d-6f80-4a2b-b2f2-5c6766654c6c\") " Mar 19 12:02:28.455764 master-0 kubenswrapper[7454]: I0319 12:02:28.455204 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b425669d-6f80-4a2b-b2f2-5c6766654c6c-var-lock\") pod \"b425669d-6f80-4a2b-b2f2-5c6766654c6c\" (UID: \"b425669d-6f80-4a2b-b2f2-5c6766654c6c\") " Mar 19 12:02:28.455764 master-0 kubenswrapper[7454]: I0319 12:02:28.455399 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b425669d-6f80-4a2b-b2f2-5c6766654c6c-var-lock" (OuterVolumeSpecName: "var-lock") pod "b425669d-6f80-4a2b-b2f2-5c6766654c6c" (UID: "b425669d-6f80-4a2b-b2f2-5c6766654c6c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:02:28.455764 master-0 kubenswrapper[7454]: I0319 12:02:28.455428 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b425669d-6f80-4a2b-b2f2-5c6766654c6c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b425669d-6f80-4a2b-b2f2-5c6766654c6c" (UID: "b425669d-6f80-4a2b-b2f2-5c6766654c6c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:02:28.458723 master-0 kubenswrapper[7454]: I0319 12:02:28.458695 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b425669d-6f80-4a2b-b2f2-5c6766654c6c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b425669d-6f80-4a2b-b2f2-5c6766654c6c" (UID: "b425669d-6f80-4a2b-b2f2-5c6766654c6c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:02:28.556676 master-0 kubenswrapper[7454]: I0319 12:02:28.556247 7454 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b425669d-6f80-4a2b-b2f2-5c6766654c6c-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 12:02:28.556676 master-0 kubenswrapper[7454]: I0319 12:02:28.556280 7454 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b425669d-6f80-4a2b-b2f2-5c6766654c6c-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 19 12:02:28.556676 master-0 kubenswrapper[7454]: I0319 12:02:28.556290 7454 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b425669d-6f80-4a2b-b2f2-5c6766654c6c-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 19 12:02:28.658789 master-0 kubenswrapper[7454]: I0319 12:02:28.658650 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:02:28.658789 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:02:28.658789 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:02:28.658789 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:02:28.658789 master-0 kubenswrapper[7454]: I0319 12:02:28.658710 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:02:29.107484 master-0 kubenswrapper[7454]: I0319 12:02:29.107173 7454 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 19 12:02:29.107484 master-0 kubenswrapper[7454]: I0319 12:02:29.107166 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"b425669d-6f80-4a2b-b2f2-5c6766654c6c","Type":"ContainerDied","Data":"bd02abc3df1ea2ca997096da3d27136acff3102126289b07e7fa867e530a0c53"} Mar 19 12:02:29.107484 master-0 kubenswrapper[7454]: I0319 12:02:29.107363 7454 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd02abc3df1ea2ca997096da3d27136acff3102126289b07e7fa867e530a0c53" Mar 19 12:02:29.262638 master-0 kubenswrapper[7454]: I0319 12:02:29.262553 7454 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 19 12:02:29.263019 master-0 kubenswrapper[7454]: I0319 12:02:29.262831 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" containerID="cri-o://5ae4788fe8a4fbccec56e9e4515eedb286ece7ed48749691d96f6fb8097bac2c" gracePeriod=30 Mar 19 12:02:29.264906 master-0 kubenswrapper[7454]: I0319 12:02:29.264846 7454 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 19 12:02:29.265193 master-0 kubenswrapper[7454]: E0319 12:02:29.265149 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b425669d-6f80-4a2b-b2f2-5c6766654c6c" containerName="installer" Mar 19 12:02:29.265193 master-0 kubenswrapper[7454]: I0319 12:02:29.265179 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="b425669d-6f80-4a2b-b2f2-5c6766654c6c" containerName="installer" Mar 19 12:02:29.265298 master-0 kubenswrapper[7454]: E0319 12:02:29.265212 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 19 12:02:29.265298 master-0 kubenswrapper[7454]: I0319 12:02:29.265223 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 19 12:02:29.265298 master-0 kubenswrapper[7454]: E0319 12:02:29.265238 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 19 12:02:29.265298 master-0 kubenswrapper[7454]: I0319 12:02:29.265246 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 19 12:02:29.265413 master-0 kubenswrapper[7454]: I0319 12:02:29.265387 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="b425669d-6f80-4a2b-b2f2-5c6766654c6c" containerName="installer" Mar 19 12:02:29.265413 master-0 kubenswrapper[7454]: I0319 12:02:29.265408 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 19 12:02:29.265476 master-0 kubenswrapper[7454]: I0319 12:02:29.265421 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 19 12:02:29.272484 master-0 kubenswrapper[7454]: I0319 12:02:29.271783 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:02:29.314515 master-0 kubenswrapper[7454]: I0319 12:02:29.314361 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 19 12:02:29.377104 master-0 kubenswrapper[7454]: I0319 12:02:29.376950 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:02:29.377104 master-0 kubenswrapper[7454]: I0319 12:02:29.377039 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:02:29.429979 master-0 kubenswrapper[7454]: I0319 12:02:29.429928 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 19 12:02:29.477710 master-0 kubenswrapper[7454]: I0319 12:02:29.477656 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"c83737980b9ee109184b1d78e942cf36\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " Mar 19 12:02:29.477956 master-0 kubenswrapper[7454]: I0319 12:02:29.477791 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"c83737980b9ee109184b1d78e942cf36\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " Mar 19 12:02:29.477956 master-0 kubenswrapper[7454]: I0319 12:02:29.477834 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets" (OuterVolumeSpecName: "secrets") pod "c83737980b9ee109184b1d78e942cf36" (UID: "c83737980b9ee109184b1d78e942cf36"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:02:29.478075 master-0 kubenswrapper[7454]: I0319 12:02:29.478009 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs" (OuterVolumeSpecName: "logs") pod "c83737980b9ee109184b1d78e942cf36" (UID: "c83737980b9ee109184b1d78e942cf36"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:02:29.478130 master-0 kubenswrapper[7454]: I0319 12:02:29.478087 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:02:29.478227 master-0 kubenswrapper[7454]: I0319 12:02:29.478186 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:02:29.478312 master-0 kubenswrapper[7454]: I0319 12:02:29.478282 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:02:29.478405 master-0 kubenswrapper[7454]: I0319 12:02:29.478373 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:02:29.478491 master-0 kubenswrapper[7454]: I0319 12:02:29.478472 7454 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") on node \"master-0\" DevicePath \"\"" Mar 19 12:02:29.478527 master-0 kubenswrapper[7454]: I0319 12:02:29.478494 7454 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") on node \"master-0\" DevicePath \"\"" Mar 19 12:02:29.504661 master-0 kubenswrapper[7454]: I0319 12:02:29.504592 7454 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="4e2413c2-0b83-4447-bc8d-a25d315a238a" Mar 19 12:02:29.609014 master-0 kubenswrapper[7454]: I0319 12:02:29.608940 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:02:29.632274 master-0 kubenswrapper[7454]: W0319 12:02:29.632177 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8413125cf444e5c95f023c5dd9c6151e.slice/crio-8133bbc1cbc26e1060a9c5f9a0e6097cd17b1d59b0065a7002ebf7fa91eeabbd WatchSource:0}: Error finding container 8133bbc1cbc26e1060a9c5f9a0e6097cd17b1d59b0065a7002ebf7fa91eeabbd: Status 404 returned error can't find the container with id 8133bbc1cbc26e1060a9c5f9a0e6097cd17b1d59b0065a7002ebf7fa91eeabbd Mar 19 12:02:29.659109 master-0 kubenswrapper[7454]: I0319 12:02:29.659035 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:02:29.659109 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:02:29.659109 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:02:29.659109 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:02:29.659109 master-0 kubenswrapper[7454]: I0319 12:02:29.659116 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:02:29.659558 master-0 kubenswrapper[7454]: I0319 12:02:29.659168 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 12:02:29.659879 master-0 kubenswrapper[7454]: I0319 12:02:29.659843 7454 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"fc66004bdf7840ad3f084c0dfa71eeb2520e8e4a081e3e6ac34bc77b6fbd71ea"} pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" containerMessage="Container router failed startup probe, will be restarted" Mar 19 12:02:29.659975 master-0 kubenswrapper[7454]: I0319 12:02:29.659880 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" containerID="cri-o://fc66004bdf7840ad3f084c0dfa71eeb2520e8e4a081e3e6ac34bc77b6fbd71ea" gracePeriod=3600 Mar 19 12:02:30.117978 master-0 kubenswrapper[7454]: I0319 12:02:30.117520 7454 generic.go:334] "Generic (PLEG): container finished" podID="12d71593-ee54-4321-bc0f-a24261873bd1" containerID="bce063a1f339b0aa356b146565a1aad286cac9d49e6c2b9606f7a6d9709c3159" exitCode=0 Mar 19 12:02:30.117978 master-0 kubenswrapper[7454]: I0319 12:02:30.117599 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"12d71593-ee54-4321-bc0f-a24261873bd1","Type":"ContainerDied","Data":"bce063a1f339b0aa356b146565a1aad286cac9d49e6c2b9606f7a6d9709c3159"} Mar 19 12:02:30.122130 master-0 kubenswrapper[7454]: I0319 12:02:30.122080 7454 generic.go:334] "Generic (PLEG): container finished" podID="8413125cf444e5c95f023c5dd9c6151e" containerID="c2d0e5370bf40fbdeb8944db50e89737b0a663a2967772c4a3f69a71c3dd5111" exitCode=0 Mar 19 12:02:30.122237 master-0 kubenswrapper[7454]: I0319 12:02:30.122177 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerDied","Data":"c2d0e5370bf40fbdeb8944db50e89737b0a663a2967772c4a3f69a71c3dd5111"} Mar 19 12:02:30.122237 master-0 kubenswrapper[7454]: I0319 12:02:30.122208 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"8133bbc1cbc26e1060a9c5f9a0e6097cd17b1d59b0065a7002ebf7fa91eeabbd"} Mar 19 12:02:30.125081 master-0 kubenswrapper[7454]: I0319 12:02:30.124998 7454 generic.go:334] "Generic (PLEG): container finished" podID="c83737980b9ee109184b1d78e942cf36" containerID="5ae4788fe8a4fbccec56e9e4515eedb286ece7ed48749691d96f6fb8097bac2c" exitCode=0 Mar 19 12:02:30.125081 master-0 kubenswrapper[7454]: I0319 12:02:30.125035 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 19 12:02:30.125081 master-0 kubenswrapper[7454]: I0319 12:02:30.125052 7454 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48efbe72c10829dd5908b740a4651763088ff7358d327f0b015844979a99b5dd" Mar 19 12:02:30.125265 master-0 kubenswrapper[7454]: I0319 12:02:30.125096 7454 scope.go:117] "RemoveContainer" containerID="6606dc49963e1cc0f10c3000efffd7cbb91c76beb712be6d1c6cb91c1b4a7c79" Mar 19 12:02:30.644136 master-0 kubenswrapper[7454]: I0319 12:02:30.644074 7454 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c83737980b9ee109184b1d78e942cf36" path="/var/lib/kubelet/pods/c83737980b9ee109184b1d78e942cf36/volumes" Mar 19 12:02:30.644602 master-0 kubenswrapper[7454]: I0319 12:02:30.644329 7454 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Mar 19 12:02:30.665347 master-0 kubenswrapper[7454]: I0319 12:02:30.665242 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 19 12:02:30.665347 master-0 kubenswrapper[7454]: I0319 12:02:30.665329 7454 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="4e2413c2-0b83-4447-bc8d-a25d315a238a" Mar 19 12:02:30.673837 master-0 kubenswrapper[7454]: I0319 12:02:30.673755 7454 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 19 12:02:30.673837 master-0 kubenswrapper[7454]: I0319 12:02:30.673831 7454 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="4e2413c2-0b83-4447-bc8d-a25d315a238a" Mar 19 12:02:31.134807 master-0 kubenswrapper[7454]: I0319 12:02:31.134678 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"0b7cadf57c1ff393897dfb481975475d3dd6a6c04a5c37d34ce9d4c14fc55d3e"} Mar 19 12:02:31.134807 master-0 kubenswrapper[7454]: I0319 12:02:31.134744 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"4d47a2e9aa1638460fa6ef96bf2d0249d38af6d72c57ab083a850e1599710d6d"} Mar 19 12:02:31.134807 master-0 kubenswrapper[7454]: I0319 12:02:31.134766 7454 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"6b57ecd81087b581c66ac63d9f2f1ef10437e651539d71691b6a055612b562c9"} Mar 19 12:02:31.135155 master-0 kubenswrapper[7454]: I0319 12:02:31.135129 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:02:31.173866 master-0 kubenswrapper[7454]: I0319 12:02:31.171185 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=2.171164849 podStartE2EDuration="2.171164849s" podCreationTimestamp="2026-03-19 12:02:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:02:31.168079528 +0000 UTC m=+520.798545461" watchObservedRunningTime="2026-03-19 12:02:31.171164849 +0000 UTC m=+520.801630772" Mar 19 12:02:31.416369 master-0 kubenswrapper[7454]: I0319 12:02:31.416316 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 19 12:02:31.518568 master-0 kubenswrapper[7454]: I0319 12:02:31.518207 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12d71593-ee54-4321-bc0f-a24261873bd1-kube-api-access\") pod \"12d71593-ee54-4321-bc0f-a24261873bd1\" (UID: \"12d71593-ee54-4321-bc0f-a24261873bd1\") " Mar 19 12:02:31.518568 master-0 kubenswrapper[7454]: I0319 12:02:31.518413 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/12d71593-ee54-4321-bc0f-a24261873bd1-var-lock\") pod \"12d71593-ee54-4321-bc0f-a24261873bd1\" (UID: \"12d71593-ee54-4321-bc0f-a24261873bd1\") " Mar 19 12:02:31.518568 master-0 kubenswrapper[7454]: I0319 12:02:31.518440 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/12d71593-ee54-4321-bc0f-a24261873bd1-kubelet-dir\") pod \"12d71593-ee54-4321-bc0f-a24261873bd1\" (UID: \"12d71593-ee54-4321-bc0f-a24261873bd1\") " Mar 19 12:02:31.518902 master-0 kubenswrapper[7454]: I0319 12:02:31.518702 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12d71593-ee54-4321-bc0f-a24261873bd1-var-lock" (OuterVolumeSpecName: "var-lock") pod "12d71593-ee54-4321-bc0f-a24261873bd1" (UID: "12d71593-ee54-4321-bc0f-a24261873bd1"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:02:31.518902 master-0 kubenswrapper[7454]: I0319 12:02:31.518809 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12d71593-ee54-4321-bc0f-a24261873bd1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "12d71593-ee54-4321-bc0f-a24261873bd1" (UID: "12d71593-ee54-4321-bc0f-a24261873bd1"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:02:31.520918 master-0 kubenswrapper[7454]: I0319 12:02:31.520876 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12d71593-ee54-4321-bc0f-a24261873bd1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "12d71593-ee54-4321-bc0f-a24261873bd1" (UID: "12d71593-ee54-4321-bc0f-a24261873bd1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:02:31.620517 master-0 kubenswrapper[7454]: I0319 12:02:31.620448 7454 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/12d71593-ee54-4321-bc0f-a24261873bd1-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 19 12:02:31.620517 master-0 kubenswrapper[7454]: I0319 12:02:31.620500 7454 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/12d71593-ee54-4321-bc0f-a24261873bd1-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 12:02:31.620517 master-0 kubenswrapper[7454]: I0319 12:02:31.620512 7454 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12d71593-ee54-4321-bc0f-a24261873bd1-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 19 12:02:32.143741 master-0 kubenswrapper[7454]: I0319 12:02:32.143669 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"12d71593-ee54-4321-bc0f-a24261873bd1","Type":"ContainerDied","Data":"ed283c061d1fd79e9b8f04b4ebc51756f0469a7d30532249627ffce7936f190b"} Mar 19 12:02:32.143741 master-0 kubenswrapper[7454]: I0319 12:02:32.143725 7454 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed283c061d1fd79e9b8f04b4ebc51756f0469a7d30532249627ffce7936f190b" Mar 19 12:02:32.144952 master-0 kubenswrapper[7454]: I0319 12:02:32.144910 7454 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 19 12:02:35.151530 master-0 kubenswrapper[7454]: I0319 12:02:35.151482 7454 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 19 12:02:35.152001 master-0 kubenswrapper[7454]: I0319 12:02:35.151906 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" containerID="cri-o://f1e75c3306e850702c2dc6476b3f22a646b8072b1c422645e39adf1879a4acf8" gracePeriod=30 Mar 19 12:02:35.152069 master-0 kubenswrapper[7454]: I0319 12:02:35.152003 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" containerID="cri-o://ae877f625bae80d8605f3f0a14837fe860251e1f110b4f53ede269b520516c48" gracePeriod=30 Mar 19 12:02:35.152069 master-0 kubenswrapper[7454]: I0319 12:02:35.151896 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" containerID="cri-o://fc9e72422a0246db78ed7d7b829fa16f2e8eddf756aaf9341f686725870d6083" gracePeriod=30 Mar 19 12:02:35.152069 master-0 kubenswrapper[7454]: I0319 12:02:35.152018 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" containerID="cri-o://b1644591703fae93237fc31fd150f1f15c8f0859003326d27d1f2dc973286631" gracePeriod=30 Mar 19 12:02:35.152184 master-0 kubenswrapper[7454]: I0319 12:02:35.152056 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-readyz" containerID="cri-o://a973f0ee8875b2d0a945786f9dfa74332d931ec7b77d7601fa9f321c2f8b22ac" gracePeriod=30 Mar 19 12:02:35.154004 master-0 kubenswrapper[7454]: I0319 12:02:35.153985 7454 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 19 12:02:35.154715 master-0 kubenswrapper[7454]: E0319 12:02:35.154491 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12d71593-ee54-4321-bc0f-a24261873bd1" containerName="installer" Mar 19 12:02:35.154872 master-0 kubenswrapper[7454]: I0319 12:02:35.154835 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="12d71593-ee54-4321-bc0f-a24261873bd1" containerName="installer" Mar 19 12:02:35.154995 master-0 kubenswrapper[7454]: E0319 12:02:35.154982 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-resources-copy" Mar 19 12:02:35.155116 master-0 kubenswrapper[7454]: I0319 12:02:35.155081 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-resources-copy" Mar 19 12:02:35.155223 master-0 kubenswrapper[7454]: E0319 12:02:35.155210 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" Mar 19 12:02:35.155326 master-0 kubenswrapper[7454]: I0319 12:02:35.155312 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" Mar 19 12:02:35.155455 master-0 kubenswrapper[7454]: E0319 12:02:35.155442 7454 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="setup" Mar 19 12:02:35.155736 master-0 kubenswrapper[7454]: I0319 12:02:35.155724 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="setup" Mar 19 12:02:35.155833 master-0 kubenswrapper[7454]: E0319 12:02:35.155819 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-ensure-env-vars" Mar 19 12:02:35.156033 master-0 kubenswrapper[7454]: I0319 12:02:35.155902 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-ensure-env-vars" Mar 19 12:02:35.156127 master-0 kubenswrapper[7454]: E0319 12:02:35.156113 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" Mar 19 12:02:35.156199 master-0 kubenswrapper[7454]: I0319 12:02:35.156188 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" Mar 19 12:02:35.156272 master-0 kubenswrapper[7454]: E0319 12:02:35.156260 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" Mar 19 12:02:35.156345 master-0 kubenswrapper[7454]: I0319 12:02:35.156333 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" Mar 19 12:02:35.156456 master-0 kubenswrapper[7454]: E0319 12:02:35.156414 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" Mar 19 12:02:35.156882 master-0 kubenswrapper[7454]: I0319 12:02:35.156865 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" Mar 19 12:02:35.159056 master-0 kubenswrapper[7454]: E0319 12:02:35.159003 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-readyz" Mar 19 12:02:35.159251 master-0 kubenswrapper[7454]: I0319 12:02:35.159235 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-readyz" Mar 19 12:02:35.159692 master-0 kubenswrapper[7454]: I0319 12:02:35.159668 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-readyz" Mar 19 12:02:35.159868 master-0 kubenswrapper[7454]: I0319 12:02:35.159849 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="12d71593-ee54-4321-bc0f-a24261873bd1" containerName="installer" Mar 19 12:02:35.160086 master-0 kubenswrapper[7454]: I0319 12:02:35.160053 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" Mar 19 12:02:35.160260 master-0 kubenswrapper[7454]: I0319 12:02:35.160236 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" Mar 19 12:02:35.160435 master-0 kubenswrapper[7454]: I0319 12:02:35.160402 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" Mar 19 12:02:35.160581 master-0 kubenswrapper[7454]: I0319 12:02:35.160554 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" Mar 19 12:02:35.271667 master-0 
kubenswrapper[7454]: I0319 12:02:35.271589 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:02:35.271667 master-0 kubenswrapper[7454]: I0319 12:02:35.271641 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:02:35.271667 master-0 kubenswrapper[7454]: I0319 12:02:35.271659 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:02:35.272185 master-0 kubenswrapper[7454]: I0319 12:02:35.271724 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:02:35.272185 master-0 kubenswrapper[7454]: I0319 12:02:35.271823 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:02:35.272185 master-0 kubenswrapper[7454]: I0319 12:02:35.271858 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:02:35.372897 master-0 kubenswrapper[7454]: I0319 12:02:35.372775 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:02:35.372897 master-0 kubenswrapper[7454]: I0319 12:02:35.372846 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:02:35.372897 master-0 kubenswrapper[7454]: I0319 12:02:35.372865 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:02:35.373360 master-0 kubenswrapper[7454]: I0319 12:02:35.372939 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:02:35.373360 master-0 kubenswrapper[7454]: I0319 12:02:35.373008 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:02:35.373360 master-0 kubenswrapper[7454]: I0319 12:02:35.373173 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:02:35.373360 master-0 kubenswrapper[7454]: I0319 12:02:35.373189 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:02:35.373360 master-0 kubenswrapper[7454]: I0319 12:02:35.373311 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:02:35.373360 master-0 kubenswrapper[7454]: I0319 12:02:35.373335 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:02:35.373547 master-0 kubenswrapper[7454]: I0319 12:02:35.373358 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:02:35.373547 master-0 kubenswrapper[7454]: I0319 12:02:35.373426 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:02:35.373547 master-0 kubenswrapper[7454]: I0319 12:02:35.373474 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:02:36.186248 master-0 kubenswrapper[7454]: I0319 12:02:36.186203 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log" Mar 19 12:02:36.187836 master-0 kubenswrapper[7454]: I0319 12:02:36.187327 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log" Mar 19 12:02:36.189323 master-0 kubenswrapper[7454]: I0319 12:02:36.189281 7454 
generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="f1e75c3306e850702c2dc6476b3f22a646b8072b1c422645e39adf1879a4acf8" exitCode=2 Mar 19 12:02:36.189400 master-0 kubenswrapper[7454]: I0319 12:02:36.189321 7454 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="a973f0ee8875b2d0a945786f9dfa74332d931ec7b77d7601fa9f321c2f8b22ac" exitCode=0 Mar 19 12:02:36.189400 master-0 kubenswrapper[7454]: I0319 12:02:36.189335 7454 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="b1644591703fae93237fc31fd150f1f15c8f0859003326d27d1f2dc973286631" exitCode=2 Mar 19 12:02:36.648599 master-0 kubenswrapper[7454]: I0319 12:02:36.648545 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:02:36.648599 master-0 kubenswrapper[7454]: I0319 12:02:36.648610 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:02:36.648892 master-0 kubenswrapper[7454]: I0319 12:02:36.648629 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:02:36.648892 master-0 kubenswrapper[7454]: I0319 12:02:36.648646 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:02:36.648892 master-0 kubenswrapper[7454]: I0319 12:02:36.648746 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:02:36.649028 master-0 kubenswrapper[7454]: I0319 12:02:36.648899 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:02:37.200503 master-0 kubenswrapper[7454]: I0319 12:02:37.200422 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:02:37.203588 master-0 kubenswrapper[7454]: I0319 12:02:37.203535 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:02:46.188379 master-0 kubenswrapper[7454]: E0319 12:02:46.188296 7454 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io master-0)" Mar 19 12:02:50.284961 master-0 kubenswrapper[7454]: I0319 12:02:50.284906 7454 generic.go:334] "Generic (PLEG): container finished" podID="8b48817c-05cd-430b-9b1f-9cc037f1ca77" containerID="4ffdbe686ec312f51e0f69bfddfcf8ddbe9d68d7435e9ea8d330dd01862adb85" exitCode=0 Mar 19 12:02:50.285628 master-0 kubenswrapper[7454]: I0319 12:02:50.284964 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"8b48817c-05cd-430b-9b1f-9cc037f1ca77","Type":"ContainerDied","Data":"4ffdbe686ec312f51e0f69bfddfcf8ddbe9d68d7435e9ea8d330dd01862adb85"} Mar 19 12:02:51.584242 master-0 kubenswrapper[7454]: I0319 12:02:51.584192 7454 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 19 12:02:51.690971 master-0 kubenswrapper[7454]: I0319 12:02:51.690742 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8b48817c-05cd-430b-9b1f-9cc037f1ca77-var-lock\") pod \"8b48817c-05cd-430b-9b1f-9cc037f1ca77\" (UID: \"8b48817c-05cd-430b-9b1f-9cc037f1ca77\") " Mar 19 12:02:51.690971 master-0 kubenswrapper[7454]: I0319 12:02:51.690924 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b48817c-05cd-430b-9b1f-9cc037f1ca77-var-lock" (OuterVolumeSpecName: "var-lock") pod "8b48817c-05cd-430b-9b1f-9cc037f1ca77" (UID: "8b48817c-05cd-430b-9b1f-9cc037f1ca77"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:02:51.691254 master-0 kubenswrapper[7454]: I0319 12:02:51.690991 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b48817c-05cd-430b-9b1f-9cc037f1ca77-kube-api-access\") pod \"8b48817c-05cd-430b-9b1f-9cc037f1ca77\" (UID: \"8b48817c-05cd-430b-9b1f-9cc037f1ca77\") " Mar 19 12:02:51.691254 master-0 kubenswrapper[7454]: I0319 12:02:51.691075 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b48817c-05cd-430b-9b1f-9cc037f1ca77-kubelet-dir\") pod \"8b48817c-05cd-430b-9b1f-9cc037f1ca77\" (UID: \"8b48817c-05cd-430b-9b1f-9cc037f1ca77\") " Mar 19 12:02:51.691254 master-0 kubenswrapper[7454]: I0319 12:02:51.691134 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b48817c-05cd-430b-9b1f-9cc037f1ca77-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8b48817c-05cd-430b-9b1f-9cc037f1ca77" (UID: "8b48817c-05cd-430b-9b1f-9cc037f1ca77"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:02:51.691833 master-0 kubenswrapper[7454]: I0319 12:02:51.691775 7454 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b48817c-05cd-430b-9b1f-9cc037f1ca77-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 12:02:51.691833 master-0 kubenswrapper[7454]: I0319 12:02:51.691829 7454 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8b48817c-05cd-430b-9b1f-9cc037f1ca77-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 19 12:02:51.694216 master-0 kubenswrapper[7454]: I0319 12:02:51.694148 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b48817c-05cd-430b-9b1f-9cc037f1ca77-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8b48817c-05cd-430b-9b1f-9cc037f1ca77" (UID: "8b48817c-05cd-430b-9b1f-9cc037f1ca77"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:02:51.793169 master-0 kubenswrapper[7454]: I0319 12:02:51.793047 7454 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b48817c-05cd-430b-9b1f-9cc037f1ca77-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 19 12:02:52.316672 master-0 kubenswrapper[7454]: I0319 12:02:52.316608 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"8b48817c-05cd-430b-9b1f-9cc037f1ca77","Type":"ContainerDied","Data":"7220eeff67efce450283cc72bc4e2acf7316ae81a06fc10749f8bb6f974b934b"} Mar 19 12:02:52.316672 master-0 kubenswrapper[7454]: I0319 12:02:52.316652 7454 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7220eeff67efce450283cc72bc4e2acf7316ae81a06fc10749f8bb6f974b934b" Mar 19 12:02:52.316948 master-0 kubenswrapper[7454]: I0319 12:02:52.316712 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 19 12:02:54.902201 master-0 kubenswrapper[7454]: I0319 12:02:54.902133 7454 scope.go:117] "RemoveContainer" containerID="5ae4788fe8a4fbccec56e9e4515eedb286ece7ed48749691d96f6fb8097bac2c" Mar 19 12:02:56.189415 master-0 kubenswrapper[7454]: E0319 12:02:56.189345 7454 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 12:03:05.407508 master-0 kubenswrapper[7454]: I0319 12:03:05.407444 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log" Mar 19 12:03:05.409216 master-0 kubenswrapper[7454]: I0319 12:03:05.409172 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log" Mar 19 12:03:05.410088 master-0 kubenswrapper[7454]: I0319 12:03:05.410055 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd/0.log" Mar 19 12:03:05.410860 master-0 kubenswrapper[7454]: I0319 12:03:05.410779 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcdctl/0.log" Mar 19 12:03:05.412610 master-0 kubenswrapper[7454]: I0319 12:03:05.412567 7454 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="ae877f625bae80d8605f3f0a14837fe860251e1f110b4f53ede269b520516c48" exitCode=137 Mar 19 12:03:05.412610 master-0 kubenswrapper[7454]: I0319 12:03:05.412600 7454 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="fc9e72422a0246db78ed7d7b829fa16f2e8eddf756aaf9341f686725870d6083" exitCode=137 Mar 19 12:03:05.740935 master-0 kubenswrapper[7454]: I0319 12:03:05.739675 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log" Mar 19 12:03:05.740935 master-0 kubenswrapper[7454]: I0319 12:03:05.740643 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log" Mar 19 12:03:05.741351 master-0 kubenswrapper[7454]: I0319 12:03:05.741319 7454 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd/0.log" Mar 19 12:03:05.742025 master-0 kubenswrapper[7454]: I0319 12:03:05.741975 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcdctl/0.log" Mar 19 12:03:05.745095 master-0 kubenswrapper[7454]: I0319 12:03:05.745062 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 19 12:03:05.890819 master-0 kubenswrapper[7454]: I0319 12:03:05.890722 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 19 12:03:05.891123 master-0 kubenswrapper[7454]: I0319 12:03:05.890787 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 19 12:03:05.891123 master-0 kubenswrapper[7454]: I0319 12:03:05.890902 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 19 12:03:05.891123 master-0 kubenswrapper[7454]: I0319 12:03:05.890940 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 19 12:03:05.891123 master-0 kubenswrapper[7454]: I0319 12:03:05.890991 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 19 12:03:05.891123 master-0 kubenswrapper[7454]: I0319 12:03:05.890969 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:03:05.891123 master-0 kubenswrapper[7454]: I0319 12:03:05.891014 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 19 12:03:05.891123 master-0 kubenswrapper[7454]: I0319 12:03:05.891052 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "usr-local-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:03:05.891123 master-0 kubenswrapper[7454]: I0319 12:03:05.891052 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir" (OuterVolumeSpecName: "data-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:03:05.891123 master-0 kubenswrapper[7454]: I0319 12:03:05.891092 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir" (OuterVolumeSpecName: "log-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:03:05.891123 master-0 kubenswrapper[7454]: I0319 12:03:05.891106 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:03:05.891123 master-0 kubenswrapper[7454]: I0319 12:03:05.891075 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "static-pod-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:03:05.892086 master-0 kubenswrapper[7454]: I0319 12:03:05.892040 7454 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 12:03:05.892086 master-0 kubenswrapper[7454]: I0319 12:03:05.892072 7454 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 12:03:05.892086 master-0 kubenswrapper[7454]: I0319 12:03:05.892088 7454 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 12:03:05.892231 master-0 kubenswrapper[7454]: I0319 12:03:05.892099 7454 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 12:03:05.892231 master-0 kubenswrapper[7454]: I0319 12:03:05.892114 7454 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") on node \"master-0\" DevicePath \"\"" Mar 19 12:03:05.892231 master-0 kubenswrapper[7454]: I0319 12:03:05.892123 7454 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 12:03:06.190527 master-0 kubenswrapper[7454]: E0319 12:03:06.190462 7454 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 12:03:06.424762 master-0 kubenswrapper[7454]: I0319 12:03:06.424684 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log" Mar 19 12:03:06.426289 master-0 kubenswrapper[7454]: I0319 12:03:06.426236 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log" Mar 19 12:03:06.427354 master-0 kubenswrapper[7454]: I0319 12:03:06.427300 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd/0.log" Mar 19 12:03:06.428152 master-0 kubenswrapper[7454]: I0319 12:03:06.428124 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcdctl/0.log" Mar 19 12:03:06.430067 master-0 kubenswrapper[7454]: I0319 12:03:06.430025 7454 scope.go:117] "RemoveContainer" containerID="f1e75c3306e850702c2dc6476b3f22a646b8072b1c422645e39adf1879a4acf8" Mar 19 12:03:06.430337 master-0 kubenswrapper[7454]: I0319 12:03:06.430299 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 19 12:03:06.454718 master-0 kubenswrapper[7454]: I0319 12:03:06.454671 7454 scope.go:117] "RemoveContainer" containerID="a973f0ee8875b2d0a945786f9dfa74332d931ec7b77d7601fa9f321c2f8b22ac" Mar 19 12:03:06.477265 master-0 kubenswrapper[7454]: I0319 12:03:06.477208 7454 scope.go:117] "RemoveContainer" containerID="b1644591703fae93237fc31fd150f1f15c8f0859003326d27d1f2dc973286631" Mar 19 12:03:06.491318 master-0 kubenswrapper[7454]: I0319 12:03:06.491174 7454 scope.go:117] "RemoveContainer" containerID="ae877f625bae80d8605f3f0a14837fe860251e1f110b4f53ede269b520516c48" Mar 19 12:03:06.505449 master-0 kubenswrapper[7454]: I0319 12:03:06.505403 7454 scope.go:117] "RemoveContainer" containerID="fc9e72422a0246db78ed7d7b829fa16f2e8eddf756aaf9341f686725870d6083" Mar 19 12:03:06.521779 master-0 kubenswrapper[7454]: I0319 12:03:06.521652 7454 scope.go:117] "RemoveContainer" containerID="0d9f4d5c57a3e2693c6c9591c7e86b98f1d2ab85c4a622f907e544850edaa7ba" Mar 19 12:03:06.541388 master-0 kubenswrapper[7454]: I0319 12:03:06.541307 7454 scope.go:117] "RemoveContainer" containerID="1997d87abd59fc12165851e197aa04b956b4477ab2970792d896817a67fd51a4" Mar 19 12:03:06.556937 master-0 kubenswrapper[7454]: I0319 12:03:06.556890 7454 scope.go:117] "RemoveContainer" containerID="fdd600f8cdf3f0f95b3056a22a1e42b087a6ae97aca51e424c6d9174012b4280" Mar 19 12:03:06.650777 master-0 kubenswrapper[7454]: I0319 12:03:06.650711 7454 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24b4ed170d527099878cb5fdd508a2fb" path="/var/lib/kubelet/pods/24b4ed170d527099878cb5fdd508a2fb/volumes" Mar 19 12:03:14.633733 master-0 kubenswrapper[7454]: I0319 12:03:14.633623 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 19 12:03:14.659697 master-0 kubenswrapper[7454]: I0319 12:03:14.659637 7454 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="bb9699aa-8885-49ec-a3b3-8c199d95bbf9" Mar 19 12:03:14.659697 master-0 kubenswrapper[7454]: I0319 12:03:14.659679 7454 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="bb9699aa-8885-49ec-a3b3-8c199d95bbf9" Mar 19 12:03:15.414785 master-0 kubenswrapper[7454]: E0319 12:03:15.414639 7454 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T12:03:05Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T12:03:05Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T12:03:05Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T12:03:05Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 12:03:16.192079 master-0 kubenswrapper[7454]: E0319 12:03:16.191963 7454 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 12:03:16.505970 master-0 kubenswrapper[7454]: I0319 12:03:16.505873 7454 generic.go:334] "Generic (PLEG): container finished" podID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerID="fc66004bdf7840ad3f084c0dfa71eeb2520e8e4a081e3e6ac34bc77b6fbd71ea" exitCode=0 Mar 19 12:03:16.505970 master-0 kubenswrapper[7454]: I0319 12:03:16.505923 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" event={"ID":"91112ce6-4f9d-44c1-a4e7-fea126554bcf","Type":"ContainerDied","Data":"fc66004bdf7840ad3f084c0dfa71eeb2520e8e4a081e3e6ac34bc77b6fbd71ea"} Mar 19 12:03:16.505970 master-0 kubenswrapper[7454]: I0319 12:03:16.505954 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" event={"ID":"91112ce6-4f9d-44c1-a4e7-fea126554bcf","Type":"ContainerStarted","Data":"5204ec6a181aadcc019743971b04d16299507e076f3ad2bde88b1a3554a20992"} Mar 19 12:03:16.506430 master-0 kubenswrapper[7454]: I0319 12:03:16.506053 7454 scope.go:117] "RemoveContainer" containerID="5b0f04d22c0c85eb93a91a7347f66800de8887e62876b70685d642e80dd0f769" Mar 19 12:03:16.657566 master-0 kubenswrapper[7454]: I0319 12:03:16.657496 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 12:03:16.660695 master-0 kubenswrapper[7454]: I0319 12:03:16.660621 7454 patch_prober.go:28] 
interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:03:16.660695 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:03:16.660695 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:03:16.660695 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:03:16.661094 master-0 kubenswrapper[7454]: I0319 12:03:16.660729 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:03:17.657479 master-0 kubenswrapper[7454]: I0319 12:03:17.657399 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 12:03:17.660877 master-0 kubenswrapper[7454]: I0319 12:03:17.660789 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:03:17.660877 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:03:17.660877 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:03:17.660877 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:03:17.661026 master-0 kubenswrapper[7454]: I0319 12:03:17.660890 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:03:18.660161 master-0 kubenswrapper[7454]: I0319 12:03:18.660086 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:03:18.660161 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:03:18.660161 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:03:18.660161 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:03:18.660723 master-0 kubenswrapper[7454]: I0319 12:03:18.660159 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:03:19.617752 master-0 kubenswrapper[7454]: I0319 12:03:19.617611 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:03:19.660231 master-0 kubenswrapper[7454]: I0319 12:03:19.660124 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:03:19.660231 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:03:19.660231 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:03:19.660231 master-0 kubenswrapper[7454]: healthz check failed Mar 
19 12:03:19.661211 master-0 kubenswrapper[7454]: I0319 12:03:19.660230 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:03:20.660595 master-0 kubenswrapper[7454]: I0319 12:03:20.660484 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:03:20.660595 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:03:20.660595 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:03:20.660595 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:03:20.661488 master-0 kubenswrapper[7454]: I0319 12:03:20.660596 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:03:21.659841 master-0 kubenswrapper[7454]: I0319 12:03:21.659730 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:03:21.659841 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:03:21.659841 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:03:21.659841 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:03:21.660514 master-0 kubenswrapper[7454]: I0319 12:03:21.660467 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:03:22.661071 master-0 kubenswrapper[7454]: I0319 12:03:22.660955 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:03:22.661071 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:03:22.661071 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:03:22.661071 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:03:22.661947 master-0 kubenswrapper[7454]: I0319 12:03:22.661073 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:03:23.659259 master-0 kubenswrapper[7454]: I0319 12:03:23.659158 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:03:23.659259 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:03:23.659259 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:03:23.659259 master-0 kubenswrapper[7454]: healthz check 
failed Mar 19 12:03:23.659687 master-0 kubenswrapper[7454]: I0319 12:03:23.659269 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:03:24.659697 master-0 kubenswrapper[7454]: I0319 12:03:24.659622 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:03:24.659697 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:03:24.659697 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:03:24.659697 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:03:24.660753 master-0 kubenswrapper[7454]: I0319 12:03:24.659707 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:03:25.415757 master-0 kubenswrapper[7454]: E0319 12:03:25.415678 7454 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 12:03:25.659642 master-0 kubenswrapper[7454]: I0319 12:03:25.659563 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:03:25.659642 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:03:25.659642 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:03:25.659642 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:03:25.660208 master-0 kubenswrapper[7454]: I0319 12:03:25.659676 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:03:26.192987 master-0 kubenswrapper[7454]: E0319 12:03:26.192762 7454 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 12:03:26.192987 master-0 kubenswrapper[7454]: I0319 12:03:26.192956 7454 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 19 12:03:26.664386 master-0 kubenswrapper[7454]: I0319 12:03:26.664326 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:03:26.664386 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:03:26.664386 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:03:26.664386 master-0 kubenswrapper[7454]: 
healthz check failed Mar 19 12:03:26.664968 master-0 kubenswrapper[7454]: I0319 12:03:26.664424 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:03:27.658500 master-0 kubenswrapper[7454]: I0319 12:03:27.658421 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:03:27.658500 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:03:27.658500 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:03:27.658500 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:03:27.658500 master-0 kubenswrapper[7454]: I0319 12:03:27.658505 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:03:28.660095 master-0 kubenswrapper[7454]: I0319 12:03:28.660010 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:03:28.660095 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:03:28.660095 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:03:28.660095 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:03:28.661384 master-0 kubenswrapper[7454]: I0319 12:03:28.660111 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:03:29.659746 master-0 kubenswrapper[7454]: I0319 12:03:29.659657 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:03:29.659746 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:03:29.659746 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:03:29.659746 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:03:29.660961 master-0 kubenswrapper[7454]: I0319 12:03:29.659759 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:03:30.724527 master-0 kubenswrapper[7454]: I0319 12:03:30.724452 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:03:30.724527 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:03:30.724527 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:03:30.724527 master-0 
kubenswrapper[7454]: healthz check failed Mar 19 12:03:30.725522 master-0 kubenswrapper[7454]: I0319 12:03:30.724533 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:03:31.660786 master-0 kubenswrapper[7454]: I0319 12:03:31.660686 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:03:31.660786 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:03:31.660786 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:03:31.660786 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:03:31.661540 master-0 kubenswrapper[7454]: I0319 12:03:31.660829 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:03:32.652178 master-0 kubenswrapper[7454]: I0319 12:03:32.652121 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-wd4nx_8414b6b0-ee16-47a5-982b-ee58b136cfcf/approver/1.log" Mar 19 12:03:32.653216 master-0 kubenswrapper[7454]: I0319 12:03:32.652954 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-wd4nx_8414b6b0-ee16-47a5-982b-ee58b136cfcf/approver/0.log" Mar 19 12:03:32.653573 master-0 kubenswrapper[7454]: I0319 12:03:32.653496 7454 generic.go:334] "Generic (PLEG): container finished" podID="8414b6b0-ee16-47a5-982b-ee58b136cfcf" containerID="10c6078f6bb7ab73c8304b00dbc345f2f9442775840c07f5fbb58265a93f7893" exitCode=1 Mar 19 12:03:32.653573 master-0 kubenswrapper[7454]: I0319 12:03:32.653545 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-wd4nx" event={"ID":"8414b6b0-ee16-47a5-982b-ee58b136cfcf","Type":"ContainerDied","Data":"10c6078f6bb7ab73c8304b00dbc345f2f9442775840c07f5fbb58265a93f7893"} Mar 19 12:03:32.653773 master-0 kubenswrapper[7454]: I0319 12:03:32.653649 7454 scope.go:117] "RemoveContainer" containerID="acd01abcc3b9701b51c684ecc460502246e3fa79a2f3e8b56cc2aec4e47bef9f" Mar 19 12:03:32.654553 master-0 kubenswrapper[7454]: I0319 12:03:32.654497 7454 scope.go:117] "RemoveContainer" containerID="10c6078f6bb7ab73c8304b00dbc345f2f9442775840c07f5fbb58265a93f7893" Mar 19 12:03:32.654863 master-0 kubenswrapper[7454]: E0319 12:03:32.654828 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"approver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=approver pod=network-node-identity-wd4nx_openshift-network-node-identity(8414b6b0-ee16-47a5-982b-ee58b136cfcf)\"" pod="openshift-network-node-identity/network-node-identity-wd4nx" podUID="8414b6b0-ee16-47a5-982b-ee58b136cfcf" Mar 19 12:03:32.660092 master-0 kubenswrapper[7454]: I0319 12:03:32.660028 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld 
Mar 19 12:03:32.660092 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:03:32.660092 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:03:32.660092 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:03:32.660546 master-0 kubenswrapper[7454]: I0319 12:03:32.660126 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:03:33.659975 master-0 kubenswrapper[7454]: I0319 12:03:33.659896 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:03:33.659975 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:03:33.659975 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:03:33.659975 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:03:33.661173 master-0 kubenswrapper[7454]: I0319 12:03:33.659983 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:03:33.665864 master-0 kubenswrapper[7454]: I0319 12:03:33.665736 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-btppx_b80027fd-7b39-477a-a337-ff9bb08e7eeb/ingress-operator/3.log"
Mar 19 12:03:33.667021 master-0 kubenswrapper[7454]: I0319 12:03:33.666903 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-btppx_b80027fd-7b39-477a-a337-ff9bb08e7eeb/ingress-operator/2.log"
Mar 19 12:03:33.667630 master-0 kubenswrapper[7454]: I0319 12:03:33.667563 7454 generic.go:334] "Generic (PLEG): container finished" podID="b80027fd-7b39-477a-a337-ff9bb08e7eeb" containerID="0618d6d0445d7e095cd15b094fe882be49fcec49db027db4fe7de076025a2a7e" exitCode=1
Mar 19 12:03:33.667731 master-0 kubenswrapper[7454]: I0319 12:03:33.667626 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" event={"ID":"b80027fd-7b39-477a-a337-ff9bb08e7eeb","Type":"ContainerDied","Data":"0618d6d0445d7e095cd15b094fe882be49fcec49db027db4fe7de076025a2a7e"}
Mar 19 12:03:33.667860 master-0 kubenswrapper[7454]: I0319 12:03:33.667722 7454 scope.go:117] "RemoveContainer" containerID="e8132683509c67a65f018a1049a40400831c5e5aafa7f685a1489681ff42e257"
Mar 19 12:03:33.668843 master-0 kubenswrapper[7454]: I0319 12:03:33.668717 7454 scope.go:117] "RemoveContainer" containerID="0618d6d0445d7e095cd15b094fe882be49fcec49db027db4fe7de076025a2a7e"
Mar 19 12:03:33.669457 master-0 kubenswrapper[7454]: E0319 12:03:33.669378 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-btppx_openshift-ingress-operator(b80027fd-7b39-477a-a337-ff9bb08e7eeb)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" podUID="b80027fd-7b39-477a-a337-ff9bb08e7eeb"
Mar 19 12:03:33.671097 master-0 kubenswrapper[7454]: I0319 12:03:33.671047 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-wd4nx_8414b6b0-ee16-47a5-982b-ee58b136cfcf/approver/1.log"
Mar 19 12:03:34.660153 master-0 kubenswrapper[7454]: I0319 12:03:34.660080 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:03:34.660153 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:03:34.660153 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:03:34.660153 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:03:34.661154 master-0 kubenswrapper[7454]: I0319 12:03:34.660165 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:03:34.682684 master-0 kubenswrapper[7454]: I0319 12:03:34.682590 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-btppx_b80027fd-7b39-477a-a337-ff9bb08e7eeb/ingress-operator/3.log"
Mar 19 12:03:35.416963 master-0 kubenswrapper[7454]: E0319 12:03:35.416861 7454 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 19 12:03:35.660376 master-0 kubenswrapper[7454]: I0319 12:03:35.660276 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:03:35.660376 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:03:35.660376 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:03:35.660376 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:03:35.661301 master-0 kubenswrapper[7454]: I0319 12:03:35.660389 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:03:36.194346 master-0 kubenswrapper[7454]: E0319 12:03:36.194224 7454 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms"
Mar 19 12:03:36.650768 master-0 kubenswrapper[7454]: I0319 12:03:36.650674 7454 status_manager.go:851] "Failed to get status for pod" podUID="ed7034eee202d25f8fdd5bf58084d919" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)"
Mar 19 12:03:36.659493 master-0 kubenswrapper[7454]: I0319 12:03:36.659410 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:03:36.659493 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:03:36.659493 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:03:36.659493 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:03:36.659937 master-0 kubenswrapper[7454]: I0319 12:03:36.659509 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:03:37.659974 master-0 kubenswrapper[7454]: I0319 12:03:37.659882 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:03:37.659974 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:03:37.659974 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:03:37.659974 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:03:37.659974 master-0 kubenswrapper[7454]: I0319 12:03:37.659976 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:03:38.660247 master-0 kubenswrapper[7454]: I0319 12:03:38.660144 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:03:38.660247 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:03:38.660247 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:03:38.660247 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:03:38.660247 master-0 kubenswrapper[7454]: I0319 12:03:38.660224 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:03:39.660582 master-0 kubenswrapper[7454]: I0319 12:03:39.660485 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:03:39.660582 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:03:39.660582 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:03:39.660582 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:03:39.660582 master-0 kubenswrapper[7454]: I0319 12:03:39.660581 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:03:40.659302 master-0 kubenswrapper[7454]: I0319 12:03:40.659189 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:03:40.659302 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:03:40.659302 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:03:40.659302 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:03:40.660021 master-0 kubenswrapper[7454]: I0319 12:03:40.659949 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:03:41.660328 master-0 kubenswrapper[7454]: I0319 12:03:41.660254 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:03:41.660328 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:03:41.660328 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:03:41.660328 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:03:41.661304 master-0 kubenswrapper[7454]: I0319 12:03:41.660343 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:03:42.660165 master-0 kubenswrapper[7454]: I0319 12:03:42.660109 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:03:42.660165 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:03:42.660165 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:03:42.660165 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:03:42.661288 master-0 kubenswrapper[7454]: I0319 12:03:42.660175 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:03:43.659907 master-0 kubenswrapper[7454]: I0319 12:03:43.659772 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:03:43.659907 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:03:43.659907 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:03:43.659907 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:03:43.659907 master-0 kubenswrapper[7454]: I0319 12:03:43.659880 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:03:44.634095 master-0 kubenswrapper[7454]: I0319 12:03:44.633975 7454 scope.go:117] "RemoveContainer" containerID="0618d6d0445d7e095cd15b094fe882be49fcec49db027db4fe7de076025a2a7e"
Mar 19 12:03:44.634553 master-0 kubenswrapper[7454]: E0319 12:03:44.634512 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-btppx_openshift-ingress-operator(b80027fd-7b39-477a-a337-ff9bb08e7eeb)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" podUID="b80027fd-7b39-477a-a337-ff9bb08e7eeb"
Mar 19 12:03:44.660745 master-0 kubenswrapper[7454]: I0319 12:03:44.660651 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:03:44.660745 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:03:44.660745 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:03:44.660745 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:03:44.661534 master-0 kubenswrapper[7454]: I0319 12:03:44.660750 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:03:45.417650 master-0 kubenswrapper[7454]: E0319 12:03:45.417579 7454 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 19 12:03:45.659450 master-0 kubenswrapper[7454]: I0319 12:03:45.659383 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:03:45.659450 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:03:45.659450 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:03:45.659450 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:03:45.659851 master-0 kubenswrapper[7454]: I0319 12:03:45.659465 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:03:46.395350 master-0 kubenswrapper[7454]: E0319 12:03:46.395278 7454 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
Mar 19 12:03:46.634760 master-0 kubenswrapper[7454]: I0319 12:03:46.634701 7454 scope.go:117] "RemoveContainer" containerID="10c6078f6bb7ab73c8304b00dbc345f2f9442775840c07f5fbb58265a93f7893"
Mar 19 12:03:46.660330 master-0 kubenswrapper[7454]: I0319 12:03:46.660152 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:03:46.660330 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:03:46.660330 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:03:46.660330 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:03:46.660330 master-0 kubenswrapper[7454]: I0319 12:03:46.660241 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:03:46.777825 master-0 kubenswrapper[7454]: I0319 12:03:46.777775 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-wd4nx_8414b6b0-ee16-47a5-982b-ee58b136cfcf/approver/1.log"
Mar 19 12:03:46.778166 master-0 kubenswrapper[7454]: I0319 12:03:46.778124 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-wd4nx" event={"ID":"8414b6b0-ee16-47a5-982b-ee58b136cfcf","Type":"ContainerStarted","Data":"0af559ff215b3836c17e33350cee662d5d93834577399d202b878e501ec4e72f"}
Mar 19 12:03:47.658528 master-0 kubenswrapper[7454]: I0319 12:03:47.658440 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:03:47.658528 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:03:47.658528 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:03:47.658528 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:03:47.658528 master-0 kubenswrapper[7454]: I0319 12:03:47.658506 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:03:48.659499 master-0 kubenswrapper[7454]: I0319 12:03:48.659438 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:03:48.659499 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:03:48.659499 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:03:48.659499 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:03:48.660219 master-0 kubenswrapper[7454]: I0319 12:03:48.659510 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:03:48.662767 master-0 kubenswrapper[7454]: E0319 12:03:48.662729 7454 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 19 12:03:48.663465 master-0 kubenswrapper[7454]: I0319 12:03:48.663450 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 19 12:03:48.689098 master-0 kubenswrapper[7454]: W0319 12:03:48.689059 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod094204df314fe45bd5af12ca1b4622bb.slice/crio-2ea49210674ab53911da00e8c007432ee001baf1726a3c4349603d4b14736471 WatchSource:0}: Error finding container 2ea49210674ab53911da00e8c007432ee001baf1726a3c4349603d4b14736471: Status 404 returned error can't find the container with id 2ea49210674ab53911da00e8c007432ee001baf1726a3c4349603d4b14736471
Mar 19 12:03:48.798698 master-0 kubenswrapper[7454]: I0319 12:03:48.798595 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"2ea49210674ab53911da00e8c007432ee001baf1726a3c4349603d4b14736471"}
Mar 19 12:03:49.659968 master-0 kubenswrapper[7454]: I0319 12:03:49.659888 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:03:49.659968 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:03:49.659968 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:03:49.659968 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:03:49.661083 master-0 kubenswrapper[7454]: I0319 12:03:49.659986 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:03:49.810475 master-0 kubenswrapper[7454]: I0319 12:03:49.810384 7454 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="0ee632c730d638e023a5c04cff8a8c19cb288483cbace4dc6c5c42638a2423e0" exitCode=0
Mar 19 12:03:49.810475 master-0 kubenswrapper[7454]: I0319 12:03:49.810457 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"0ee632c730d638e023a5c04cff8a8c19cb288483cbace4dc6c5c42638a2423e0"}
Mar 19 12:03:49.810942 master-0 kubenswrapper[7454]: I0319 12:03:49.810913 7454 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="bb9699aa-8885-49ec-a3b3-8c199d95bbf9"
Mar 19 12:03:49.810942 master-0 kubenswrapper[7454]: I0319 12:03:49.810937 7454 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="bb9699aa-8885-49ec-a3b3-8c199d95bbf9"
Mar 19 12:03:50.659085 master-0 kubenswrapper[7454]: I0319 12:03:50.659021 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:03:50.659085 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:03:50.659085 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:03:50.659085 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:03:50.659460 master-0 kubenswrapper[7454]: I0319 12:03:50.659109 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:03:51.661278 master-0 kubenswrapper[7454]: I0319 12:03:51.661209 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:03:51.661278 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:03:51.661278 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:03:51.661278 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:03:51.662050 master-0 kubenswrapper[7454]: I0319 12:03:51.661292 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:03:52.660669 master-0 kubenswrapper[7454]: I0319 12:03:52.660574 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:03:52.660669 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:03:52.660669 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:03:52.660669 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:03:52.661173 master-0 kubenswrapper[7454]: I0319 12:03:52.660684 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:03:53.660079 master-0 kubenswrapper[7454]: I0319 12:03:53.659997 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:03:53.660079 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:03:53.660079 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:03:53.660079 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:03:53.661060 master-0 kubenswrapper[7454]: I0319 12:03:53.660093 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:03:54.660132 master-0 kubenswrapper[7454]: I0319 12:03:54.660061 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:03:54.660132 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:03:54.660132 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:03:54.660132 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:03:54.661306 master-0 kubenswrapper[7454]: I0319 12:03:54.660199 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:03:55.418695 master-0 kubenswrapper[7454]: E0319 12:03:55.418546 7454 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 12:03:55.419331 master-0 kubenswrapper[7454]: E0319 12:03:55.419194 7454 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 19 12:03:55.660057 master-0 kubenswrapper[7454]: I0319 12:03:55.659983 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:03:55.660057 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:03:55.660057 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:03:55.660057 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:03:55.661338 master-0 kubenswrapper[7454]: I0319 12:03:55.660076 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:03:56.660081 master-0 kubenswrapper[7454]: I0319 12:03:56.659979 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:03:56.660081 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:03:56.660081 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:03:56.660081 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:03:56.661063 master-0 kubenswrapper[7454]: I0319 12:03:56.660091 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:03:56.796537 master-0 kubenswrapper[7454]: E0319 12:03:56.796437 7454 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Mar 19 12:03:57.660258 master-0 kubenswrapper[7454]: I0319 12:03:57.660171 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:03:57.660258 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:03:57.660258 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:03:57.660258 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:03:57.661441 master-0 kubenswrapper[7454]: I0319 12:03:57.660269 7454 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:03:58.634276 master-0 kubenswrapper[7454]: I0319 12:03:58.634176 7454 scope.go:117] "RemoveContainer" containerID="0618d6d0445d7e095cd15b094fe882be49fcec49db027db4fe7de076025a2a7e" Mar 19 12:03:58.634602 master-0 kubenswrapper[7454]: E0319 12:03:58.634541 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-btppx_openshift-ingress-operator(b80027fd-7b39-477a-a337-ff9bb08e7eeb)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" podUID="b80027fd-7b39-477a-a337-ff9bb08e7eeb" Mar 19 12:03:58.659701 master-0 kubenswrapper[7454]: I0319 12:03:58.659615 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:03:58.659701 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:03:58.659701 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:03:58.659701 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:03:58.660104 master-0 kubenswrapper[7454]: I0319 12:03:58.659740 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:03:59.660683 master-0 kubenswrapper[7454]: I0319 12:03:59.660612 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:03:59.660683 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:03:59.660683 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:03:59.660683 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:03:59.661735 master-0 kubenswrapper[7454]: I0319 12:03:59.660710 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:00.659592 master-0 kubenswrapper[7454]: I0319 12:04:00.659523 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:00.659592 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:00.659592 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:00.659592 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:00.660272 master-0 kubenswrapper[7454]: I0319 12:04:00.659613 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Mar 19 12:04:01.660738 master-0 kubenswrapper[7454]: I0319 12:04:01.660636 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:01.660738 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:01.660738 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:01.660738 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:01.662001 master-0 kubenswrapper[7454]: I0319 12:04:01.660750 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:02.660623 master-0 kubenswrapper[7454]: I0319 12:04:02.660522 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:02.660623 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:02.660623 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:02.660623 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:02.660623 master-0 kubenswrapper[7454]: I0319 12:04:02.660618 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:03.660534 master-0 kubenswrapper[7454]: I0319 12:04:03.660462 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:03.660534 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:03.660534 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:03.660534 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:03.661696 master-0 kubenswrapper[7454]: I0319 12:04:03.660553 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:04.659424 master-0 kubenswrapper[7454]: I0319 12:04:04.659355 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:04.659424 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:04.659424 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:04.659424 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:04.659929 master-0 kubenswrapper[7454]: I0319 12:04:04.659448 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed 
with statuscode: 500" Mar 19 12:04:05.659832 master-0 kubenswrapper[7454]: I0319 12:04:05.659746 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:05.659832 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:05.659832 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:05.659832 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:05.660941 master-0 kubenswrapper[7454]: I0319 12:04:05.659862 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:06.657831 master-0 kubenswrapper[7454]: E0319 12:04:06.657630 7454 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{network-node-identity-wd4nx.189e3c7d6e9ab92a openshift-network-node-identity 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-network-node-identity,Name:network-node-identity-wd4nx,UID:8414b6b0-ee16-47a5-982b-ee58b136cfcf,APIVersion:v1,ResourceVersion:3425,FieldPath:spec.containers{approver},},Reason:BackOff,Message:Back-off restarting failed container approver in pod network-node-identity-wd4nx_openshift-network-node-identity(8414b6b0-ee16-47a5-982b-ee58b136cfcf),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 12:03:32.654766378 +0000 UTC m=+582.285232301,LastTimestamp:2026-03-19 12:03:32.654766378 +0000 UTC m=+582.285232301,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 12:04:06.660052 master-0 kubenswrapper[7454]: I0319 12:04:06.660000 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:06.660052 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:06.660052 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:06.660052 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:06.661009 master-0 kubenswrapper[7454]: I0319 12:04:06.660072 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:07.597505 master-0 kubenswrapper[7454]: E0319 12:04:07.597355 7454 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Mar 19 12:04:07.660019 master-0 kubenswrapper[7454]: I0319 12:04:07.659946 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:07.660019 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:07.660019 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:07.660019 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:07.660601 master-0 kubenswrapper[7454]: I0319 12:04:07.660047 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:08.660396 master-0 kubenswrapper[7454]: I0319 12:04:08.660286 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:08.660396 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:08.660396 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:08.660396 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:08.661792 master-0 kubenswrapper[7454]: I0319 12:04:08.660432 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:09.660392 master-0 kubenswrapper[7454]: I0319 12:04:09.660313 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:09.660392 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:09.660392 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:09.660392 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:09.660392 master-0 kubenswrapper[7454]: I0319 12:04:09.660390 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:10.661189 master-0 kubenswrapper[7454]: I0319 12:04:10.660978 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:10.661189 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:10.661189 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:10.661189 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:10.661189 master-0 kubenswrapper[7454]: I0319 12:04:10.661090 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:11.659776 master-0 kubenswrapper[7454]: I0319 12:04:11.659692 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP 
probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:11.659776 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:11.659776 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:11.659776 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:11.660303 master-0 kubenswrapper[7454]: I0319 12:04:11.659866 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:12.660920 master-0 kubenswrapper[7454]: I0319 12:04:12.660776 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:12.660920 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:12.660920 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:12.660920 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:12.662317 master-0 kubenswrapper[7454]: I0319 12:04:12.660921 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:13.633666 master-0 kubenswrapper[7454]: I0319 12:04:13.633572 7454 scope.go:117] "RemoveContainer" containerID="0618d6d0445d7e095cd15b094fe882be49fcec49db027db4fe7de076025a2a7e" Mar 19 12:04:13.661239 master-0 kubenswrapper[7454]: I0319 12:04:13.661016 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:13.661239 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:13.661239 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:13.661239 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:13.661239 master-0 kubenswrapper[7454]: I0319 12:04:13.661133 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:14.010502 master-0 kubenswrapper[7454]: I0319 12:04:14.010403 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-btppx_b80027fd-7b39-477a-a337-ff9bb08e7eeb/ingress-operator/3.log" Mar 19 12:04:14.011208 master-0 kubenswrapper[7454]: I0319 12:04:14.011127 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" event={"ID":"b80027fd-7b39-477a-a337-ff9bb08e7eeb","Type":"ContainerStarted","Data":"b9013cf33c6b53af293ae5f76c1dea25442713e290f755f0ed35851ad4f7ec4d"} Mar 19 12:04:14.660115 master-0 kubenswrapper[7454]: I0319 12:04:14.660015 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Mar 19 12:04:14.660115 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:14.660115 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:14.660115 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:14.660115 master-0 kubenswrapper[7454]: I0319 12:04:14.660108 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:15.659411 master-0 kubenswrapper[7454]: I0319 12:04:15.659285 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:15.659411 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:15.659411 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:15.659411 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:15.659411 master-0 kubenswrapper[7454]: I0319 12:04:15.659392 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:16.660200 master-0 kubenswrapper[7454]: I0319 12:04:16.660109 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:16.660200 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:16.660200 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:16.660200 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:16.661166 master-0 kubenswrapper[7454]: I0319 12:04:16.660228 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:17.660638 master-0 kubenswrapper[7454]: I0319 12:04:17.660534 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:17.660638 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:17.660638 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:17.660638 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:17.660638 master-0 kubenswrapper[7454]: I0319 12:04:17.660648 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:18.659342 master-0 kubenswrapper[7454]: I0319 12:04:18.659240 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: 
reason withheld Mar 19 12:04:18.659342 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:18.659342 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:18.659342 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:18.659342 master-0 kubenswrapper[7454]: I0319 12:04:18.659322 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:19.199593 master-0 kubenswrapper[7454]: E0319 12:04:19.199455 7454 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Mar 19 12:04:19.659724 master-0 kubenswrapper[7454]: I0319 12:04:19.659663 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:19.659724 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:19.659724 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:19.659724 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:19.659724 master-0 kubenswrapper[7454]: I0319 12:04:19.659743 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:20.659258 master-0 kubenswrapper[7454]: I0319 12:04:20.659156 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:20.659258 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:20.659258 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:20.659258 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:20.660311 master-0 kubenswrapper[7454]: I0319 12:04:20.659261 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:21.660057 master-0 kubenswrapper[7454]: I0319 12:04:21.659986 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:21.660057 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:21.660057 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:21.660057 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:21.661062 master-0 kubenswrapper[7454]: I0319 12:04:21.660078 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:22.660111 master-0 kubenswrapper[7454]: I0319 12:04:22.660028 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:22.660111 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:22.660111 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:22.660111 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:22.660964 master-0 kubenswrapper[7454]: I0319 12:04:22.660127 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:23.660306 master-0 kubenswrapper[7454]: I0319 12:04:23.660225 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:23.660306 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:23.660306 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:23.660306 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:23.661467 master-0 kubenswrapper[7454]: I0319 12:04:23.660324 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:23.814512 master-0 kubenswrapper[7454]: E0319 12:04:23.814394 7454 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 19 12:04:24.099349 master-0 kubenswrapper[7454]: I0319 12:04:24.099270 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-84gh4_ee3529ac-6135-438b-9334-40c63c1fbd3d/cluster-cloud-controller-manager/0.log" Mar 19 12:04:24.099730 master-0 kubenswrapper[7454]: I0319 12:04:24.099380 7454 generic.go:334] "Generic (PLEG): container finished" podID="ee3529ac-6135-438b-9334-40c63c1fbd3d" containerID="10c6568199a7e8563a8238a4394e2eb6a83f98ca431cdeed29a3dfc7601564fd" exitCode=1 Mar 19 12:04:24.099730 master-0 kubenswrapper[7454]: I0319 12:04:24.099444 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" event={"ID":"ee3529ac-6135-438b-9334-40c63c1fbd3d","Type":"ContainerDied","Data":"10c6568199a7e8563a8238a4394e2eb6a83f98ca431cdeed29a3dfc7601564fd"} Mar 19 12:04:24.100409 master-0 kubenswrapper[7454]: I0319 12:04:24.100350 7454 scope.go:117] "RemoveContainer" containerID="10c6568199a7e8563a8238a4394e2eb6a83f98ca431cdeed29a3dfc7601564fd" Mar 19 12:04:24.660718 master-0 kubenswrapper[7454]: I0319 12:04:24.660521 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:24.660718 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:24.660718 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:24.660718 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:24.660718 master-0 kubenswrapper[7454]: I0319 12:04:24.660671 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:25.136285 master-0 kubenswrapper[7454]: I0319 12:04:25.136187 7454 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="3feac3c251ff91bcd1b3442311df2d939efe2cd53ade12c46efdb03023c1d996" exitCode=0 Mar 19 12:04:25.136624 master-0 kubenswrapper[7454]: I0319 12:04:25.136311 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"3feac3c251ff91bcd1b3442311df2d939efe2cd53ade12c46efdb03023c1d996"} Mar 19 12:04:25.136899 master-0 kubenswrapper[7454]: I0319 12:04:25.136766 7454 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="bb9699aa-8885-49ec-a3b3-8c199d95bbf9" Mar 19 12:04:25.136899 master-0 kubenswrapper[7454]: I0319 12:04:25.136893 7454 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="bb9699aa-8885-49ec-a3b3-8c199d95bbf9" Mar 19 12:04:25.140897 master-0 kubenswrapper[7454]: I0319 12:04:25.140789 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-84gh4_ee3529ac-6135-438b-9334-40c63c1fbd3d/cluster-cloud-controller-manager/0.log" Mar 19 12:04:25.141041 master-0 kubenswrapper[7454]: I0319 12:04:25.140919 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" event={"ID":"ee3529ac-6135-438b-9334-40c63c1fbd3d","Type":"ContainerStarted","Data":"b3950d7f75639fc259e72f266a2c2e281f4697ca2b26e47aaba43a45da1f2320"} Mar 19 12:04:25.660318 master-0 kubenswrapper[7454]: I0319 12:04:25.660266 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:25.660318 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:25.660318 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:25.660318 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:25.660788 master-0 kubenswrapper[7454]: I0319 12:04:25.660752 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:26.660365 master-0 kubenswrapper[7454]: I0319 12:04:26.660111 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:26.660365 
master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:26.660365 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:26.660365 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:26.660365 master-0 kubenswrapper[7454]: I0319 12:04:26.660216 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:27.659764 master-0 kubenswrapper[7454]: I0319 12:04:27.659683 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:27.659764 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:27.659764 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:27.659764 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:27.660266 master-0 kubenswrapper[7454]: I0319 12:04:27.659769 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:28.660008 master-0 kubenswrapper[7454]: I0319 12:04:28.659915 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:28.660008 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:28.660008 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:28.660008 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:28.661056 master-0 kubenswrapper[7454]: I0319 12:04:28.660054 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:29.659590 master-0 kubenswrapper[7454]: I0319 12:04:29.659510 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:29.659590 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:29.659590 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:29.659590 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:29.660745 master-0 kubenswrapper[7454]: I0319 12:04:29.659610 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:30.660503 master-0 kubenswrapper[7454]: I0319 12:04:30.660449 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 
12:04:30.660503 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:30.660503 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:30.660503 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:30.661444 master-0 kubenswrapper[7454]: I0319 12:04:30.660529 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:31.188433 master-0 kubenswrapper[7454]: I0319 12:04:31.188395 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-6m654_944eac68-e72b-4aed-b5dc-d7d9703178a3/snapshot-controller/1.log" Mar 19 12:04:31.189601 master-0 kubenswrapper[7454]: I0319 12:04:31.189549 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-6m654_944eac68-e72b-4aed-b5dc-d7d9703178a3/snapshot-controller/0.log" Mar 19 12:04:31.189730 master-0 kubenswrapper[7454]: I0319 12:04:31.189629 7454 generic.go:334] "Generic (PLEG): container finished" podID="944eac68-e72b-4aed-b5dc-d7d9703178a3" containerID="7b0aee976f8444b82e3c4d17e235fff6c9975468ebf15542296951ae3166eacc" exitCode=1 Mar 19 12:04:31.189857 master-0 kubenswrapper[7454]: I0319 12:04:31.189747 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" event={"ID":"944eac68-e72b-4aed-b5dc-d7d9703178a3","Type":"ContainerDied","Data":"7b0aee976f8444b82e3c4d17e235fff6c9975468ebf15542296951ae3166eacc"} Mar 19 12:04:31.189857 master-0 kubenswrapper[7454]: I0319 12:04:31.189860 7454 scope.go:117] "RemoveContainer" containerID="bdf696c39db6c9beaa009fbd69e576a7d8040c99b8de9bd67204a49a32f0a1ba" Mar 19 12:04:31.192760 master-0 kubenswrapper[7454]: I0319 12:04:31.192707 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-9mpxd_5238840f-3bef-43ad-ae68-ac187f073019/manager/1.log" Mar 19 12:04:31.192918 master-0 kubenswrapper[7454]: I0319 12:04:31.192751 7454 scope.go:117] "RemoveContainer" containerID="7b0aee976f8444b82e3c4d17e235fff6c9975468ebf15542296951ae3166eacc" Mar 19 12:04:31.195281 master-0 kubenswrapper[7454]: I0319 12:04:31.195225 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-9mpxd_5238840f-3bef-43ad-ae68-ac187f073019/manager/0.log" Mar 19 12:04:31.195413 master-0 kubenswrapper[7454]: I0319 12:04:31.195307 7454 generic.go:334] "Generic (PLEG): container finished" podID="5238840f-3bef-43ad-ae68-ac187f073019" containerID="80a4b06853370526b35bd2b1f042248803efc6dea62506012de0886df3162aa5" exitCode=1 Mar 19 12:04:31.195413 master-0 kubenswrapper[7454]: I0319 12:04:31.195354 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" event={"ID":"5238840f-3bef-43ad-ae68-ac187f073019","Type":"ContainerDied","Data":"80a4b06853370526b35bd2b1f042248803efc6dea62506012de0886df3162aa5"} Mar 19 12:04:31.196276 master-0 kubenswrapper[7454]: I0319 12:04:31.196226 7454 scope.go:117] "RemoveContainer" containerID="80a4b06853370526b35bd2b1f042248803efc6dea62506012de0886df3162aa5" Mar 19 12:04:31.196727 master-0 
kubenswrapper[7454]: E0319 12:04:31.196674 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=operator-controller-controller-manager-57777556ff-9mpxd_openshift-operator-controller(5238840f-3bef-43ad-ae68-ac187f073019)\"" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" podUID="5238840f-3bef-43ad-ae68-ac187f073019" Mar 19 12:04:31.197053 master-0 kubenswrapper[7454]: E0319 12:04:31.196947 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-6m654_openshift-cluster-storage-operator(944eac68-e72b-4aed-b5dc-d7d9703178a3)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" podUID="944eac68-e72b-4aed-b5dc-d7d9703178a3" Mar 19 12:04:31.215921 master-0 kubenswrapper[7454]: I0319 12:04:31.215861 7454 scope.go:117] "RemoveContainer" containerID="387948abcb2cbae673b88cb3d7a8d043f5ef4d37ef318a38ca6b5a6a836dff73" Mar 19 12:04:31.660733 master-0 kubenswrapper[7454]: I0319 12:04:31.660569 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:31.660733 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:31.660733 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:31.660733 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:31.660733 master-0 kubenswrapper[7454]: I0319 12:04:31.660651 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:32.208324 master-0 kubenswrapper[7454]: I0319 12:04:32.208254 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-6m654_944eac68-e72b-4aed-b5dc-d7d9703178a3/snapshot-controller/1.log" Mar 19 12:04:32.211816 master-0 kubenswrapper[7454]: I0319 12:04:32.211712 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-9mpxd_5238840f-3bef-43ad-ae68-ac187f073019/manager/1.log" Mar 19 12:04:32.401046 master-0 kubenswrapper[7454]: E0319 12:04:32.400911 7454 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Mar 19 12:04:32.660143 master-0 kubenswrapper[7454]: I0319 12:04:32.660062 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:32.660143 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:32.660143 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:32.660143 master-0 
kubenswrapper[7454]: healthz check failed Mar 19 12:04:32.660706 master-0 kubenswrapper[7454]: I0319 12:04:32.660142 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:33.660025 master-0 kubenswrapper[7454]: I0319 12:04:33.659958 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:33.660025 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:33.660025 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:33.660025 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:33.661036 master-0 kubenswrapper[7454]: I0319 12:04:33.660054 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:34.660407 master-0 kubenswrapper[7454]: I0319 12:04:34.660217 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:34.660407 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:34.660407 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:34.660407 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:34.660407 master-0 kubenswrapper[7454]: I0319 12:04:34.660348 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:35.237666 master-0 kubenswrapper[7454]: I0319 12:04:35.237568 7454 generic.go:334] "Generic (PLEG): container finished" podID="b0f5939c-48b1-4d6c-9712-9128a78d603b" containerID="3cb3f801dd00591244b19b3ad51ca78e956ed275b4329bac7bcfc1f2f8080cd6" exitCode=0 Mar 19 12:04:35.237666 master-0 kubenswrapper[7454]: I0319 12:04:35.237638 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" event={"ID":"b0f5939c-48b1-4d6c-9712-9128a78d603b","Type":"ContainerDied","Data":"3cb3f801dd00591244b19b3ad51ca78e956ed275b4329bac7bcfc1f2f8080cd6"} Mar 19 12:04:35.238252 master-0 kubenswrapper[7454]: I0319 12:04:35.237734 7454 scope.go:117] "RemoveContainer" containerID="68ef893f247d25c990ee12be4a1311e23963264bd6e324255f2b26ed404f9f6a" Mar 19 12:04:35.238556 master-0 kubenswrapper[7454]: I0319 12:04:35.238499 7454 scope.go:117] "RemoveContainer" containerID="3cb3f801dd00591244b19b3ad51ca78e956ed275b4329bac7bcfc1f2f8080cd6" Mar 19 12:04:35.238982 master-0 kubenswrapper[7454]: E0319 12:04:35.238924 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-89ccd998f-pr7gk_openshift-marketplace(b0f5939c-48b1-4d6c-9712-9128a78d603b)\"" 
pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" podUID="b0f5939c-48b1-4d6c-9712-9128a78d603b" Mar 19 12:04:35.660493 master-0 kubenswrapper[7454]: I0319 12:04:35.660300 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:35.660493 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:35.660493 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:35.660493 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:35.660493 master-0 kubenswrapper[7454]: I0319 12:04:35.660361 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:36.652097 master-0 kubenswrapper[7454]: I0319 12:04:36.652012 7454 status_manager.go:851] "Failed to get status for pod" podUID="ed7034eee202d25f8fdd5bf58084d919" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)" Mar 19 12:04:36.660528 master-0 kubenswrapper[7454]: I0319 12:04:36.660439 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:36.660528 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:36.660528 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:36.660528 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:36.661478 master-0 kubenswrapper[7454]: I0319 12:04:36.660526 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:37.660459 master-0 kubenswrapper[7454]: I0319 12:04:37.660381 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:37.660459 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:37.660459 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:37.660459 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:37.661196 master-0 kubenswrapper[7454]: I0319 12:04:37.660486 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:38.539023 master-0 kubenswrapper[7454]: I0319 12:04:38.538904 7454 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 12:04:38.539023 master-0 kubenswrapper[7454]: I0319 12:04:38.539008 7454 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 12:04:38.539960 master-0 kubenswrapper[7454]: I0319 12:04:38.539844 7454 scope.go:117] "RemoveContainer" containerID="80a4b06853370526b35bd2b1f042248803efc6dea62506012de0886df3162aa5" Mar 19 12:04:38.540402 master-0 kubenswrapper[7454]: E0319 12:04:38.540335 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=operator-controller-controller-manager-57777556ff-9mpxd_openshift-operator-controller(5238840f-3bef-43ad-ae68-ac187f073019)\"" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" podUID="5238840f-3bef-43ad-ae68-ac187f073019" Mar 19 12:04:38.659731 master-0 kubenswrapper[7454]: I0319 12:04:38.659640 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:38.659731 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:38.659731 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:38.659731 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:38.660187 master-0 kubenswrapper[7454]: I0319 12:04:38.659734 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:39.659761 master-0 kubenswrapper[7454]: I0319 12:04:39.659693 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:39.659761 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:39.659761 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:39.659761 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:39.660712 master-0 kubenswrapper[7454]: I0319 12:04:39.659771 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:40.660274 master-0 kubenswrapper[7454]: I0319 12:04:40.660194 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:40.660274 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:40.660274 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:40.660274 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:40.661529 master-0 kubenswrapper[7454]: I0319 12:04:40.660287 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:40.662225 master-0 
kubenswrapper[7454]: E0319 12:04:40.662003 7454 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{ingress-operator-66b84d69b-btppx.189e3c3bf99a2156 openshift-ingress-operator 11469 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ingress-operator,Name:ingress-operator-66b84d69b-btppx,UID:b80027fd-7b39-477a-a337-ff9bb08e7eeb,APIVersion:v1,ResourceVersion:3649,FieldPath:spec.containers{ingress-operator},},Reason:BackOff,Message:Back-off restarting failed container ingress-operator in pod ingress-operator-66b84d69b-btppx_openshift-ingress-operator(b80027fd-7b39-477a-a337-ff9bb08e7eeb),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:58:51 +0000 UTC,LastTimestamp:2026-03-19 12:03:33.669330577 +0000 UTC m=+583.299796520,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 12:04:41.291954 master-0 kubenswrapper[7454]: I0319 12:04:41.291897 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-j2w8z_919daf8d-763a-44bc-8916-86b425a27cbd/manager/1.log" Mar 19 12:04:41.292888 master-0 kubenswrapper[7454]: I0319 12:04:41.292841 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-j2w8z_919daf8d-763a-44bc-8916-86b425a27cbd/manager/0.log" Mar 19 12:04:41.293489 master-0 kubenswrapper[7454]: I0319 12:04:41.293418 7454 generic.go:334] "Generic (PLEG): container finished" podID="919daf8d-763a-44bc-8916-86b425a27cbd" containerID="48baf89d0a5776fb35854b24f12ca1544d0d250398de394c850b09cf7a229ce1" exitCode=1 Mar 19 12:04:41.293623 master-0 kubenswrapper[7454]: I0319 12:04:41.293484 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" event={"ID":"919daf8d-763a-44bc-8916-86b425a27cbd","Type":"ContainerDied","Data":"48baf89d0a5776fb35854b24f12ca1544d0d250398de394c850b09cf7a229ce1"} Mar 19 12:04:41.293623 master-0 kubenswrapper[7454]: I0319 12:04:41.293542 7454 scope.go:117] "RemoveContainer" containerID="b41786c9c913f59caa3ab9f044ef31b0ba5e946f6fab91d0cf640d642dc24031" Mar 19 12:04:41.294466 master-0 kubenswrapper[7454]: I0319 12:04:41.294425 7454 scope.go:117] "RemoveContainer" containerID="48baf89d0a5776fb35854b24f12ca1544d0d250398de394c850b09cf7a229ce1" Mar 19 12:04:41.294903 master-0 kubenswrapper[7454]: E0319 12:04:41.294784 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=catalogd-controller-manager-6864dc98f7-j2w8z_openshift-catalogd(919daf8d-763a-44bc-8916-86b425a27cbd)\"" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" podUID="919daf8d-763a-44bc-8916-86b425a27cbd" Mar 19 12:04:41.660215 master-0 kubenswrapper[7454]: I0319 12:04:41.660046 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:41.660215 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:41.660215 master-0 
kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:41.660215 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:41.660215 master-0 kubenswrapper[7454]: I0319 12:04:41.660155 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:42.304733 master-0 kubenswrapper[7454]: I0319 12:04:42.304646 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-j2w8z_919daf8d-763a-44bc-8916-86b425a27cbd/manager/1.log" Mar 19 12:04:42.659959 master-0 kubenswrapper[7454]: I0319 12:04:42.659824 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:42.659959 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:42.659959 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:42.659959 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:42.659959 master-0 kubenswrapper[7454]: I0319 12:04:42.659906 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:43.660212 master-0 kubenswrapper[7454]: I0319 12:04:43.660094 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:43.660212 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:43.660212 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:43.660212 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:43.660212 master-0 kubenswrapper[7454]: I0319 12:04:43.660211 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:43.708197 master-0 kubenswrapper[7454]: I0319 12:04:43.708053 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 12:04:43.708581 master-0 kubenswrapper[7454]: I0319 12:04:43.708517 7454 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 12:04:43.709090 master-0 kubenswrapper[7454]: I0319 12:04:43.708974 7454 scope.go:117] "RemoveContainer" containerID="3cb3f801dd00591244b19b3ad51ca78e956ed275b4329bac7bcfc1f2f8080cd6" Mar 19 12:04:43.709473 master-0 kubenswrapper[7454]: E0319 12:04:43.709385 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-89ccd998f-pr7gk_openshift-marketplace(b0f5939c-48b1-4d6c-9712-9128a78d603b)\"" pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" 
podUID="b0f5939c-48b1-4d6c-9712-9128a78d603b" Mar 19 12:04:44.325214 master-0 kubenswrapper[7454]: I0319 12:04:44.325114 7454 scope.go:117] "RemoveContainer" containerID="3cb3f801dd00591244b19b3ad51ca78e956ed275b4329bac7bcfc1f2f8080cd6" Mar 19 12:04:44.659315 master-0 kubenswrapper[7454]: I0319 12:04:44.659155 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:44.659315 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:44.659315 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:44.659315 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:44.659315 master-0 kubenswrapper[7454]: I0319 12:04:44.659233 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:45.336374 master-0 kubenswrapper[7454]: I0319 12:04:45.336268 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" event={"ID":"b0f5939c-48b1-4d6c-9712-9128a78d603b","Type":"ContainerStarted","Data":"b9abe9cab7461378d1a9d129c7d55c4ae34a94e8d47d80f7732236c8c95d320b"} Mar 19 12:04:45.337257 master-0 kubenswrapper[7454]: I0319 12:04:45.336681 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 12:04:45.340777 master-0 kubenswrapper[7454]: I0319 12:04:45.340715 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 12:04:45.660571 master-0 kubenswrapper[7454]: I0319 12:04:45.660418 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:45.660571 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:45.660571 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:45.660571 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:45.660571 master-0 kubenswrapper[7454]: I0319 12:04:45.660503 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:46.348117 master-0 kubenswrapper[7454]: I0319 12:04:46.348052 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-84gh4_ee3529ac-6135-438b-9334-40c63c1fbd3d/config-sync-controllers/0.log" Mar 19 12:04:46.349593 master-0 kubenswrapper[7454]: I0319 12:04:46.349541 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-84gh4_ee3529ac-6135-438b-9334-40c63c1fbd3d/cluster-cloud-controller-manager/0.log" Mar 19 12:04:46.349700 master-0 kubenswrapper[7454]: I0319 12:04:46.349606 7454 generic.go:334] "Generic (PLEG): container finished" 
podID="ee3529ac-6135-438b-9334-40c63c1fbd3d" containerID="296dc8986d8d88e53b561f3bac073cd3bc6b8803c01b285a45dd14b4fa44bec7" exitCode=1 Mar 19 12:04:46.349781 master-0 kubenswrapper[7454]: I0319 12:04:46.349704 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" event={"ID":"ee3529ac-6135-438b-9334-40c63c1fbd3d","Type":"ContainerDied","Data":"296dc8986d8d88e53b561f3bac073cd3bc6b8803c01b285a45dd14b4fa44bec7"} Mar 19 12:04:46.350638 master-0 kubenswrapper[7454]: I0319 12:04:46.350577 7454 scope.go:117] "RemoveContainer" containerID="296dc8986d8d88e53b561f3bac073cd3bc6b8803c01b285a45dd14b4fa44bec7" Mar 19 12:04:46.635201 master-0 kubenswrapper[7454]: I0319 12:04:46.634653 7454 scope.go:117] "RemoveContainer" containerID="7b0aee976f8444b82e3c4d17e235fff6c9975468ebf15542296951ae3166eacc" Mar 19 12:04:46.660187 master-0 kubenswrapper[7454]: I0319 12:04:46.660129 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:46.660187 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:46.660187 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:46.660187 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:46.660646 master-0 kubenswrapper[7454]: I0319 12:04:46.660188 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:47.360825 master-0 kubenswrapper[7454]: I0319 12:04:47.360715 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-6m654_944eac68-e72b-4aed-b5dc-d7d9703178a3/snapshot-controller/1.log" Mar 19 12:04:47.360825 master-0 kubenswrapper[7454]: I0319 12:04:47.360822 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" event={"ID":"944eac68-e72b-4aed-b5dc-d7d9703178a3","Type":"ContainerStarted","Data":"e81d0406e1f1789da991b6d3be3c0b8c07d3b9704f0b264dbaa399283ae48d6c"} Mar 19 12:04:47.364669 master-0 kubenswrapper[7454]: I0319 12:04:47.364581 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-84gh4_ee3529ac-6135-438b-9334-40c63c1fbd3d/config-sync-controllers/0.log" Mar 19 12:04:47.365414 master-0 kubenswrapper[7454]: I0319 12:04:47.365376 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-84gh4_ee3529ac-6135-438b-9334-40c63c1fbd3d/cluster-cloud-controller-manager/0.log" Mar 19 12:04:47.365554 master-0 kubenswrapper[7454]: I0319 12:04:47.365448 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" event={"ID":"ee3529ac-6135-438b-9334-40c63c1fbd3d","Type":"ContainerStarted","Data":"a15f1f08f1afa7b5938fa386f12bc6fe4c9f8d77ae93da1b3887d0027de3fb21"} Mar 19 12:04:47.660473 master-0 kubenswrapper[7454]: I0319 12:04:47.660320 7454 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:47.660473 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:47.660473 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:47.660473 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:47.660473 master-0 kubenswrapper[7454]: I0319 12:04:47.660412 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:48.660615 master-0 kubenswrapper[7454]: I0319 12:04:48.660527 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:48.660615 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:48.660615 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:48.660615 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:48.661768 master-0 kubenswrapper[7454]: I0319 12:04:48.660622 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:48.802857 master-0 kubenswrapper[7454]: E0319 12:04:48.802704 7454 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 19 12:04:49.660349 master-0 kubenswrapper[7454]: I0319 12:04:49.660250 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:49.660349 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:49.660349 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:49.660349 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:49.661357 master-0 kubenswrapper[7454]: I0319 12:04:49.660371 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:49.774580 master-0 kubenswrapper[7454]: I0319 12:04:49.774455 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 12:04:49.775342 master-0 kubenswrapper[7454]: I0319 12:04:49.775285 7454 scope.go:117] "RemoveContainer" containerID="48baf89d0a5776fb35854b24f12ca1544d0d250398de394c850b09cf7a229ce1" Mar 19 12:04:49.775689 master-0 kubenswrapper[7454]: E0319 12:04:49.775629 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=catalogd-controller-manager-6864dc98f7-j2w8z_openshift-catalogd(919daf8d-763a-44bc-8916-86b425a27cbd)\"" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" podUID="919daf8d-763a-44bc-8916-86b425a27cbd" Mar 19 12:04:50.634281 master-0 kubenswrapper[7454]: I0319 12:04:50.634199 7454 scope.go:117] "RemoveContainer" containerID="80a4b06853370526b35bd2b1f042248803efc6dea62506012de0886df3162aa5" Mar 19 12:04:50.658943 master-0 kubenswrapper[7454]: I0319 12:04:50.658871 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:50.658943 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:50.658943 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:50.658943 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:50.658943 master-0 kubenswrapper[7454]: I0319 12:04:50.658944 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:51.407294 master-0 kubenswrapper[7454]: I0319 12:04:51.407111 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-9mpxd_5238840f-3bef-43ad-ae68-ac187f073019/manager/1.log" Mar 19 12:04:51.408312 master-0 kubenswrapper[7454]: I0319 12:04:51.407748 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" event={"ID":"5238840f-3bef-43ad-ae68-ac187f073019","Type":"ContainerStarted","Data":"d6d00086739099a9544db3c7bf39118f731af5c10596aec3c15d9eaaea8bc4d6"} Mar 19 12:04:51.408312 master-0 kubenswrapper[7454]: I0319 12:04:51.408073 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 12:04:51.660471 master-0 kubenswrapper[7454]: I0319 12:04:51.660245 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:51.660471 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:51.660471 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:51.660471 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:51.660471 master-0 kubenswrapper[7454]: I0319 12:04:51.660323 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:52.660868 master-0 kubenswrapper[7454]: I0319 12:04:52.660729 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:52.660868 master-0 kubenswrapper[7454]: [-]has-synced failed: 
reason withheld Mar 19 12:04:52.660868 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:52.660868 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:52.662299 master-0 kubenswrapper[7454]: I0319 12:04:52.660877 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:53.659890 master-0 kubenswrapper[7454]: I0319 12:04:53.659791 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:53.659890 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:53.659890 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:53.659890 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:53.660483 master-0 kubenswrapper[7454]: I0319 12:04:53.659907 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:54.660249 master-0 kubenswrapper[7454]: I0319 12:04:54.660190 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:54.660249 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:54.660249 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:54.660249 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:54.661218 master-0 kubenswrapper[7454]: I0319 12:04:54.660945 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:55.659731 master-0 kubenswrapper[7454]: I0319 12:04:55.659616 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:55.659731 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:55.659731 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:55.659731 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:55.660213 master-0 kubenswrapper[7454]: I0319 12:04:55.659745 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:56.659703 master-0 kubenswrapper[7454]: I0319 12:04:56.659607 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:56.659703 master-0 kubenswrapper[7454]: [-]has-synced 
failed: reason withheld Mar 19 12:04:56.659703 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:56.659703 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:56.660663 master-0 kubenswrapper[7454]: I0319 12:04:56.659723 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:57.463211 master-0 kubenswrapper[7454]: I0319 12:04:57.463111 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-tql86_44469a78-9300-4260-89e9-ea939de1357b/control-plane-machine-set-operator/0.log" Mar 19 12:04:57.463211 master-0 kubenswrapper[7454]: I0319 12:04:57.463173 7454 generic.go:334] "Generic (PLEG): container finished" podID="44469a78-9300-4260-89e9-ea939de1357b" containerID="bcbe72e4cc3e493a5ae6c052d3dcfb298a861d9613583852bbc5958392be50c4" exitCode=1 Mar 19 12:04:57.463211 master-0 kubenswrapper[7454]: I0319 12:04:57.463206 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tql86" event={"ID":"44469a78-9300-4260-89e9-ea939de1357b","Type":"ContainerDied","Data":"bcbe72e4cc3e493a5ae6c052d3dcfb298a861d9613583852bbc5958392be50c4"} Mar 19 12:04:57.463955 master-0 kubenswrapper[7454]: I0319 12:04:57.463741 7454 scope.go:117] "RemoveContainer" containerID="bcbe72e4cc3e493a5ae6c052d3dcfb298a861d9613583852bbc5958392be50c4" Mar 19 12:04:57.659134 master-0 kubenswrapper[7454]: I0319 12:04:57.659044 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:57.659134 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:57.659134 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:57.659134 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:57.659636 master-0 kubenswrapper[7454]: I0319 12:04:57.659131 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:58.482768 master-0 kubenswrapper[7454]: I0319 12:04:58.482678 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-tql86_44469a78-9300-4260-89e9-ea939de1357b/control-plane-machine-set-operator/0.log" Mar 19 12:04:58.482768 master-0 kubenswrapper[7454]: I0319 12:04:58.482744 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tql86" event={"ID":"44469a78-9300-4260-89e9-ea939de1357b","Type":"ContainerStarted","Data":"95df593bda230cfe7b98b6801869d6d366e321b5eb0ef734d501b2afe8aa29f6"} Mar 19 12:04:58.541928 master-0 kubenswrapper[7454]: I0319 12:04:58.541837 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 12:04:58.660212 master-0 kubenswrapper[7454]: I0319 12:04:58.660118 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:58.660212 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:58.660212 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:58.660212 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:58.660212 master-0 kubenswrapper[7454]: I0319 12:04:58.660204 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:59.140104 master-0 kubenswrapper[7454]: E0319 12:04:59.140019 7454 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 19 12:04:59.660963 master-0 kubenswrapper[7454]: I0319 12:04:59.660861 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:04:59.660963 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:04:59.660963 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:04:59.660963 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:04:59.662596 master-0 kubenswrapper[7454]: I0319 12:04:59.660984 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:04:59.773780 master-0 kubenswrapper[7454]: I0319 12:04:59.773721 7454 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 12:04:59.775211 master-0 kubenswrapper[7454]: I0319 12:04:59.775192 7454 scope.go:117] "RemoveContainer" containerID="48baf89d0a5776fb35854b24f12ca1544d0d250398de394c850b09cf7a229ce1" Mar 19 12:05:00.504402 master-0 kubenswrapper[7454]: I0319 12:05:00.504318 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-j2w8z_919daf8d-763a-44bc-8916-86b425a27cbd/manager/1.log" Mar 19 12:05:00.505247 master-0 kubenswrapper[7454]: I0319 12:05:00.505159 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" event={"ID":"919daf8d-763a-44bc-8916-86b425a27cbd","Type":"ContainerStarted","Data":"a19caa26ce380a89ecb56c79b6b3d4b44e3e4fcb2581f942a7f2baafc51118b3"} Mar 19 12:05:00.505601 master-0 kubenswrapper[7454]: I0319 12:05:00.505519 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 12:05:00.508748 master-0 kubenswrapper[7454]: I0319 12:05:00.508681 7454 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="af7ab2de52b543dbb0460a9ad1ef51b497e5cd2bc41457946ff4763f02848a63" exitCode=0 Mar 19 12:05:00.508977 master-0 kubenswrapper[7454]: I0319 12:05:00.508842 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"af7ab2de52b543dbb0460a9ad1ef51b497e5cd2bc41457946ff4763f02848a63"} Mar 19 12:05:00.509323 master-0 kubenswrapper[7454]: I0319 12:05:00.509258 7454 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="bb9699aa-8885-49ec-a3b3-8c199d95bbf9" Mar 19 12:05:00.509323 master-0 kubenswrapper[7454]: I0319 12:05:00.509316 7454 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="bb9699aa-8885-49ec-a3b3-8c199d95bbf9" Mar 19 12:05:00.512130 master-0 kubenswrapper[7454]: I0319 12:05:00.511772 7454 generic.go:334] "Generic (PLEG): container finished" podID="3a6b082a-649b-43f6-8e24-cf222873fe39" containerID="9525efea18e9168adb2e8691fffa21e20effeae4cf60811da09efa9acd76b65f" exitCode=0 Mar 19 12:05:00.512130 master-0 kubenswrapper[7454]: I0319 12:05:00.511870 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" event={"ID":"3a6b082a-649b-43f6-8e24-cf222873fe39","Type":"ContainerDied","Data":"9525efea18e9168adb2e8691fffa21e20effeae4cf60811da09efa9acd76b65f"} Mar 19 12:05:00.512130 master-0 kubenswrapper[7454]: I0319 12:05:00.511941 7454 scope.go:117] "RemoveContainer" containerID="1934bc0b600f1e74a406788cec8a674a8b6f1a56fe70fd8bd4ae9f2fb2ad6292" Mar 19 12:05:00.513917 master-0 kubenswrapper[7454]: I0319 12:05:00.512579 7454 scope.go:117] "RemoveContainer" containerID="9525efea18e9168adb2e8691fffa21e20effeae4cf60811da09efa9acd76b65f" Mar 19 12:05:00.513917 master-0 kubenswrapper[7454]: E0319 12:05:00.513166 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=controller-manager pod=controller-manager-7cdddc6cb-q222c_openshift-controller-manager(3a6b082a-649b-43f6-8e24-cf222873fe39)\"" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" podUID="3a6b082a-649b-43f6-8e24-cf222873fe39" Mar 19 12:05:00.515081 master-0 kubenswrapper[7454]: I0319 12:05:00.515031 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-ftml6_19de6601-10d4-4112-a21f-0398d2b160d1/cluster-baremetal-operator/1.log" Mar 19 12:05:00.516508 master-0 kubenswrapper[7454]: I0319 12:05:00.516457 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-ftml6_19de6601-10d4-4112-a21f-0398d2b160d1/cluster-baremetal-operator/0.log" Mar 19 12:05:00.516632 master-0 kubenswrapper[7454]: I0319 12:05:00.516539 7454 generic.go:334] "Generic (PLEG): container finished" podID="19de6601-10d4-4112-a21f-0398d2b160d1" containerID="dbd72cd315e8f5fa6faaefc2be981b3f9a0d499a3d7eead86b3d71318cde1c34" exitCode=1 Mar 19 12:05:00.516767 master-0 kubenswrapper[7454]: I0319 12:05:00.516661 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" event={"ID":"19de6601-10d4-4112-a21f-0398d2b160d1","Type":"ContainerDied","Data":"dbd72cd315e8f5fa6faaefc2be981b3f9a0d499a3d7eead86b3d71318cde1c34"} Mar 19 12:05:00.517762 master-0 kubenswrapper[7454]: I0319 12:05:00.517683 7454 scope.go:117] "RemoveContainer" containerID="dbd72cd315e8f5fa6faaefc2be981b3f9a0d499a3d7eead86b3d71318cde1c34" Mar 19 12:05:00.518273 master-0 kubenswrapper[7454]: E0319 12:05:00.518214 7454 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-6f69995874-ftml6_openshift-machine-api(19de6601-10d4-4112-a21f-0398d2b160d1)\"" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" podUID="19de6601-10d4-4112-a21f-0398d2b160d1" Mar 19 12:05:00.521398 master-0 kubenswrapper[7454]: I0319 12:05:00.521340 7454 generic.go:334] "Generic (PLEG): container finished" podID="bf226d89-450d-4876-a113-345632b94ee9" containerID="3d6c29fa2fea2a4028ae9bf07fe3dfb5fccd02ce108e84c4ff9630eee5fdf4b0" exitCode=0 Mar 19 12:05:00.521522 master-0 kubenswrapper[7454]: I0319 12:05:00.521393 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t" event={"ID":"bf226d89-450d-4876-a113-345632b94ee9","Type":"ContainerDied","Data":"3d6c29fa2fea2a4028ae9bf07fe3dfb5fccd02ce108e84c4ff9630eee5fdf4b0"} Mar 19 12:05:00.522098 master-0 kubenswrapper[7454]: I0319 12:05:00.522037 7454 scope.go:117] "RemoveContainer" containerID="3d6c29fa2fea2a4028ae9bf07fe3dfb5fccd02ce108e84c4ff9630eee5fdf4b0" Mar 19 12:05:00.522485 master-0 kubenswrapper[7454]: E0319 12:05:00.522423 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-cluster-manager pod=ovnkube-control-plane-57f769d897-f6m2t_openshift-ovn-kubernetes(bf226d89-450d-4876-a113-345632b94ee9)\"" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t" podUID="bf226d89-450d-4876-a113-345632b94ee9" Mar 19 12:05:00.539984 master-0 kubenswrapper[7454]: I0319 12:05:00.539830 7454 scope.go:117] "RemoveContainer" containerID="612732ed0120924fb77ef10b06bafbb001e3d8734f333029971f71583a5972b4" Mar 19 12:05:00.569148 master-0 kubenswrapper[7454]: I0319 12:05:00.569091 7454 scope.go:117] "RemoveContainer" containerID="e708db8e66828556f8b708025575f23f8aa12842fc7126337dc3672b562dc4b1" Mar 19 12:05:00.658498 master-0 kubenswrapper[7454]: I0319 12:05:00.658434 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:05:00.658498 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:05:00.658498 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:05:00.658498 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:05:00.658899 master-0 kubenswrapper[7454]: I0319 12:05:00.658500 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:05:01.536864 master-0 kubenswrapper[7454]: I0319 12:05:01.536766 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-ftml6_19de6601-10d4-4112-a21f-0398d2b160d1/cluster-baremetal-operator/1.log" Mar 19 12:05:01.660035 master-0 kubenswrapper[7454]: I0319 12:05:01.659934 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 
500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:05:01.660035 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:05:01.660035 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:05:01.660035 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:05:01.660532 master-0 kubenswrapper[7454]: I0319 12:05:01.660068 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:05:02.659385 master-0 kubenswrapper[7454]: I0319 12:05:02.659253 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:05:02.659385 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:05:02.659385 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:05:02.659385 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:05:02.660383 master-0 kubenswrapper[7454]: I0319 12:05:02.659416 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:05:03.660622 master-0 kubenswrapper[7454]: I0319 12:05:03.660537 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:05:03.660622 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:05:03.660622 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:05:03.660622 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:05:03.662319 master-0 kubenswrapper[7454]: I0319 12:05:03.662247 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:05:04.660386 master-0 kubenswrapper[7454]: I0319 12:05:04.660283 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:05:04.660386 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:05:04.660386 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:05:04.660386 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:05:04.661548 master-0 kubenswrapper[7454]: I0319 12:05:04.660424 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:05:05.247900 master-0 kubenswrapper[7454]: I0319 12:05:05.247787 7454 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 12:05:05.247900 master-0 
kubenswrapper[7454]: I0319 12:05:05.247879 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 12:05:05.248659 master-0 kubenswrapper[7454]: I0319 12:05:05.248609 7454 scope.go:117] "RemoveContainer" containerID="9525efea18e9168adb2e8691fffa21e20effeae4cf60811da09efa9acd76b65f" Mar 19 12:05:05.249080 master-0 kubenswrapper[7454]: E0319 12:05:05.249022 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=controller-manager pod=controller-manager-7cdddc6cb-q222c_openshift-controller-manager(3a6b082a-649b-43f6-8e24-cf222873fe39)\"" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" podUID="3a6b082a-649b-43f6-8e24-cf222873fe39" Mar 19 12:05:05.660207 master-0 kubenswrapper[7454]: I0319 12:05:05.660064 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:05:05.660207 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:05:05.660207 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:05:05.660207 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:05:05.660756 master-0 kubenswrapper[7454]: I0319 12:05:05.660713 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:05:05.804122 master-0 kubenswrapper[7454]: E0319 12:05:05.804005 7454 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 19 12:05:06.660579 master-0 kubenswrapper[7454]: I0319 12:05:06.660510 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:05:06.660579 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:05:06.660579 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:05:06.660579 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:05:06.661657 master-0 kubenswrapper[7454]: I0319 12:05:06.660585 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:05:07.659927 master-0 kubenswrapper[7454]: I0319 12:05:07.659788 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:05:07.659927 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:05:07.659927 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:05:07.659927 master-0 
kubenswrapper[7454]: healthz check failed Mar 19 12:05:07.659927 master-0 kubenswrapper[7454]: I0319 12:05:07.659912 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:05:08.659194 master-0 kubenswrapper[7454]: I0319 12:05:08.659098 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:05:08.659194 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:05:08.659194 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:05:08.659194 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:05:08.660376 master-0 kubenswrapper[7454]: I0319 12:05:08.659198 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:05:09.660241 master-0 kubenswrapper[7454]: I0319 12:05:09.660124 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:05:09.660241 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:05:09.660241 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:05:09.660241 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:05:09.661616 master-0 kubenswrapper[7454]: I0319 12:05:09.660295 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:05:09.775912 master-0 kubenswrapper[7454]: I0319 12:05:09.775788 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 12:05:10.661383 master-0 kubenswrapper[7454]: I0319 12:05:10.661271 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:05:10.661383 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:05:10.661383 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:05:10.661383 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:05:10.662321 master-0 kubenswrapper[7454]: I0319 12:05:10.661404 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:05:11.659977 master-0 kubenswrapper[7454]: I0319 12:05:11.659913 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 19 12:05:11.659977 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:05:11.659977 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:05:11.659977 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:05:11.660598 master-0 kubenswrapper[7454]: I0319 12:05:11.660534 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:05:12.660211 master-0 kubenswrapper[7454]: I0319 12:05:12.660110 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:05:12.660211 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:05:12.660211 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:05:12.660211 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:05:12.660211 master-0 kubenswrapper[7454]: I0319 12:05:12.660201 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:05:13.633892 master-0 kubenswrapper[7454]: I0319 12:05:13.633786 7454 scope.go:117] "RemoveContainer" containerID="3d6c29fa2fea2a4028ae9bf07fe3dfb5fccd02ce108e84c4ff9630eee5fdf4b0" Mar 19 12:05:13.661214 master-0 kubenswrapper[7454]: I0319 12:05:13.661083 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:05:13.661214 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:05:13.661214 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:05:13.661214 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:05:13.661214 master-0 kubenswrapper[7454]: I0319 12:05:13.661162 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:05:14.633728 master-0 kubenswrapper[7454]: I0319 12:05:14.633609 7454 scope.go:117] "RemoveContainer" containerID="dbd72cd315e8f5fa6faaefc2be981b3f9a0d499a3d7eead86b3d71318cde1c34" Mar 19 12:05:14.660527 master-0 kubenswrapper[7454]: I0319 12:05:14.660401 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:05:14.660527 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:05:14.660527 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:05:14.660527 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:05:14.660527 master-0 kubenswrapper[7454]: I0319 12:05:14.660496 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" 
podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:05:14.666666 master-0 kubenswrapper[7454]: I0319 12:05:14.666577 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t" event={"ID":"bf226d89-450d-4876-a113-345632b94ee9","Type":"ContainerStarted","Data":"bc6d9c74d81ce68ee90cab9cffafbb8af941cb3e2340696b47b923bf5819fb5a"} Mar 19 12:05:14.669989 master-0 kubenswrapper[7454]: I0319 12:05:14.669920 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-5c6485487f-qv29l_fd40498c-f50a-408c-9a50-5d85ae666124/machine-approver-controller/0.log" Mar 19 12:05:14.670148 master-0 kubenswrapper[7454]: E0319 12:05:14.669776 7454 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{ingress-operator-66b84d69b-btppx.189e3c3bf99a2156 openshift-ingress-operator 11469 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ingress-operator,Name:ingress-operator-66b84d69b-btppx,UID:b80027fd-7b39-477a-a337-ff9bb08e7eeb,APIVersion:v1,ResourceVersion:3649,FieldPath:spec.containers{ingress-operator},},Reason:BackOff,Message:Back-off restarting failed container ingress-operator in pod ingress-operator-66b84d69b-btppx_openshift-ingress-operator(b80027fd-7b39-477a-a337-ff9bb08e7eeb),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:58:51 +0000 UTC,LastTimestamp:2026-03-19 12:03:44.634438905 +0000 UTC m=+594.264904868,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 12:05:14.671666 master-0 kubenswrapper[7454]: I0319 12:05:14.671580 7454 generic.go:334] "Generic (PLEG): container finished" podID="fd40498c-f50a-408c-9a50-5d85ae666124" containerID="e46402e9e37c366c46da921e8257890f1d201b54bbd07d4bc4010bce5ecefa6c" exitCode=255 Mar 19 12:05:14.672042 master-0 kubenswrapper[7454]: I0319 12:05:14.671670 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-qv29l" event={"ID":"fd40498c-f50a-408c-9a50-5d85ae666124","Type":"ContainerDied","Data":"e46402e9e37c366c46da921e8257890f1d201b54bbd07d4bc4010bce5ecefa6c"} Mar 19 12:05:14.672502 master-0 kubenswrapper[7454]: I0319 12:05:14.672432 7454 scope.go:117] "RemoveContainer" containerID="e46402e9e37c366c46da921e8257890f1d201b54bbd07d4bc4010bce5ecefa6c" Mar 19 12:05:15.661224 master-0 kubenswrapper[7454]: I0319 12:05:15.661134 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:05:15.661224 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:05:15.661224 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:05:15.661224 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:05:15.661793 master-0 kubenswrapper[7454]: I0319 12:05:15.661242 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:05:15.661793 master-0 kubenswrapper[7454]: I0319 12:05:15.661332 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 12:05:15.662375 master-0 kubenswrapper[7454]: I0319 12:05:15.662307 7454 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"5204ec6a181aadcc019743971b04d16299507e076f3ad2bde88b1a3554a20992"} pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" containerMessage="Container router failed startup probe, will be restarted" Mar 19 12:05:15.662494 master-0 kubenswrapper[7454]: I0319 12:05:15.662383 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" containerID="cri-o://5204ec6a181aadcc019743971b04d16299507e076f3ad2bde88b1a3554a20992" gracePeriod=3600 Mar 19 12:05:15.688096 master-0 kubenswrapper[7454]: I0319 12:05:15.687996 7454 generic.go:334] "Generic (PLEG): container finished" podID="ed7034eee202d25f8fdd5bf58084d919" containerID="d47b78b4162ef738abc79ae7fccddf86e10f2a7b582e6e8119dc73b890a42578" exitCode=0 Mar 19 12:05:15.688096 master-0 kubenswrapper[7454]: I0319 12:05:15.688065 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"ed7034eee202d25f8fdd5bf58084d919","Type":"ContainerDied","Data":"d47b78b4162ef738abc79ae7fccddf86e10f2a7b582e6e8119dc73b890a42578"} Mar 19 12:05:15.689154 master-0 kubenswrapper[7454]: I0319 12:05:15.688625 7454 scope.go:117] "RemoveContainer" containerID="d47b78b4162ef738abc79ae7fccddf86e10f2a7b582e6e8119dc73b890a42578" Mar 19 12:05:15.691040 master-0 kubenswrapper[7454]: I0319 12:05:15.690997 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-ftml6_19de6601-10d4-4112-a21f-0398d2b160d1/cluster-baremetal-operator/1.log" Mar 19 12:05:15.691870 master-0 kubenswrapper[7454]: I0319 12:05:15.691758 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" event={"ID":"19de6601-10d4-4112-a21f-0398d2b160d1","Type":"ContainerStarted","Data":"b2e13fc5e0e47b30a814c50b22ebab528689038f4224f101e1963ee3ecce529a"} Mar 19 12:05:15.694636 master-0 kubenswrapper[7454]: I0319 12:05:15.694578 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-5c6485487f-qv29l_fd40498c-f50a-408c-9a50-5d85ae666124/machine-approver-controller/0.log" Mar 19 12:05:15.695446 master-0 kubenswrapper[7454]: I0319 12:05:15.695387 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-qv29l" event={"ID":"fd40498c-f50a-408c-9a50-5d85ae666124","Type":"ContainerStarted","Data":"22008d79d612bce9d3488088001793a11d6d36c515e9f1677b388e324f39e13f"} Mar 19 12:05:16.650635 master-0 kubenswrapper[7454]: I0319 12:05:16.650549 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:05:16.650635 master-0 kubenswrapper[7454]: I0319 12:05:16.650636 7454 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:05:16.651240 master-0 kubenswrapper[7454]: I0319 12:05:16.650664 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:05:16.714362 master-0 kubenswrapper[7454]: I0319 12:05:16.713502 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"ed7034eee202d25f8fdd5bf58084d919","Type":"ContainerStarted","Data":"5d261740d47e7306918cefef333039548b8250950612585ba90f860cca83b5a2"} Mar 19 12:05:17.726852 master-0 kubenswrapper[7454]: I0319 12:05:17.726755 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-6m654_944eac68-e72b-4aed-b5dc-d7d9703178a3/snapshot-controller/2.log" Mar 19 12:05:17.727890 master-0 kubenswrapper[7454]: I0319 12:05:17.727741 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-6m654_944eac68-e72b-4aed-b5dc-d7d9703178a3/snapshot-controller/1.log" Mar 19 12:05:17.727890 master-0 kubenswrapper[7454]: I0319 12:05:17.727849 7454 generic.go:334] "Generic (PLEG): container finished" podID="944eac68-e72b-4aed-b5dc-d7d9703178a3" containerID="e81d0406e1f1789da991b6d3be3c0b8c07d3b9704f0b264dbaa399283ae48d6c" exitCode=1 Mar 19 12:05:17.728071 master-0 kubenswrapper[7454]: I0319 12:05:17.727971 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" event={"ID":"944eac68-e72b-4aed-b5dc-d7d9703178a3","Type":"ContainerDied","Data":"e81d0406e1f1789da991b6d3be3c0b8c07d3b9704f0b264dbaa399283ae48d6c"} Mar 19 12:05:17.728149 master-0 kubenswrapper[7454]: I0319 12:05:17.728077 7454 scope.go:117] "RemoveContainer" containerID="7b0aee976f8444b82e3c4d17e235fff6c9975468ebf15542296951ae3166eacc" Mar 19 12:05:17.729224 master-0 kubenswrapper[7454]: I0319 12:05:17.729172 7454 scope.go:117] "RemoveContainer" containerID="e81d0406e1f1789da991b6d3be3c0b8c07d3b9704f0b264dbaa399283ae48d6c" Mar 19 12:05:17.729947 master-0 kubenswrapper[7454]: E0319 12:05:17.729869 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-6m654_openshift-cluster-storage-operator(944eac68-e72b-4aed-b5dc-d7d9703178a3)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" podUID="944eac68-e72b-4aed-b5dc-d7d9703178a3" Mar 19 12:05:18.739211 master-0 kubenswrapper[7454]: I0319 12:05:18.739118 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-6m654_944eac68-e72b-4aed-b5dc-d7d9703178a3/snapshot-controller/2.log" Mar 19 12:05:18.743738 master-0 kubenswrapper[7454]: I0319 12:05:18.743683 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8413125cf444e5c95f023c5dd9c6151e/kube-scheduler/0.log" Mar 19 12:05:18.744354 master-0 kubenswrapper[7454]: I0319 12:05:18.744298 7454 generic.go:334] "Generic (PLEG): container finished" podID="8413125cf444e5c95f023c5dd9c6151e" containerID="6b57ecd81087b581c66ac63d9f2f1ef10437e651539d71691b6a055612b562c9" exitCode=1 Mar 19 
12:05:18.744455 master-0 kubenswrapper[7454]: I0319 12:05:18.744349 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerDied","Data":"6b57ecd81087b581c66ac63d9f2f1ef10437e651539d71691b6a055612b562c9"} Mar 19 12:05:18.745053 master-0 kubenswrapper[7454]: I0319 12:05:18.745009 7454 scope.go:117] "RemoveContainer" containerID="6b57ecd81087b581c66ac63d9f2f1ef10437e651539d71691b6a055612b562c9" Mar 19 12:05:19.635459 master-0 kubenswrapper[7454]: I0319 12:05:19.635358 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:05:19.635459 master-0 kubenswrapper[7454]: I0319 12:05:19.635451 7454 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:05:19.636602 master-0 kubenswrapper[7454]: I0319 12:05:19.636515 7454 scope.go:117] "RemoveContainer" containerID="9525efea18e9168adb2e8691fffa21e20effeae4cf60811da09efa9acd76b65f" Mar 19 12:05:19.762623 master-0 kubenswrapper[7454]: I0319 12:05:19.762525 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8413125cf444e5c95f023c5dd9c6151e/kube-scheduler/0.log" Mar 19 12:05:19.763741 master-0 kubenswrapper[7454]: I0319 12:05:19.763667 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"4ad628e89e7621359063e42ff965fafd7ff7510f8646a17316c1e2a0906b3609"} Mar 19 12:05:19.764321 master-0 kubenswrapper[7454]: I0319 12:05:19.764268 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:05:20.776308 master-0 kubenswrapper[7454]: I0319 12:05:20.776227 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" event={"ID":"3a6b082a-649b-43f6-8e24-cf222873fe39","Type":"ContainerStarted","Data":"09044fa3844b85995a99e406cd860f5e46e3c822f972902a1ed997f5be96ef8c"} Mar 19 12:05:22.805619 master-0 kubenswrapper[7454]: E0319 12:05:22.805434 7454 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 19 12:05:25.248361 master-0 kubenswrapper[7454]: I0319 12:05:25.248246 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 12:05:25.256585 master-0 kubenswrapper[7454]: I0319 12:05:25.256516 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 12:05:26.633917 master-0 kubenswrapper[7454]: I0319 12:05:26.633784 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:05:26.649251 master-0 kubenswrapper[7454]: I0319 12:05:26.649154 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 
12:05:26.969184 master-0 kubenswrapper[7454]: E0319 12:05:26.969075 7454 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T12:05:16Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T12:05:16Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T12:05:16Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T12:05:16Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": context deadline exceeded" Mar 19 12:05:29.634175 master-0 kubenswrapper[7454]: I0319 12:05:29.634070 7454 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 12:05:29.634175 master-0 kubenswrapper[7454]: I0319 12:05:29.634185 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 12:05:32.634620 master-0 kubenswrapper[7454]: I0319 12:05:32.634519 7454 scope.go:117] "RemoveContainer" containerID="e81d0406e1f1789da991b6d3be3c0b8c07d3b9704f0b264dbaa399283ae48d6c" Mar 19 12:05:32.635515 master-0 kubenswrapper[7454]: E0319 12:05:32.635012 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-6m654_openshift-cluster-storage-operator(944eac68-e72b-4aed-b5dc-d7d9703178a3)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" podUID="944eac68-e72b-4aed-b5dc-d7d9703178a3" Mar 19 12:05:34.512670 master-0 kubenswrapper[7454]: E0319 12:05:34.512555 7454 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 19 12:05:34.910839 master-0 kubenswrapper[7454]: I0319 12:05:34.910724 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"06df91d89c735b834fc346a4f7854eb6c43febaa5e7607e925c686e24ccb4eda"} Mar 19 12:05:35.928885 master-0 kubenswrapper[7454]: I0319 12:05:35.928776 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"bd3a17cd87fde6f7144b0e322661921d9832fa6483a57510e51852051cbeb528"} Mar 19 12:05:35.928885 master-0 kubenswrapper[7454]: I0319 12:05:35.928885 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"39e0673977f4f7234890fa98a05a4d43d9da817767f59ad368c824fe6d9cdda5"} Mar 19 12:05:36.654028 master-0 kubenswrapper[7454]: I0319 12:05:36.653924 7454 status_manager.go:851] "Failed to get status for pod" podUID="8414b6b0-ee16-47a5-982b-ee58b136cfcf" pod="openshift-network-node-identity/network-node-identity-wd4nx" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods network-node-identity-wd4nx)" Mar 19 12:05:36.948779 master-0 kubenswrapper[7454]: I0319 12:05:36.948595 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"9c85ea03175b078051d022f10609cbe5a9f4cf523155732a5c478d72bb14664e"} Mar 19 12:05:36.948779 master-0 kubenswrapper[7454]: I0319 12:05:36.948664 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"e82f2fc3d8273fe92f80fac6c311d17b7083f322a8c31b9e4e35d22dddf4adb6"} Mar 19 12:05:36.949581 master-0 kubenswrapper[7454]: I0319 12:05:36.949272 7454 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="bb9699aa-8885-49ec-a3b3-8c199d95bbf9" Mar 19 12:05:36.949581 master-0 kubenswrapper[7454]: I0319 12:05:36.949330 7454 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="bb9699aa-8885-49ec-a3b3-8c199d95bbf9" Mar 19 12:05:36.970693 master-0 kubenswrapper[7454]: E0319 12:05:36.970567 7454 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 12:05:38.663777 master-0 kubenswrapper[7454]: I0319 12:05:38.663697 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Mar 19 12:05:38.663777 master-0 kubenswrapper[7454]: I0319 12:05:38.663774 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 19 12:05:39.633882 master-0 kubenswrapper[7454]: I0319 12:05:39.633771 7454 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 12:05:39.633882 master-0 kubenswrapper[7454]: I0319 12:05:39.633858 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 12:05:39.807691 master-0 kubenswrapper[7454]: E0319 12:05:39.807559 7454 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="7s" Mar 19 12:05:46.633554 master-0 kubenswrapper[7454]: 
I0319 12:05:46.633462 7454 scope.go:117] "RemoveContainer" containerID="e81d0406e1f1789da991b6d3be3c0b8c07d3b9704f0b264dbaa399283ae48d6c" Mar 19 12:05:46.635183 master-0 kubenswrapper[7454]: I0319 12:05:46.635137 7454 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused" start-of-body= Mar 19 12:05:46.635273 master-0 kubenswrapper[7454]: I0319 12:05:46.635198 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused" Mar 19 12:05:46.648120 master-0 kubenswrapper[7454]: I0319 12:05:46.648060 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:05:46.648657 master-0 kubenswrapper[7454]: I0319 12:05:46.648605 7454 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"5d261740d47e7306918cefef333039548b8250950612585ba90f860cca83b5a2"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 19 12:05:46.648751 master-0 kubenswrapper[7454]: I0319 12:05:46.648709 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="cluster-policy-controller" containerID="cri-o://5d261740d47e7306918cefef333039548b8250950612585ba90f860cca83b5a2" gracePeriod=30 Mar 19 12:05:46.972017 master-0 kubenswrapper[7454]: E0319 12:05:46.971510 7454 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 12:05:47.047263 master-0 kubenswrapper[7454]: I0319 12:05:47.047204 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ed7034eee202d25f8fdd5bf58084d919/cluster-policy-controller/1.log" Mar 19 12:05:47.049433 master-0 kubenswrapper[7454]: I0319 12:05:47.049387 7454 generic.go:334] "Generic (PLEG): container finished" podID="ed7034eee202d25f8fdd5bf58084d919" containerID="5d261740d47e7306918cefef333039548b8250950612585ba90f860cca83b5a2" exitCode=255 Mar 19 12:05:47.049538 master-0 kubenswrapper[7454]: I0319 12:05:47.049469 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"ed7034eee202d25f8fdd5bf58084d919","Type":"ContainerDied","Data":"5d261740d47e7306918cefef333039548b8250950612585ba90f860cca83b5a2"} Mar 19 12:05:47.049693 master-0 kubenswrapper[7454]: I0319 12:05:47.049629 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"ed7034eee202d25f8fdd5bf58084d919","Type":"ContainerStarted","Data":"f770bc0056756cc6ba5e1f2815e45c32893439cd6d38c9442f87e1b6e4fefb5a"} Mar 19 12:05:47.049774 master-0 kubenswrapper[7454]: I0319 12:05:47.049683 7454 scope.go:117] "RemoveContainer" containerID="d47b78b4162ef738abc79ae7fccddf86e10f2a7b582e6e8119dc73b890a42578" Mar 19 12:05:47.053391 master-0 kubenswrapper[7454]: I0319 12:05:47.053339 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-6m654_944eac68-e72b-4aed-b5dc-d7d9703178a3/snapshot-controller/2.log" Mar 19 12:05:47.053477 master-0 kubenswrapper[7454]: I0319 12:05:47.053427 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" event={"ID":"944eac68-e72b-4aed-b5dc-d7d9703178a3","Type":"ContainerStarted","Data":"b5bafb20143fb32b9632c35086c90eeefa7dd77931f84a79471ae4dc2e4a6a71"} Mar 19 12:05:48.075619 master-0 kubenswrapper[7454]: I0319 12:05:48.075536 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ed7034eee202d25f8fdd5bf58084d919/cluster-policy-controller/1.log" Mar 19 12:05:48.675136 master-0 kubenswrapper[7454]: E0319 12:05:48.674943 7454 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event=< Mar 19 12:05:48.675136 master-0 kubenswrapper[7454]: &Event{ObjectMeta:{router-default-7dcf5569b5-lkpgl.189e3c088d68e91e openshift-ingress 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ingress,Name:router-default-7dcf5569b5-lkpgl,UID:91112ce6-4f9d-44c1-a4e7-fea126554bcf,APIVersion:v1,ResourceVersion:8225,FieldPath:spec.containers{router},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 500 Mar 19 12:05:48.675136 master-0 kubenswrapper[7454]: body: [-]backend-http failed: reason withheld Mar 19 12:05:48.675136 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:05:48.675136 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:05:48.675136 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:05:48.675136 master-0 kubenswrapper[7454]: Mar 19 12:05:48.675136 master-0 kubenswrapper[7454]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:55:10.660421918 +0000 UTC m=+80.290887841,LastTimestamp:2026-03-19 12:03:44.66072295 +0000 UTC m=+594.291188893,Count:376,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,} Mar 19 12:05:48.675136 master-0 kubenswrapper[7454]: > Mar 19 12:05:48.701058 master-0 kubenswrapper[7454]: I0319 12:05:48.701002 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Mar 19 12:05:49.267137 master-0 kubenswrapper[7454]: I0319 12:05:49.267067 7454 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-0" Mar 19 12:05:49.367245 master-0 kubenswrapper[7454]: I0319 12:05:49.367161 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 19 12:05:49.373380 master-0 kubenswrapper[7454]: I0319 12:05:49.373325 7454 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 19 12:05:53.683172 
master-0 kubenswrapper[7454]: I0319 12:05:53.683097 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 19 12:05:56.648253 master-0 kubenswrapper[7454]: I0319 12:05:56.648181 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:05:56.648253 master-0 kubenswrapper[7454]: I0319 12:05:56.648265 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:05:56.972848 master-0 kubenswrapper[7454]: E0319 12:05:56.972728 7454 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 12:05:59.634081 master-0 kubenswrapper[7454]: I0319 12:05:59.633961 7454 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 12:05:59.634081 master-0 kubenswrapper[7454]: I0319 12:05:59.634056 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 12:06:02.190576 master-0 kubenswrapper[7454]: I0319 12:06:02.189228 7454 generic.go:334] "Generic (PLEG): container finished" podID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerID="5204ec6a181aadcc019743971b04d16299507e076f3ad2bde88b1a3554a20992" exitCode=0 Mar 19 12:06:02.190576 master-0 kubenswrapper[7454]: I0319 12:06:02.189297 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" event={"ID":"91112ce6-4f9d-44c1-a4e7-fea126554bcf","Type":"ContainerDied","Data":"5204ec6a181aadcc019743971b04d16299507e076f3ad2bde88b1a3554a20992"} Mar 19 12:06:02.190576 master-0 kubenswrapper[7454]: I0319 12:06:02.189341 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" event={"ID":"91112ce6-4f9d-44c1-a4e7-fea126554bcf","Type":"ContainerStarted","Data":"6f74355f30b0cc7b3534f39a3335ceb85c6bdd019a4b22eade41702408961aed"} Mar 19 12:06:02.190576 master-0 kubenswrapper[7454]: I0319 12:06:02.189371 7454 scope.go:117] "RemoveContainer" containerID="fc66004bdf7840ad3f084c0dfa71eeb2520e8e4a081e3e6ac34bc77b6fbd71ea" Mar 19 12:06:02.275862 master-0 kubenswrapper[7454]: E0319 12:06:02.275430 7454 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 19 12:06:02.656826 master-0 kubenswrapper[7454]: I0319 12:06:02.656726 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 12:06:02.659946 master-0 kubenswrapper[7454]: I0319 12:06:02.659890 7454 patch_prober.go:28] 
interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:02.659946 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:02.659946 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:02.659946 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:02.659946 master-0 kubenswrapper[7454]: I0319 12:06:02.659938 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:03.660789 master-0 kubenswrapper[7454]: I0319 12:06:03.660657 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:03.660789 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:03.660789 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:03.660789 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:03.662036 master-0 kubenswrapper[7454]: I0319 12:06:03.660846 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:04.659455 master-0 kubenswrapper[7454]: I0319 12:06:04.659377 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:04.659455 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:04.659455 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:04.659455 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:04.659455 master-0 kubenswrapper[7454]: I0319 12:06:04.659440 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:05.659821 master-0 kubenswrapper[7454]: I0319 12:06:05.659725 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:05.659821 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:05.659821 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:05.659821 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:05.660887 master-0 kubenswrapper[7454]: I0319 12:06:05.659844 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:06.660483 master-0 kubenswrapper[7454]: I0319 12:06:06.660404 7454 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:06.660483 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:06.660483 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:06.660483 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:06.661571 master-0 kubenswrapper[7454]: I0319 12:06:06.661526 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:06.974310 master-0 kubenswrapper[7454]: E0319 12:06:06.974182 7454 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 12:06:06.974310 master-0 kubenswrapper[7454]: E0319 12:06:06.974266 7454 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 19 12:06:07.656728 master-0 kubenswrapper[7454]: I0319 12:06:07.656645 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 12:06:07.659768 master-0 kubenswrapper[7454]: I0319 12:06:07.659695 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:07.659768 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:07.659768 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:07.659768 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:07.660068 master-0 kubenswrapper[7454]: I0319 12:06:07.659783 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:08.660640 master-0 kubenswrapper[7454]: I0319 12:06:08.660558 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:08.660640 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:08.660640 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:08.660640 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:08.661538 master-0 kubenswrapper[7454]: I0319 12:06:08.660656 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:09.635125 master-0 kubenswrapper[7454]: I0319 12:06:09.635044 7454 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: 
Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 12:06:09.635337 master-0 kubenswrapper[7454]: I0319 12:06:09.635130 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 12:06:09.659075 master-0 kubenswrapper[7454]: I0319 12:06:09.659011 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:09.659075 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:09.659075 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:09.659075 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:09.659237 master-0 kubenswrapper[7454]: I0319 12:06:09.659124 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:10.610695 master-0 kubenswrapper[7454]: I0319 12:06:10.610607 7454 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 12:06:10.610695 master-0 kubenswrapper[7454]: I0319 12:06:10.610667 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 12:06:10.611745 master-0 kubenswrapper[7454]: I0319 12:06:10.610740 7454 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 12:06:10.611745 master-0 kubenswrapper[7454]: I0319 12:06:10.610769 7454 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 12:06:10.659935 master-0 kubenswrapper[7454]: I0319 12:06:10.659871 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 
500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:10.659935 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:10.659935 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:10.659935 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:10.660391 master-0 kubenswrapper[7454]: I0319 12:06:10.659950 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:11.659492 master-0 kubenswrapper[7454]: I0319 12:06:11.659387 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:11.659492 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:11.659492 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:11.659492 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:11.659492 master-0 kubenswrapper[7454]: I0319 12:06:11.659443 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:12.690826 master-0 kubenswrapper[7454]: I0319 12:06:12.687132 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:12.690826 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:12.690826 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:12.690826 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:12.690826 master-0 kubenswrapper[7454]: I0319 12:06:12.687230 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:13.659519 master-0 kubenswrapper[7454]: I0319 12:06:13.659448 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:13.659519 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:13.659519 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:13.659519 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:13.660018 master-0 kubenswrapper[7454]: I0319 12:06:13.659521 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:14.659914 master-0 kubenswrapper[7454]: I0319 12:06:14.659852 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:14.659914 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:14.659914 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:14.659914 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:14.660452 master-0 kubenswrapper[7454]: I0319 12:06:14.659946 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:15.660267 master-0 kubenswrapper[7454]: I0319 12:06:15.660145 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:15.660267 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:15.660267 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:15.660267 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:15.661489 master-0 kubenswrapper[7454]: I0319 12:06:15.660278 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:16.209253 master-0 kubenswrapper[7454]: E0319 12:06:16.209167 7454 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 19 12:06:16.634924 master-0 kubenswrapper[7454]: I0319 12:06:16.634843 7454 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 19 12:06:16.635178 master-0 kubenswrapper[7454]: I0319 12:06:16.634949 7454 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 19 12:06:16.635178 master-0 kubenswrapper[7454]: I0319 12:06:16.635011 7454 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 19 12:06:16.635178 master-0 kubenswrapper[7454]: I0319 12:06:16.635110 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 19 12:06:16.660135 master-0 kubenswrapper[7454]: I0319 12:06:16.660051 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl 
Mar 19 12:06:17.310071 master-0 kubenswrapper[7454]: I0319 12:06:17.310001 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-6m654_944eac68-e72b-4aed-b5dc-d7d9703178a3/snapshot-controller/3.log"
Mar 19 12:06:17.310911 master-0 kubenswrapper[7454]: I0319 12:06:17.310870 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-6m654_944eac68-e72b-4aed-b5dc-d7d9703178a3/snapshot-controller/2.log"
Mar 19 12:06:17.310986 master-0 kubenswrapper[7454]: I0319 12:06:17.310953 7454 generic.go:334] "Generic (PLEG): container finished" podID="944eac68-e72b-4aed-b5dc-d7d9703178a3" containerID="b5bafb20143fb32b9632c35086c90eeefa7dd77931f84a79471ae4dc2e4a6a71" exitCode=1
Mar 19 12:06:17.311073 master-0 kubenswrapper[7454]: I0319 12:06:17.311021 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" event={"ID":"944eac68-e72b-4aed-b5dc-d7d9703178a3","Type":"ContainerDied","Data":"b5bafb20143fb32b9632c35086c90eeefa7dd77931f84a79471ae4dc2e4a6a71"}
Mar 19 12:06:17.311141 master-0 kubenswrapper[7454]: I0319 12:06:17.311117 7454 scope.go:117] "RemoveContainer" containerID="e81d0406e1f1789da991b6d3be3c0b8c07d3b9704f0b264dbaa399283ae48d6c"
Mar 19 12:06:17.312563 master-0 kubenswrapper[7454]: I0319 12:06:17.312520 7454 scope.go:117] "RemoveContainer" containerID="b5bafb20143fb32b9632c35086c90eeefa7dd77931f84a79471ae4dc2e4a6a71"
Mar 19 12:06:17.314962 master-0 kubenswrapper[7454]: I0319 12:06:17.314909 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ed7034eee202d25f8fdd5bf58084d919/cluster-policy-controller/1.log"
Mar 19 12:06:17.316453 master-0 kubenswrapper[7454]: E0319 12:06:17.316342 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-6m654_openshift-cluster-storage-operator(944eac68-e72b-4aed-b5dc-d7d9703178a3)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" podUID="944eac68-e72b-4aed-b5dc-d7d9703178a3"
Mar 19 12:06:17.317290 master-0 kubenswrapper[7454]: I0319 12:06:17.317240 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ed7034eee202d25f8fdd5bf58084d919/kube-controller-manager/0.log"
Mar 19 12:06:17.317371 master-0 kubenswrapper[7454]: I0319 12:06:17.317326 7454 generic.go:334] "Generic (PLEG): container finished" podID="ed7034eee202d25f8fdd5bf58084d919" containerID="190a2ede2af79ab256016ad5364d037b5e12b69b5a7a2227b7287826e6597c14" exitCode=1
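The "back-off 40s" in the snapshot-controller error above is the kubelet's standard crash-loop schedule: the restart delay starts at 10s and doubles with each failed restart, capped at 5m (upstream kubelet defaults, not values read from this cluster). A sketch of the arithmetic:

    // crashloop_backoff.go - reproduce the kubelet's CrashLoopBackOff delays
    // under the upstream defaults (10s base, doubling, 5m cap).
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const (
            initial  = 10 * time.Second // kubelet default base delay
            maxDelay = 5 * time.Minute  // kubelet default cap
        )
        d := initial
        for restart := 1; restart <= 8; restart++ {
            fmt.Printf("restart %d: back-off %v\n", restart, d)
            d *= 2
            if d > maxDelay {
                d = maxDelay
            }
        }
    }

Restarts 2 and 3 print "back-off 20s" and "back-off 40s", matching the controller-manager and snapshot-controller messages in this log.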
containerID="190a2ede2af79ab256016ad5364d037b5e12b69b5a7a2227b7287826e6597c14" exitCode=1 Mar 19 12:06:17.317416 master-0 kubenswrapper[7454]: I0319 12:06:17.317370 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"ed7034eee202d25f8fdd5bf58084d919","Type":"ContainerDied","Data":"190a2ede2af79ab256016ad5364d037b5e12b69b5a7a2227b7287826e6597c14"} Mar 19 12:06:17.319649 master-0 kubenswrapper[7454]: I0319 12:06:17.319612 7454 scope.go:117] "RemoveContainer" containerID="190a2ede2af79ab256016ad5364d037b5e12b69b5a7a2227b7287826e6597c14" Mar 19 12:06:17.659130 master-0 kubenswrapper[7454]: I0319 12:06:17.659014 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:17.659130 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:17.659130 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:17.659130 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:17.659130 master-0 kubenswrapper[7454]: I0319 12:06:17.659106 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:18.347257 master-0 kubenswrapper[7454]: I0319 12:06:18.347181 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-6m654_944eac68-e72b-4aed-b5dc-d7d9703178a3/snapshot-controller/3.log" Mar 19 12:06:18.352592 master-0 kubenswrapper[7454]: I0319 12:06:18.352524 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ed7034eee202d25f8fdd5bf58084d919/cluster-policy-controller/1.log" Mar 19 12:06:18.355123 master-0 kubenswrapper[7454]: I0319 12:06:18.355061 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ed7034eee202d25f8fdd5bf58084d919/kube-controller-manager/0.log" Mar 19 12:06:18.355260 master-0 kubenswrapper[7454]: I0319 12:06:18.355154 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"ed7034eee202d25f8fdd5bf58084d919","Type":"ContainerStarted","Data":"8088add442d8a84ce49177d60c8f88d3eb643fdd316c8a11da9030fc8e5dfb04"} Mar 19 12:06:18.660473 master-0 kubenswrapper[7454]: I0319 12:06:18.660285 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:18.660473 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:18.660473 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:18.660473 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:18.660473 master-0 kubenswrapper[7454]: I0319 12:06:18.660389 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Mar 19 12:06:19.619977 master-0 kubenswrapper[7454]: I0319 12:06:19.619931 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:06:19.634526 master-0 kubenswrapper[7454]: I0319 12:06:19.634463 7454 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 12:06:19.634768 master-0 kubenswrapper[7454]: I0319 12:06:19.634526 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 12:06:19.634768 master-0 kubenswrapper[7454]: I0319 12:06:19.634573 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:06:19.635073 master-0 kubenswrapper[7454]: I0319 12:06:19.635026 7454 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"f770bc0056756cc6ba5e1f2815e45c32893439cd6d38c9442f87e1b6e4fefb5a"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 19 12:06:19.635161 master-0 kubenswrapper[7454]: I0319 12:06:19.635146 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="cluster-policy-controller" containerID="cri-o://f770bc0056756cc6ba5e1f2815e45c32893439cd6d38c9442f87e1b6e4fefb5a" gracePeriod=30 Mar 19 12:06:19.660148 master-0 kubenswrapper[7454]: I0319 12:06:19.660044 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:19.660148 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:19.660148 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:19.660148 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:19.660148 master-0 kubenswrapper[7454]: I0319 12:06:19.660132 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:20.659564 master-0 kubenswrapper[7454]: I0319 12:06:20.659484 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:20.659564 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:20.659564 
Mar 19 12:06:21.658841 master-0 kubenswrapper[7454]: I0319 12:06:21.658754 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:06:21.658841 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:06:21.658841 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:06:21.658841 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:06:21.658841 master-0 kubenswrapper[7454]: I0319 12:06:21.658825 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:06:22.660179 master-0 kubenswrapper[7454]: I0319 12:06:22.660094 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:06:22.660179 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:06:22.660179 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:06:22.660179 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:06:22.661175 master-0 kubenswrapper[7454]: I0319 12:06:22.660183 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:06:22.681086 master-0 kubenswrapper[7454]: E0319 12:06:22.680907 7454 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{network-node-identity-wd4nx.189e3c1f220fcc58 openshift-network-node-identity 9495 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-network-node-identity,Name:network-node-identity-wd4nx,UID:8414b6b0-ee16-47a5-982b-ee58b136cfcf,APIVersion:v1,ResourceVersion:3425,FieldPath:spec.containers{approver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:56:47 +0000 UTC,LastTimestamp:2026-03-19 12:03:46.63639625 +0000 UTC m=+596.266862203,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 19 12:06:23.659543 master-0 kubenswrapper[7454]: I0319 12:06:23.659443 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress:
Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:06:23.659543 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:06:23.659543 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:06:23.659543 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:06:23.659938 master-0 kubenswrapper[7454]: I0319 12:06:23.659589 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:06:24.658581 master-0 kubenswrapper[7454]: I0319 12:06:24.658516 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:06:24.658581 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:06:24.658581 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:06:24.658581 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:06:24.659707 master-0 kubenswrapper[7454]: I0319 12:06:24.658591 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:06:25.659886 master-0 kubenswrapper[7454]: I0319 12:06:25.659805 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:06:25.659886 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:06:25.659886 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:06:25.659886 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:06:25.659886 master-0 kubenswrapper[7454]: I0319 12:06:25.659873 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:06:26.647565 master-0 kubenswrapper[7454]: I0319 12:06:26.647509 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 19 12:06:26.648004 master-0 kubenswrapper[7454]: I0319 12:06:26.647967 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 19 12:06:26.648393 master-0 kubenswrapper[7454]: I0319 12:06:26.648357 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 19 12:06:26.660539 master-0 kubenswrapper[7454]: I0319 12:06:26.660465 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:06:26.660539 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:06:26.660539 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:06:26.660539 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:06:26.661608 master-0 kubenswrapper[7454]: I0319 12:06:26.660546 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:06:27.175670 master-0 kubenswrapper[7454]: E0319 12:06:27.175597 7454 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T12:06:17Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T12:06:17Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T12:06:17Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T12:06:17Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 19 12:06:27.660966 master-0 kubenswrapper[7454]: I0319 12:06:27.660866 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:06:27.660966 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:06:27.660966 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:06:27.660966 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:06:27.660966 master-0 kubenswrapper[7454]: I0319 12:06:27.660963 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:06:28.659985 master-0 kubenswrapper[7454]: I0319 12:06:28.659924 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:06:28.659985 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:06:28.659985 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:06:28.659985 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:06:28.660332 master-0 kubenswrapper[7454]: I0319 12:06:28.660007 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:06:29.634063 master-0 kubenswrapper[7454]: I0319 12:06:29.633965 7454 scope.go:117] "RemoveContainer" containerID="b5bafb20143fb32b9632c35086c90eeefa7dd77931f84a79471ae4dc2e4a6a71"
Mar 19 12:06:29.634849 master-0 kubenswrapper[7454]: E0319 12:06:29.634313 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-6m654_openshift-cluster-storage-operator(944eac68-e72b-4aed-b5dc-d7d9703178a3)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" podUID="944eac68-e72b-4aed-b5dc-d7d9703178a3"
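The node-status failure at 12:06:27 above, and its repeats at 12:06:37, 12:06:47, and 12:06:57 below, are individual attempts dying at the PATCH's own ?timeout=10s deadline; the kubelet retries a bounded number of times per sync (nodeStatusUpdateRetry is 5 upstream) before waiting for the next interval. A sketch of that pattern, with a placeholder payload and the endpoint taken from the log (not the kubelet's actual client code, which also authenticates and uses the client-go machinery):

    // node_status_retry.go - PATCH with a per-attempt deadline and bounded retries.
    package main

    import (
        "bytes"
        "context"
        "fmt"
        "net/http"
        "time"
    )

    func patchOnce(url string, body []byte) error {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        req, err := http.NewRequestWithContext(ctx, http.MethodPatch, url, bytes.NewReader(body))
        if err != nil {
            return err
        }
        req.Header.Set("Content-Type", "application/strategic-merge-patch+json")
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            return err // e.g. context deadline exceeded, as in the log
        }
        resp.Body.Close()
        return nil
    }

    func main() {
        patch := []byte(`{"status":{"conditions":[]}}`) // placeholder, not the real payload
        url := "https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s"
        for attempt := 1; attempt <= 5; attempt++ {
            if err := patchOnce(url, patch); err != nil {
                fmt.Printf("attempt %d: %v (will retry)\n", attempt, err)
                continue
            }
            fmt.Println("node status updated")
            return
        }
    }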
\"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-6m654_openshift-cluster-storage-operator(944eac68-e72b-4aed-b5dc-d7d9703178a3)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" podUID="944eac68-e72b-4aed-b5dc-d7d9703178a3" Mar 19 12:06:29.663956 master-0 kubenswrapper[7454]: I0319 12:06:29.663773 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:29.663956 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:29.663956 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:29.663956 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:29.664542 master-0 kubenswrapper[7454]: I0319 12:06:29.663951 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:30.659592 master-0 kubenswrapper[7454]: I0319 12:06:30.659525 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:30.659592 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:30.659592 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:30.659592 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:30.659592 master-0 kubenswrapper[7454]: I0319 12:06:30.659591 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:31.660030 master-0 kubenswrapper[7454]: I0319 12:06:31.659959 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:31.660030 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:31.660030 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:31.660030 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:31.660748 master-0 kubenswrapper[7454]: I0319 12:06:31.660067 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:32.660373 master-0 kubenswrapper[7454]: I0319 12:06:32.660230 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:32.660373 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:32.660373 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 
12:06:32.660373 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:32.661639 master-0 kubenswrapper[7454]: I0319 12:06:32.660371 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:33.660530 master-0 kubenswrapper[7454]: I0319 12:06:33.660426 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:33.660530 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:33.660530 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:33.660530 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:33.660530 master-0 kubenswrapper[7454]: I0319 12:06:33.660514 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:34.660294 master-0 kubenswrapper[7454]: I0319 12:06:34.660182 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:34.660294 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:34.660294 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:34.660294 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:34.660294 master-0 kubenswrapper[7454]: I0319 12:06:34.660284 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:35.659453 master-0 kubenswrapper[7454]: I0319 12:06:35.659361 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:35.659453 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:35.659453 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:35.659453 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:35.659973 master-0 kubenswrapper[7454]: I0319 12:06:35.659458 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:36.643299 master-0 kubenswrapper[7454]: I0319 12:06:36.643223 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:06:36.659142 master-0 kubenswrapper[7454]: I0319 12:06:36.659057 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:36.659142 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:36.659142 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:36.659142 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:36.659403 master-0 kubenswrapper[7454]: I0319 12:06:36.659184 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:37.177591 master-0 kubenswrapper[7454]: E0319 12:06:37.177513 7454 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": the server was unable to return a response in the time allotted, but may still be processing the request (get nodes master-0)" Mar 19 12:06:37.660032 master-0 kubenswrapper[7454]: I0319 12:06:37.659956 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:37.660032 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:37.660032 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:37.660032 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:37.660774 master-0 kubenswrapper[7454]: I0319 12:06:37.660054 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:38.659773 master-0 kubenswrapper[7454]: I0319 12:06:38.659687 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:38.659773 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:38.659773 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:38.659773 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:38.661154 master-0 kubenswrapper[7454]: I0319 12:06:38.659782 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:39.660691 master-0 kubenswrapper[7454]: I0319 12:06:39.660604 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:39.660691 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:39.660691 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:39.660691 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:39.661712 master-0 kubenswrapper[7454]: I0319 12:06:39.660709 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Mar 19 12:06:40.642696 master-0 kubenswrapper[7454]: I0319 12:06:40.642637 7454 scope.go:117] "RemoveContainer" containerID="b5bafb20143fb32b9632c35086c90eeefa7dd77931f84a79471ae4dc2e4a6a71" Mar 19 12:06:40.643461 master-0 kubenswrapper[7454]: E0319 12:06:40.643423 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-6m654_openshift-cluster-storage-operator(944eac68-e72b-4aed-b5dc-d7d9703178a3)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" podUID="944eac68-e72b-4aed-b5dc-d7d9703178a3" Mar 19 12:06:40.659751 master-0 kubenswrapper[7454]: I0319 12:06:40.659674 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:40.659751 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:40.659751 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:40.659751 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:40.660014 master-0 kubenswrapper[7454]: I0319 12:06:40.659831 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:41.659951 master-0 kubenswrapper[7454]: I0319 12:06:41.659890 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:41.659951 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:41.659951 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:41.659951 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:41.661047 master-0 kubenswrapper[7454]: I0319 12:06:41.660994 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:42.660162 master-0 kubenswrapper[7454]: I0319 12:06:42.660058 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:42.660162 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:42.660162 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:42.660162 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:42.660162 master-0 kubenswrapper[7454]: I0319 12:06:42.660137 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:43.660117 master-0 kubenswrapper[7454]: I0319 12:06:43.660000 7454 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:43.660117 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:43.660117 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:43.660117 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:43.660117 master-0 kubenswrapper[7454]: I0319 12:06:43.660081 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:44.659668 master-0 kubenswrapper[7454]: I0319 12:06:44.659581 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:44.659668 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:44.659668 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:44.659668 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:44.660252 master-0 kubenswrapper[7454]: I0319 12:06:44.659676 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:45.658935 master-0 kubenswrapper[7454]: I0319 12:06:45.658898 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:45.658935 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:45.658935 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:45.658935 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:45.659545 master-0 kubenswrapper[7454]: I0319 12:06:45.659523 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:46.658893 master-0 kubenswrapper[7454]: I0319 12:06:46.658847 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:46.658893 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:46.658893 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:46.658893 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:46.659652 master-0 kubenswrapper[7454]: I0319 12:06:46.659619 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:47.177847 master-0 kubenswrapper[7454]: E0319 12:06:47.177747 7454 
kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded" Mar 19 12:06:47.659399 master-0 kubenswrapper[7454]: I0319 12:06:47.659327 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:47.659399 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:47.659399 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:47.659399 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:47.659399 master-0 kubenswrapper[7454]: I0319 12:06:47.659386 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:48.660031 master-0 kubenswrapper[7454]: I0319 12:06:48.659973 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:48.660031 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:48.660031 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:48.660031 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:48.660496 master-0 kubenswrapper[7454]: I0319 12:06:48.660044 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:49.603951 master-0 kubenswrapper[7454]: I0319 12:06:49.603903 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ed7034eee202d25f8fdd5bf58084d919/cluster-policy-controller/2.log" Mar 19 12:06:49.604985 master-0 kubenswrapper[7454]: I0319 12:06:49.604943 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ed7034eee202d25f8fdd5bf58084d919/cluster-policy-controller/1.log" Mar 19 12:06:49.607500 master-0 kubenswrapper[7454]: I0319 12:06:49.607439 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ed7034eee202d25f8fdd5bf58084d919/kube-controller-manager/0.log" Mar 19 12:06:49.607644 master-0 kubenswrapper[7454]: I0319 12:06:49.607558 7454 generic.go:334] "Generic (PLEG): container finished" podID="ed7034eee202d25f8fdd5bf58084d919" containerID="f770bc0056756cc6ba5e1f2815e45c32893439cd6d38c9442f87e1b6e4fefb5a" exitCode=255 Mar 19 12:06:49.608010 master-0 kubenswrapper[7454]: I0319 12:06:49.607948 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"ed7034eee202d25f8fdd5bf58084d919","Type":"ContainerDied","Data":"f770bc0056756cc6ba5e1f2815e45c32893439cd6d38c9442f87e1b6e4fefb5a"} Mar 19 12:06:49.608144 master-0 kubenswrapper[7454]: I0319 12:06:49.608024 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"ed7034eee202d25f8fdd5bf58084d919","Type":"ContainerStarted","Data":"9397faf4795d0da7838a4c1f3a5b85201054d72615494c0ba8368d62268a9114"} Mar 19 12:06:49.608144 master-0 kubenswrapper[7454]: I0319 12:06:49.608057 7454 scope.go:117] "RemoveContainer" containerID="5d261740d47e7306918cefef333039548b8250950612585ba90f860cca83b5a2" Mar 19 12:06:49.613679 master-0 kubenswrapper[7454]: I0319 12:06:49.613638 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-qv4cg_1089ea24-add9-482e-9276-e6ded12052d7/kube-apiserver-operator/1.log" Mar 19 12:06:49.614572 master-0 kubenswrapper[7454]: I0319 12:06:49.614480 7454 generic.go:334] "Generic (PLEG): container finished" podID="1089ea24-add9-482e-9276-e6ded12052d7" containerID="9d4f9e0f3811159c5b4172ecd015dfd36c71001f3a7087b4596cd25f8695fe99" exitCode=1 Mar 19 12:06:49.614797 master-0 kubenswrapper[7454]: I0319 12:06:49.614618 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg" event={"ID":"1089ea24-add9-482e-9276-e6ded12052d7","Type":"ContainerDied","Data":"9d4f9e0f3811159c5b4172ecd015dfd36c71001f3a7087b4596cd25f8695fe99"} Mar 19 12:06:49.615428 master-0 kubenswrapper[7454]: I0319 12:06:49.615384 7454 scope.go:117] "RemoveContainer" containerID="9d4f9e0f3811159c5b4172ecd015dfd36c71001f3a7087b4596cd25f8695fe99" Mar 19 12:06:49.624882 master-0 kubenswrapper[7454]: I0319 12:06:49.622309 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-7cdddc6cb-q222c_3a6b082a-649b-43f6-8e24-cf222873fe39/controller-manager/2.log" Mar 19 12:06:49.624882 master-0 kubenswrapper[7454]: I0319 12:06:49.623898 7454 generic.go:334] "Generic (PLEG): container finished" podID="3a6b082a-649b-43f6-8e24-cf222873fe39" containerID="09044fa3844b85995a99e406cd860f5e46e3c822f972902a1ed997f5be96ef8c" exitCode=255 Mar 19 12:06:49.624882 master-0 kubenswrapper[7454]: I0319 12:06:49.623948 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" event={"ID":"3a6b082a-649b-43f6-8e24-cf222873fe39","Type":"ContainerDied","Data":"09044fa3844b85995a99e406cd860f5e46e3c822f972902a1ed997f5be96ef8c"} Mar 19 12:06:49.624882 master-0 kubenswrapper[7454]: I0319 12:06:49.624472 7454 scope.go:117] "RemoveContainer" containerID="09044fa3844b85995a99e406cd860f5e46e3c822f972902a1ed997f5be96ef8c" Mar 19 12:06:49.624882 master-0 kubenswrapper[7454]: E0319 12:06:49.624775 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=controller-manager pod=controller-manager-7cdddc6cb-q222c_openshift-controller-manager(3a6b082a-649b-43f6-8e24-cf222873fe39)\"" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" podUID="3a6b082a-649b-43f6-8e24-cf222873fe39" Mar 19 12:06:49.659485 master-0 kubenswrapper[7454]: I0319 12:06:49.659417 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:49.659485 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:49.659485 master-0 
Mar 19 12:06:49.661382 master-0 kubenswrapper[7454]: I0319 12:06:49.661279 7454 scope.go:117] "RemoveContainer" containerID="a04e94059c93f3fb95feb69e0b122c65aebac1f390cdd0cf514b18a508325ef8"
Mar 19 12:06:49.715585 master-0 kubenswrapper[7454]: I0319 12:06:49.715542 7454 scope.go:117] "RemoveContainer" containerID="9525efea18e9168adb2e8691fffa21e20effeae4cf60811da09efa9acd76b65f"
Mar 19 12:06:50.644847 master-0 kubenswrapper[7454]: I0319 12:06:50.644194 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ed7034eee202d25f8fdd5bf58084d919/cluster-policy-controller/2.log"
Mar 19 12:06:50.646705 master-0 kubenswrapper[7454]: I0319 12:06:50.646660 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ed7034eee202d25f8fdd5bf58084d919/kube-controller-manager/0.log"
Mar 19 12:06:50.649429 master-0 kubenswrapper[7454]: I0319 12:06:50.649393 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-qv4cg_1089ea24-add9-482e-9276-e6ded12052d7/kube-apiserver-operator/1.log"
Mar 19 12:06:50.649554 master-0 kubenswrapper[7454]: I0319 12:06:50.649500 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg" event={"ID":"1089ea24-add9-482e-9276-e6ded12052d7","Type":"ContainerStarted","Data":"7b70d5a46fbdbc272ee13227763b5a028d2f93b2e62fbbeaef054faab0e08e37"}
Mar 19 12:06:50.652070 master-0 kubenswrapper[7454]: I0319 12:06:50.651781 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-7cdddc6cb-q222c_3a6b082a-649b-43f6-8e24-cf222873fe39/controller-manager/2.log"
Mar 19 12:06:50.659896 master-0 kubenswrapper[7454]: I0319 12:06:50.659783 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:06:50.659896 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:06:50.659896 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:06:50.659896 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:06:50.659896 master-0 kubenswrapper[7454]: I0319 12:06:50.659866 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:06:51.660621 master-0 kubenswrapper[7454]: I0319 12:06:51.660550 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:06:51.660621 master-0 kubenswrapper[7454]:
[-]has-synced failed: reason withheld Mar 19 12:06:51.660621 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:51.660621 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:51.661436 master-0 kubenswrapper[7454]: I0319 12:06:51.661399 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:52.660208 master-0 kubenswrapper[7454]: I0319 12:06:52.660150 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:52.660208 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:52.660208 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:52.660208 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:52.661531 master-0 kubenswrapper[7454]: I0319 12:06:52.661480 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:53.659858 master-0 kubenswrapper[7454]: I0319 12:06:53.659706 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:53.659858 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:53.659858 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:53.659858 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:53.659858 master-0 kubenswrapper[7454]: I0319 12:06:53.659827 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:54.659387 master-0 kubenswrapper[7454]: I0319 12:06:54.659292 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:54.659387 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:54.659387 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:54.659387 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:54.659387 master-0 kubenswrapper[7454]: I0319 12:06:54.659378 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:55.248140 master-0 kubenswrapper[7454]: I0319 12:06:55.248071 7454 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 12:06:55.248571 master-0 kubenswrapper[7454]: I0319 12:06:55.248538 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 12:06:55.250129 master-0 kubenswrapper[7454]: I0319 12:06:55.250048 7454 scope.go:117] "RemoveContainer" containerID="09044fa3844b85995a99e406cd860f5e46e3c822f972902a1ed997f5be96ef8c" Mar 19 12:06:55.250592 master-0 kubenswrapper[7454]: E0319 12:06:55.250517 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=controller-manager pod=controller-manager-7cdddc6cb-q222c_openshift-controller-manager(3a6b082a-649b-43f6-8e24-cf222873fe39)\"" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" podUID="3a6b082a-649b-43f6-8e24-cf222873fe39" Mar 19 12:06:55.634117 master-0 kubenswrapper[7454]: I0319 12:06:55.633981 7454 scope.go:117] "RemoveContainer" containerID="b5bafb20143fb32b9632c35086c90eeefa7dd77931f84a79471ae4dc2e4a6a71" Mar 19 12:06:55.634733 master-0 kubenswrapper[7454]: E0319 12:06:55.634697 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-6m654_openshift-cluster-storage-operator(944eac68-e72b-4aed-b5dc-d7d9703178a3)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" podUID="944eac68-e72b-4aed-b5dc-d7d9703178a3" Mar 19 12:06:55.659966 master-0 kubenswrapper[7454]: I0319 12:06:55.659877 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:55.659966 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:55.659966 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:55.659966 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:55.661125 master-0 kubenswrapper[7454]: I0319 12:06:55.659970 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:55.688479 master-0 kubenswrapper[7454]: I0319 12:06:55.688425 7454 scope.go:117] "RemoveContainer" containerID="09044fa3844b85995a99e406cd860f5e46e3c822f972902a1ed997f5be96ef8c" Mar 19 12:06:55.688787 master-0 kubenswrapper[7454]: E0319 12:06:55.688628 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=controller-manager pod=controller-manager-7cdddc6cb-q222c_openshift-controller-manager(3a6b082a-649b-43f6-8e24-cf222873fe39)\"" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" podUID="3a6b082a-649b-43f6-8e24-cf222873fe39" Mar 19 12:06:56.648678 master-0 kubenswrapper[7454]: I0319 12:06:56.648607 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:06:56.648678 master-0 kubenswrapper[7454]: I0319 12:06:56.648670 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:06:56.661279 master-0 
kubenswrapper[7454]: I0319 12:06:56.661156 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:56.661279 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:56.661279 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:56.661279 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:56.661279 master-0 kubenswrapper[7454]: I0319 12:06:56.661253 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:56.693269 master-0 kubenswrapper[7454]: E0319 12:06:56.684271 7454 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{network-node-identity-wd4nx.189e3c1f5f4b5b3a openshift-network-node-identity 9554 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-network-node-identity,Name:network-node-identity-wd4nx,UID:8414b6b0-ee16-47a5-982b-ee58b136cfcf,APIVersion:v1,ResourceVersion:3425,FieldPath:spec.containers{approver},},Reason:Created,Message:Created container: approver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:56:48 +0000 UTC,LastTimestamp:2026-03-19 12:03:46.754075239 +0000 UTC m=+596.384541152,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 12:06:57.178747 master-0 kubenswrapper[7454]: E0319 12:06:57.178610 7454 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 12:06:57.660056 master-0 kubenswrapper[7454]: I0319 12:06:57.659978 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:57.660056 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:57.660056 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:57.660056 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:57.660568 master-0 kubenswrapper[7454]: I0319 12:06:57.660114 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:58.660704 master-0 kubenswrapper[7454]: I0319 12:06:58.660619 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:58.660704 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:58.660704 master-0 kubenswrapper[7454]: 
[+]process-running ok Mar 19 12:06:58.660704 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:58.661657 master-0 kubenswrapper[7454]: I0319 12:06:58.660712 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:06:59.635063 master-0 kubenswrapper[7454]: I0319 12:06:59.634947 7454 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 12:06:59.635063 master-0 kubenswrapper[7454]: I0319 12:06:59.635027 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 12:06:59.659911 master-0 kubenswrapper[7454]: I0319 12:06:59.659745 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:06:59.659911 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:06:59.659911 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:06:59.659911 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:06:59.660273 master-0 kubenswrapper[7454]: I0319 12:06:59.659915 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:00.661689 master-0 kubenswrapper[7454]: I0319 12:07:00.661609 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:07:00.661689 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:07:00.661689 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:07:00.661689 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:07:00.663017 master-0 kubenswrapper[7454]: I0319 12:07:00.661744 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:01.659048 master-0 kubenswrapper[7454]: I0319 12:07:01.658988 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:07:01.659048 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:07:01.659048 master-0 
kubenswrapper[7454]: [+]process-running ok Mar 19 12:07:01.659048 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:07:01.659384 master-0 kubenswrapper[7454]: I0319 12:07:01.659055 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:02.660507 master-0 kubenswrapper[7454]: I0319 12:07:02.660422 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:07:02.660507 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:07:02.660507 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:07:02.660507 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:07:02.661733 master-0 kubenswrapper[7454]: I0319 12:07:02.660515 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:03.659941 master-0 kubenswrapper[7454]: I0319 12:07:03.659843 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:07:03.659941 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:07:03.659941 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:07:03.659941 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:07:03.659941 master-0 kubenswrapper[7454]: I0319 12:07:03.659935 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:04.660553 master-0 kubenswrapper[7454]: I0319 12:07:04.660460 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:07:04.660553 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:07:04.660553 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:07:04.660553 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:07:04.660553 master-0 kubenswrapper[7454]: I0319 12:07:04.660549 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:05.660156 master-0 kubenswrapper[7454]: I0319 12:07:05.660074 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:07:05.660156 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:07:05.660156 
master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:07:05.660156 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:07:05.660686 master-0 kubenswrapper[7454]: I0319 12:07:05.660166 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:06.660383 master-0 kubenswrapper[7454]: I0319 12:07:06.660284 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:07:06.660383 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:07:06.660383 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:07:06.660383 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:07:06.661547 master-0 kubenswrapper[7454]: I0319 12:07:06.660388 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:07.660365 master-0 kubenswrapper[7454]: I0319 12:07:07.660287 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:07:07.660365 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:07:07.660365 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:07:07.660365 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:07:07.661715 master-0 kubenswrapper[7454]: I0319 12:07:07.660391 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:08.633695 master-0 kubenswrapper[7454]: I0319 12:07:08.633620 7454 scope.go:117] "RemoveContainer" containerID="b5bafb20143fb32b9632c35086c90eeefa7dd77931f84a79471ae4dc2e4a6a71" Mar 19 12:07:08.659365 master-0 kubenswrapper[7454]: I0319 12:07:08.659254 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:07:08.659365 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:07:08.659365 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:07:08.659365 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:07:08.659365 master-0 kubenswrapper[7454]: I0319 12:07:08.659336 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:09.634165 master-0 kubenswrapper[7454]: I0319 12:07:09.634060 7454 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup 
probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 12:07:09.635199 master-0 kubenswrapper[7454]: I0319 12:07:09.634188 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 12:07:09.658644 master-0 kubenswrapper[7454]: I0319 12:07:09.658571 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:07:09.658644 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:07:09.658644 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:07:09.658644 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:07:09.658909 master-0 kubenswrapper[7454]: I0319 12:07:09.658648 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:09.817625 master-0 kubenswrapper[7454]: I0319 12:07:09.817558 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-6m654_944eac68-e72b-4aed-b5dc-d7d9703178a3/snapshot-controller/3.log" Mar 19 12:07:09.817935 master-0 kubenswrapper[7454]: I0319 12:07:09.817723 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" event={"ID":"944eac68-e72b-4aed-b5dc-d7d9703178a3","Type":"ContainerStarted","Data":"c5a947116c6a12f89fdc1149ba43ce5607536b57e0862f6ca233d92f281d6f5b"} Mar 19 12:07:10.638883 master-0 kubenswrapper[7454]: I0319 12:07:10.638621 7454 scope.go:117] "RemoveContainer" containerID="09044fa3844b85995a99e406cd860f5e46e3c822f972902a1ed997f5be96ef8c" Mar 19 12:07:10.659576 master-0 kubenswrapper[7454]: I0319 12:07:10.659448 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:07:10.659576 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:07:10.659576 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:07:10.659576 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:07:10.659576 master-0 kubenswrapper[7454]: I0319 12:07:10.659552 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:10.824848 master-0 kubenswrapper[7454]: I0319 12:07:10.824773 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-7cdddc6cb-q222c_3a6b082a-649b-43f6-8e24-cf222873fe39/controller-manager/2.log" Mar 19 
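The router probe output repeated throughout these entries ([-]backend-http, [-]has-synced, [+]process-running, then "healthz check failed" with a 500) is the aggregated health-endpoint format: one line per sub-check, failure reasons withheld from the probe response, and a non-200 status until every check passes. A minimal sketch of a handler in that style, modeled on the k8s.io healthz output format rather than the router's actual implementation:

```go
package main

import (
	"fmt"
	"net/http"
)

// Minimal sketch of the aggregated health endpoint format seen in the
// router probe output: one [+]/[-] line per sub-check, failure reasons
// withheld, and HTTP 500 until every check passes. Check names and the
// wiring are illustrative, not the router's real code.
type check struct {
	name string
	run  func() error
}

func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		failed := false
		body := ""
		for _, c := range checks {
			if err := c.run(); err != nil {
				failed = true
				// Matches the "reason withheld" behavior: details go to
				// the component's own log, not to the probe response.
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError) // the probe's 500
			body += "healthz check failed\n"
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	// backend-http and has-synced fail here, as in the log excerpt.
	http.Handle("/healthz", healthz([]check{
		{"backend-http", func() error { return fmt.Errorf("no backends") }},
		{"has-synced", func() error { return fmt.Errorf("not synced") }},
		{"process-running", func() error { return nil }},
	}))
	http.ListenAndServe(":8080", nil)
}
```

Withholding the reason keeps failure details out of an unauthenticated probe response; only the pass/fail shape of each sub-check is exposed to the kubelet.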
Mar 19 12:07:10.824848 master-0 kubenswrapper[7454]: I0319 12:07:10.824846 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" event={"ID":"3a6b082a-649b-43f6-8e24-cf222873fe39","Type":"ContainerStarted","Data":"dc774cb792a9ef5e2c8edc274dec5d1dc05b08edfdb8c435ffa6ab475b3fa134"}
Mar 19 12:07:10.826083 master-0 kubenswrapper[7454]: I0319 12:07:10.826008 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c"
Mar 19 12:07:10.828363 master-0 kubenswrapper[7454]: I0319 12:07:10.828246 7454 patch_prober.go:28] interesting pod/controller-manager-7cdddc6cb-q222c container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.55:8443/healthz\": dial tcp 10.128.0.55:8443: connect: connection refused" start-of-body=
Mar 19 12:07:10.828634 master-0 kubenswrapper[7454]: I0319 12:07:10.828360 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" podUID="3a6b082a-649b-43f6-8e24-cf222873fe39" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.55:8443/healthz\": dial tcp 10.128.0.55:8443: connect: connection refused"
Mar 19 12:07:11.660105 master-0 kubenswrapper[7454]: I0319 12:07:11.660019 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:07:11.660105 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:07:11.660105 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:07:11.660105 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:07:11.661061 master-0 kubenswrapper[7454]: I0319 12:07:11.660130 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:07:11.843424 master-0 kubenswrapper[7454]: I0319 12:07:11.843310 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c"
Mar 19 12:07:12.660549 master-0 kubenswrapper[7454]: I0319 12:07:12.660463 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:07:12.660549 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:07:12.660549 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:07:12.660549 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:07:12.661664 master-0 kubenswrapper[7454]: I0319 12:07:12.660558 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:07:13.660411 master-0 kubenswrapper[7454]: I0319 12:07:13.660321 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:07:13.660411 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:07:13.660411 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:07:13.660411 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:07:13.660411 master-0 kubenswrapper[7454]: I0319 12:07:13.660402 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:14.659537 master-0 kubenswrapper[7454]: I0319 12:07:14.659454 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:07:14.659537 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:07:14.659537 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:07:14.659537 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:07:14.660354 master-0 kubenswrapper[7454]: I0319 12:07:14.659548 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:15.660846 master-0 kubenswrapper[7454]: I0319 12:07:15.660681 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:07:15.660846 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:07:15.660846 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:07:15.660846 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:07:15.660846 master-0 kubenswrapper[7454]: I0319 12:07:15.660791 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:16.660902 master-0 kubenswrapper[7454]: I0319 12:07:16.660091 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:07:16.660902 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:07:16.660902 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:07:16.660902 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:07:16.660902 master-0 kubenswrapper[7454]: I0319 12:07:16.660185 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:17.659340 master-0 kubenswrapper[7454]: I0319 12:07:17.659274 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:07:17.659340 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:07:17.659340 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:07:17.659340 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:07:17.659959 master-0 kubenswrapper[7454]: I0319 12:07:17.659916 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:18.660101 master-0 kubenswrapper[7454]: I0319 12:07:18.660033 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:07:18.660101 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:07:18.660101 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:07:18.660101 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:07:18.661512 master-0 kubenswrapper[7454]: I0319 12:07:18.661455 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:19.634219 master-0 kubenswrapper[7454]: I0319 12:07:19.634158 7454 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body= Mar 19 12:07:19.634521 master-0 kubenswrapper[7454]: I0319 12:07:19.634250 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded" Mar 19 12:07:19.634521 master-0 kubenswrapper[7454]: I0319 12:07:19.634316 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:07:19.635164 master-0 kubenswrapper[7454]: I0319 12:07:19.635111 7454 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"9397faf4795d0da7838a4c1f3a5b85201054d72615494c0ba8368d62268a9114"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 19 12:07:19.635311 master-0 kubenswrapper[7454]: I0319 12:07:19.635271 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="cluster-policy-controller" containerID="cri-o://9397faf4795d0da7838a4c1f3a5b85201054d72615494c0ba8368d62268a9114" gracePeriod=30 Mar 19 12:07:19.664346 master-0 kubenswrapper[7454]: I0319 12:07:19.664262 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl 
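At 12:07:19 the cluster-policy-controller container exhausts its startup probe, so the kubelet kills it with a 30s grace period and lets it restart ("failed startup probe, will be restarted"). That loop is driven by an ordinary startup-probe spec; a sketch of its shape using the k8s.io/api types, where scheme, host, port and path come from the probe URL in the log and the 10s period matches the failure spacing (12:06:59, 12:07:09, 12:07:19), while the timeout and failure threshold are illustrative placeholders rather than the operator's actual manifest:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// Illustrative shape of the startup probe behind "failed startup probe,
// will be restarted". Endpoint fields match the probe URL in the log;
// TimeoutSeconds and FailureThreshold are assumptions.
func clusterPolicyControllerStartupProbe() *corev1.Probe {
	return &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Scheme: corev1.URISchemeHTTPS,
				Host:   "localhost",
				Port:   intstr.FromInt(10357),
				Path:   "/healthz",
			},
		},
		PeriodSeconds:    10, // matches the 10s spacing of the logged failures
		TimeoutSeconds:   10, // placeholder; the log only shows the probe timing out
		FailureThreshold: 3,  // placeholder; once exhausted, the kubelet kills and restarts the container
	}
}

func main() {
	p := clusterPolicyControllerStartupProbe()
	fmt.Printf("GET %s://%s:%s%s every %ds (timeout %ds, threshold %d)\n",
		p.HTTPGet.Scheme, p.HTTPGet.Host, p.HTTPGet.Port.String(), p.HTTPGet.Path,
		p.PeriodSeconds, p.TimeoutSeconds, p.FailureThreshold)
}
```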
Mar 19 12:07:19.664346 master-0 kubenswrapper[7454]: I0319 12:07:19.664262 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:07:19.664346 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:07:19.664346 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:07:19.664346 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:07:19.665354 master-0 kubenswrapper[7454]: I0319 12:07:19.664420 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:07:20.217602 master-0 kubenswrapper[7454]: I0319 12:07:20.217484 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"]
Mar 19 12:07:20.660128 master-0 kubenswrapper[7454]: I0319 12:07:20.660070 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:07:20.660128 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:07:20.660128 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:07:20.660128 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:07:20.660128 master-0 kubenswrapper[7454]: I0319 12:07:20.660138 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:07:20.680229 master-0 kubenswrapper[7454]: I0319 12:07:20.680128 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=0.680103999 podStartE2EDuration="680.103999ms" podCreationTimestamp="2026-03-19 12:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:07:20.677445936 +0000 UTC m=+810.307911859" watchObservedRunningTime="2026-03-19 12:07:20.680103999 +0000 UTC m=+810.310569942"
Mar 19 12:07:21.659907 master-0 kubenswrapper[7454]: I0319 12:07:21.659846 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:07:21.659907 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:07:21.659907 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:07:21.659907 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:07:21.659907 master-0 kubenswrapper[7454]: I0319 12:07:21.659910 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:07:22.660596 master-0 kubenswrapper[7454]: I0319 12:07:22.660535 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500"
start-of-body=[-]backend-http failed: reason withheld Mar 19 12:07:22.660596 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:07:22.660596 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:07:22.660596 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:07:22.660596 master-0 kubenswrapper[7454]: I0319 12:07:22.660610 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:23.659790 master-0 kubenswrapper[7454]: I0319 12:07:23.659732 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:07:23.659790 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:07:23.659790 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:07:23.659790 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:07:23.660433 master-0 kubenswrapper[7454]: I0319 12:07:23.660391 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:24.660215 master-0 kubenswrapper[7454]: I0319 12:07:24.660143 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:07:24.660215 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:07:24.660215 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:07:24.660215 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:07:24.660953 master-0 kubenswrapper[7454]: I0319 12:07:24.660242 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:25.659432 master-0 kubenswrapper[7454]: I0319 12:07:25.659354 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:07:25.659432 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:07:25.659432 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:07:25.659432 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:07:25.659432 master-0 kubenswrapper[7454]: I0319 12:07:25.659427 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:26.659926 master-0 kubenswrapper[7454]: I0319 12:07:26.659863 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 
500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:07:26.659926 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:07:26.659926 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:07:26.659926 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:07:26.660883 master-0 kubenswrapper[7454]: I0319 12:07:26.659932 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:27.660161 master-0 kubenswrapper[7454]: I0319 12:07:27.660080 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:07:27.660161 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:07:27.660161 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:07:27.660161 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:07:27.661159 master-0 kubenswrapper[7454]: I0319 12:07:27.660180 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:28.660050 master-0 kubenswrapper[7454]: I0319 12:07:28.659992 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:07:28.660050 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:07:28.660050 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:07:28.660050 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:07:28.661250 master-0 kubenswrapper[7454]: I0319 12:07:28.661204 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:29.660772 master-0 kubenswrapper[7454]: I0319 12:07:29.660699 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:07:29.660772 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:07:29.660772 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:07:29.660772 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:07:29.661829 master-0 kubenswrapper[7454]: I0319 12:07:29.660791 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:30.659932 master-0 kubenswrapper[7454]: I0319 12:07:30.659838 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:07:30.659932 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:07:30.659932 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:07:30.659932 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:07:30.660517 master-0 kubenswrapper[7454]: I0319 12:07:30.659951 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:30.688667 master-0 kubenswrapper[7454]: E0319 12:07:30.688443 7454 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{network-node-identity-wd4nx.189e3c1f625cf14a openshift-network-node-identity 9560 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-network-node-identity,Name:network-node-identity-wd4nx,UID:8414b6b0-ee16-47a5-982b-ee58b136cfcf,APIVersion:v1,ResourceVersion:3425,FieldPath:spec.containers{approver},},Reason:Started,Message:Started container approver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 11:56:48 +0000 UTC,LastTimestamp:2026-03-19 12:03:46.764135946 +0000 UTC m=+596.394601859,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 12:07:31.660133 master-0 kubenswrapper[7454]: I0319 12:07:31.660027 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:07:31.660133 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:07:31.660133 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:07:31.660133 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:07:31.660708 master-0 kubenswrapper[7454]: I0319 12:07:31.660138 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:32.659374 master-0 kubenswrapper[7454]: I0319 12:07:32.659270 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:07:32.659374 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:07:32.659374 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:07:32.659374 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:07:32.659374 master-0 kubenswrapper[7454]: I0319 12:07:32.659328 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:33.005016 master-0 kubenswrapper[7454]: I0319 12:07:33.004960 7454 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-qv4cg_1089ea24-add9-482e-9276-e6ded12052d7/kube-apiserver-operator/2.log" Mar 19 12:07:33.006676 master-0 kubenswrapper[7454]: I0319 12:07:33.005613 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-qv4cg_1089ea24-add9-482e-9276-e6ded12052d7/kube-apiserver-operator/1.log" Mar 19 12:07:33.006676 master-0 kubenswrapper[7454]: I0319 12:07:33.005651 7454 generic.go:334] "Generic (PLEG): container finished" podID="1089ea24-add9-482e-9276-e6ded12052d7" containerID="7b70d5a46fbdbc272ee13227763b5a028d2f93b2e62fbbeaef054faab0e08e37" exitCode=255 Mar 19 12:07:33.006676 master-0 kubenswrapper[7454]: I0319 12:07:33.005707 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg" event={"ID":"1089ea24-add9-482e-9276-e6ded12052d7","Type":"ContainerDied","Data":"7b70d5a46fbdbc272ee13227763b5a028d2f93b2e62fbbeaef054faab0e08e37"} Mar 19 12:07:33.006676 master-0 kubenswrapper[7454]: I0319 12:07:33.005744 7454 scope.go:117] "RemoveContainer" containerID="9d4f9e0f3811159c5b4172ecd015dfd36c71001f3a7087b4596cd25f8695fe99" Mar 19 12:07:33.006676 master-0 kubenswrapper[7454]: I0319 12:07:33.006235 7454 scope.go:117] "RemoveContainer" containerID="7b70d5a46fbdbc272ee13227763b5a028d2f93b2e62fbbeaef054faab0e08e37" Mar 19 12:07:33.006676 master-0 kubenswrapper[7454]: E0319 12:07:33.006401 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-operator pod=kube-apiserver-operator-8b68b9d9b-qv4cg_openshift-kube-apiserver-operator(1089ea24-add9-482e-9276-e6ded12052d7)\"" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg" podUID="1089ea24-add9-482e-9276-e6ded12052d7" Mar 19 12:07:33.010680 master-0 kubenswrapper[7454]: I0319 12:07:33.010628 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ed7034eee202d25f8fdd5bf58084d919/cluster-policy-controller/3.log" Mar 19 12:07:33.011456 master-0 kubenswrapper[7454]: I0319 12:07:33.011402 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ed7034eee202d25f8fdd5bf58084d919/cluster-policy-controller/2.log" Mar 19 12:07:33.020848 master-0 kubenswrapper[7454]: I0319 12:07:33.013506 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ed7034eee202d25f8fdd5bf58084d919/kube-controller-manager/0.log" Mar 19 12:07:33.020848 master-0 kubenswrapper[7454]: I0319 12:07:33.018000 7454 generic.go:334] "Generic (PLEG): container finished" podID="ed7034eee202d25f8fdd5bf58084d919" containerID="9397faf4795d0da7838a4c1f3a5b85201054d72615494c0ba8368d62268a9114" exitCode=255 Mar 19 12:07:33.020848 master-0 kubenswrapper[7454]: I0319 12:07:33.018079 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"ed7034eee202d25f8fdd5bf58084d919","Type":"ContainerDied","Data":"9397faf4795d0da7838a4c1f3a5b85201054d72615494c0ba8368d62268a9114"} Mar 19 12:07:33.020848 master-0 kubenswrapper[7454]: I0319 12:07:33.018132 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"ed7034eee202d25f8fdd5bf58084d919","Type":"ContainerStarted","Data":"905b5c7c59d30b4b870a40d926e6ce6d9ad7f0bf509dc07ea760b5f841773a4f"} Mar 19 12:07:33.063426 master-0 kubenswrapper[7454]: I0319 12:07:33.063381 7454 scope.go:117] "RemoveContainer" containerID="f770bc0056756cc6ba5e1f2815e45c32893439cd6d38c9442f87e1b6e4fefb5a" Mar 19 12:07:33.660548 master-0 kubenswrapper[7454]: I0319 12:07:33.660447 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:07:33.660548 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:07:33.660548 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:07:33.660548 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:07:33.661586 master-0 kubenswrapper[7454]: I0319 12:07:33.660576 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:34.027895 master-0 kubenswrapper[7454]: I0319 12:07:34.027791 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-qv4cg_1089ea24-add9-482e-9276-e6ded12052d7/kube-apiserver-operator/2.log" Mar 19 12:07:34.032320 master-0 kubenswrapper[7454]: I0319 12:07:34.032253 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ed7034eee202d25f8fdd5bf58084d919/cluster-policy-controller/3.log" Mar 19 12:07:34.035105 master-0 kubenswrapper[7454]: I0319 12:07:34.035054 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ed7034eee202d25f8fdd5bf58084d919/kube-controller-manager/0.log" Mar 19 12:07:34.659214 master-0 kubenswrapper[7454]: I0319 12:07:34.659154 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:07:34.659214 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:07:34.659214 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:07:34.659214 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:07:34.659679 master-0 kubenswrapper[7454]: I0319 12:07:34.659232 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:35.660413 master-0 kubenswrapper[7454]: I0319 12:07:35.660319 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:07:35.660413 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:07:35.660413 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:07:35.660413 master-0 
kubenswrapper[7454]: healthz check failed Mar 19 12:07:35.661485 master-0 kubenswrapper[7454]: I0319 12:07:35.660413 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:36.650115 master-0 kubenswrapper[7454]: I0319 12:07:36.650053 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:07:36.650647 master-0 kubenswrapper[7454]: I0319 12:07:36.650205 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:07:36.650647 master-0 kubenswrapper[7454]: I0319 12:07:36.650230 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:07:36.659987 master-0 kubenswrapper[7454]: I0319 12:07:36.659882 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:07:36.659987 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:07:36.659987 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:07:36.659987 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:07:36.660439 master-0 kubenswrapper[7454]: I0319 12:07:36.660089 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:37.659716 master-0 kubenswrapper[7454]: I0319 12:07:37.659629 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:07:37.659716 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:07:37.659716 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:07:37.659716 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:07:37.660199 master-0 kubenswrapper[7454]: I0319 12:07:37.659714 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:38.450981 master-0 kubenswrapper[7454]: I0319 12:07:38.450924 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Mar 19 12:07:38.451572 master-0 kubenswrapper[7454]: E0319 12:07:38.451252 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b48817c-05cd-430b-9b1f-9cc037f1ca77" containerName="installer" Mar 19 12:07:38.451572 master-0 kubenswrapper[7454]: I0319 12:07:38.451279 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b48817c-05cd-430b-9b1f-9cc037f1ca77" containerName="installer" Mar 19 12:07:38.451572 master-0 kubenswrapper[7454]: I0319 12:07:38.451433 7454 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8b48817c-05cd-430b-9b1f-9cc037f1ca77" containerName="installer" Mar 19 12:07:38.452191 master-0 kubenswrapper[7454]: I0319 12:07:38.451963 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 19 12:07:38.454462 master-0 kubenswrapper[7454]: I0319 12:07:38.454405 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-dqvjj" Mar 19 12:07:38.454747 master-0 kubenswrapper[7454]: I0319 12:07:38.454710 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 19 12:07:38.462776 master-0 kubenswrapper[7454]: I0319 12:07:38.462732 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Mar 19 12:07:38.628909 master-0 kubenswrapper[7454]: I0319 12:07:38.628827 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ac20c616-753e-461a-9c39-2129239f47de-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"ac20c616-753e-461a-9c39-2129239f47de\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 19 12:07:38.628909 master-0 kubenswrapper[7454]: I0319 12:07:38.628920 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac20c616-753e-461a-9c39-2129239f47de-kube-api-access\") pod \"installer-5-master-0\" (UID: \"ac20c616-753e-461a-9c39-2129239f47de\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 19 12:07:38.629266 master-0 kubenswrapper[7454]: I0319 12:07:38.628976 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ac20c616-753e-461a-9c39-2129239f47de-var-lock\") pod \"installer-5-master-0\" (UID: \"ac20c616-753e-461a-9c39-2129239f47de\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 19 12:07:38.661058 master-0 kubenswrapper[7454]: I0319 12:07:38.660993 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:07:38.661058 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:07:38.661058 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:07:38.661058 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:07:38.661343 master-0 kubenswrapper[7454]: I0319 12:07:38.661114 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:07:38.731145 master-0 kubenswrapper[7454]: I0319 12:07:38.731076 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac20c616-753e-461a-9c39-2129239f47de-kube-api-access\") pod \"installer-5-master-0\" (UID: \"ac20c616-753e-461a-9c39-2129239f47de\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 19 12:07:38.731383 master-0 kubenswrapper[7454]: I0319 12:07:38.731215 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" 
(UniqueName: \"kubernetes.io/host-path/ac20c616-753e-461a-9c39-2129239f47de-var-lock\") pod \"installer-5-master-0\" (UID: \"ac20c616-753e-461a-9c39-2129239f47de\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 19 12:07:38.731383 master-0 kubenswrapper[7454]: I0319 12:07:38.731320 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ac20c616-753e-461a-9c39-2129239f47de-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"ac20c616-753e-461a-9c39-2129239f47de\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 19 12:07:38.732642 master-0 kubenswrapper[7454]: I0319 12:07:38.732605 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ac20c616-753e-461a-9c39-2129239f47de-var-lock\") pod \"installer-5-master-0\" (UID: \"ac20c616-753e-461a-9c39-2129239f47de\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 19 12:07:38.732728 master-0 kubenswrapper[7454]: I0319 12:07:38.732671 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ac20c616-753e-461a-9c39-2129239f47de-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"ac20c616-753e-461a-9c39-2129239f47de\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 19 12:07:38.767635 master-0 kubenswrapper[7454]: I0319 12:07:38.767587 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac20c616-753e-461a-9c39-2129239f47de-kube-api-access\") pod \"installer-5-master-0\" (UID: \"ac20c616-753e-461a-9c39-2129239f47de\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 19 12:07:38.771136 master-0 kubenswrapper[7454]: I0319 12:07:38.771090 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Mar 19 12:07:39.100853 master-0 kubenswrapper[7454]: I0319 12:07:39.098234 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-6m654_944eac68-e72b-4aed-b5dc-d7d9703178a3/snapshot-controller/4.log"
Mar 19 12:07:39.102188 master-0 kubenswrapper[7454]: I0319 12:07:39.102175 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-6m654_944eac68-e72b-4aed-b5dc-d7d9703178a3/snapshot-controller/3.log"
Mar 19 12:07:39.102293 master-0 kubenswrapper[7454]: I0319 12:07:39.102275 7454 generic.go:334] "Generic (PLEG): container finished" podID="944eac68-e72b-4aed-b5dc-d7d9703178a3" containerID="c5a947116c6a12f89fdc1149ba43ce5607536b57e0862f6ca233d92f281d6f5b" exitCode=1
Mar 19 12:07:39.102372 master-0 kubenswrapper[7454]: I0319 12:07:39.102357 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" event={"ID":"944eac68-e72b-4aed-b5dc-d7d9703178a3","Type":"ContainerDied","Data":"c5a947116c6a12f89fdc1149ba43ce5607536b57e0862f6ca233d92f281d6f5b"}
Mar 19 12:07:39.102462 master-0 kubenswrapper[7454]: I0319 12:07:39.102440 7454 scope.go:117] "RemoveContainer" containerID="b5bafb20143fb32b9632c35086c90eeefa7dd77931f84a79471ae4dc2e4a6a71"
Mar 19 12:07:39.103313 master-0 kubenswrapper[7454]: I0319 12:07:39.102946 7454 scope.go:117] "RemoveContainer" containerID="c5a947116c6a12f89fdc1149ba43ce5607536b57e0862f6ca233d92f281d6f5b"
Mar 19 12:07:39.104201 master-0 kubenswrapper[7454]: E0319 12:07:39.104182 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-6m654_openshift-cluster-storage-operator(944eac68-e72b-4aed-b5dc-d7d9703178a3)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" podUID="944eac68-e72b-4aed-b5dc-d7d9703178a3"
Mar 19 12:07:39.370063 master-0 kubenswrapper[7454]: I0319 12:07:39.369895 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"]
Mar 19 12:07:39.666930 master-0 kubenswrapper[7454]: I0319 12:07:39.666437 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:07:39.666930 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:07:39.666930 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:07:39.666930 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:07:39.666930 master-0 kubenswrapper[7454]: I0319 12:07:39.666618 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:07:40.114487 master-0 kubenswrapper[7454]: I0319 12:07:40.114431 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-6m654_944eac68-e72b-4aed-b5dc-d7d9703178a3/snapshot-controller/4.log"
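The "Observed pod startup duration" entries (etcd-master-0 above at 12:07:20, installer-5-master-0 just below) come from the kubelet's pod startup latency tracker. Reading the logged fields, podStartSLOduration is the watch-observed running time minus the pod's creation timestamp, less any image-pull time (zero here, since the pulling timestamps are the zero value). A quick check of that arithmetic against the installer-5 entry, under that reading of the fields:

```go
package main

import (
	"fmt"
	"time"
)

// Rough reconstruction of podStartSLOduration as reported by
// pod_startup_latency_tracker: time from pod creation to the pod being
// observed running, minus image-pull time (zero in these entries). This
// is an interpretation of the logged fields, not the tracker's exact code.
func main() {
	created, _ := time.Parse(time.RFC3339Nano, "2026-03-19T12:07:38Z")
	running, _ := time.Parse(time.RFC3339Nano, "2026-03-19T12:07:40.133692587Z")
	fmt.Println(running.Sub(created)) // 2.133692587s, the installer-5 value below
}
```

The etcd-master-0 entry checks out the same way: 12:07:20.680103999 minus 12:07:20 gives the logged 680.103999ms.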
Mar 19 12:07:40.116452 master-0 kubenswrapper[7454]: I0319 12:07:40.116446 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"ac20c616-753e-461a-9c39-2129239f47de","Type":"ContainerStarted","Data":"fa598cee3a86e2c04eff522555d0cdf5e0216e7c4e188a8334de9e13d56ec286"}
Mar 19 12:07:40.133864 master-0 kubenswrapper[7454]: I0319 12:07:40.133723 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-5-master-0" podStartSLOduration=2.133692587 podStartE2EDuration="2.133692587s" podCreationTimestamp="2026-03-19 12:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:07:40.132576483 +0000 UTC m=+829.763042396" watchObservedRunningTime="2026-03-19 12:07:40.133692587 +0000 UTC m=+829.764158550"
Mar 19 12:07:40.659854 master-0 kubenswrapper[7454]: I0319 12:07:40.659761 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:07:40.659854 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:07:40.659854 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:07:40.659854 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:07:40.659854 master-0 kubenswrapper[7454]: I0319 12:07:40.659822 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:07:41.660041 master-0 kubenswrapper[7454]: I0319 12:07:41.659943 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:07:41.660041 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:07:41.660041 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:07:41.660041 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:07:41.660836 master-0 kubenswrapper[7454]: I0319 12:07:41.660057 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:07:42.102835 master-0 kubenswrapper[7454]: I0319 12:07:42.102751 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Mar 19 12:07:42.104577 master-0 kubenswrapper[7454]: I0319 12:07:42.104560 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 19 12:07:42.107875 master-0 kubenswrapper[7454]: I0319 12:07:42.107710 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kqbhm"
Mar 19 12:07:42.107875 master-0 kubenswrapper[7454]: I0319 12:07:42.107749 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Mar 19 12:07:42.120440 master-0 kubenswrapper[7454]: I0319 12:07:42.120371 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Mar 19 12:07:42.202100 master-0 kubenswrapper[7454]: I0319 12:07:42.201985 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c-var-lock\") pod \"installer-4-master-0\" (UID: \"f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 19 12:07:42.202100 master-0 kubenswrapper[7454]: I0319 12:07:42.202064 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 19 12:07:42.202576 master-0 kubenswrapper[7454]: I0319 12:07:42.202127 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c-kube-api-access\") pod \"installer-4-master-0\" (UID: \"f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 19 12:07:42.303445 master-0 kubenswrapper[7454]: I0319 12:07:42.303367 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c-kube-api-access\") pod \"installer-4-master-0\" (UID: \"f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 19 12:07:42.303662 master-0 kubenswrapper[7454]: I0319 12:07:42.303487 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c-var-lock\") pod \"installer-4-master-0\" (UID: \"f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 19 12:07:42.303662 master-0 kubenswrapper[7454]: I0319 12:07:42.303540 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 19 12:07:42.303662 master-0 kubenswrapper[7454]: I0319 12:07:42.303620 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 19 12:07:42.303894 master-0 kubenswrapper[7454]: I0319 12:07:42.303866 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c-var-lock\") pod \"installer-4-master-0\" (UID: \"f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 19 12:07:42.326741 master-0 kubenswrapper[7454]: I0319 12:07:42.326682 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c-kube-api-access\") pod \"installer-4-master-0\" (UID: \"f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 19 12:07:42.436054 master-0 kubenswrapper[7454]: I0319 12:07:42.435936 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 19 12:07:42.659859 master-0 kubenswrapper[7454]: I0319 12:07:42.659736 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:07:42.659859 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:07:42.659859 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:07:42.659859 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:07:42.660558 master-0 kubenswrapper[7454]: I0319 12:07:42.659871 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:07:42.863315 master-0 kubenswrapper[7454]: I0319 12:07:42.863243 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Mar 19 12:07:42.863685 master-0 kubenswrapper[7454]: W0319 12:07:42.863626 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podf6d6f656_2d3e_4bb7_a1a6_98cf223ad25c.slice/crio-f0904905367e561b547f2af7eae1570bb91ab634506393bc2f83371ecfe7fbc0 WatchSource:0}: Error finding container f0904905367e561b547f2af7eae1570bb91ab634506393bc2f83371ecfe7fbc0: Status 404 returned error can't find the container with id f0904905367e561b547f2af7eae1570bb91ab634506393bc2f83371ecfe7fbc0
Mar 19 12:07:43.138490 master-0 kubenswrapper[7454]: I0319 12:07:43.138352 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c","Type":"ContainerStarted","Data":"f0904905367e561b547f2af7eae1570bb91ab634506393bc2f83371ecfe7fbc0"}
Mar 19 12:07:43.660168 master-0 kubenswrapper[7454]: I0319 12:07:43.660102 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:07:43.660168 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:07:43.660168 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:07:43.660168 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:07:43.660784 master-0 kubenswrapper[7454]: I0319 12:07:43.660181 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:07:44.148767 master-0 kubenswrapper[7454]: I0319 12:07:44.148689 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c","Type":"ContainerStarted","Data":"7e673f997c20469e5f546d3e95284e0a33e36f035fae4d41c3c443160f062f50"}
Mar 19 12:07:44.169658 master-0 kubenswrapper[7454]: I0319 12:07:44.169588 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-0" podStartSLOduration=2.169565759 podStartE2EDuration="2.169565759s" podCreationTimestamp="2026-03-19 12:07:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:07:44.165935645 +0000 UTC m=+833.796401598" watchObservedRunningTime="2026-03-19 12:07:44.169565759 +0000 UTC m=+833.800031672"
Mar 19 12:07:44.660343 master-0 kubenswrapper[7454]: I0319 12:07:44.660269 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:07:44.660343 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:07:44.660343 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:07:44.660343 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:07:44.660343 master-0 kubenswrapper[7454]: I0319 12:07:44.660370 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:07:45.634237 master-0 kubenswrapper[7454]: I0319 12:07:45.634125 7454 scope.go:117] "RemoveContainer" containerID="7b70d5a46fbdbc272ee13227763b5a028d2f93b2e62fbbeaef054faab0e08e37"
Mar 19 12:07:45.659385 master-0 kubenswrapper[7454]: I0319 12:07:45.659317 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:07:45.659385 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:07:45.659385 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:07:45.659385 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:07:45.659876 master-0 kubenswrapper[7454]: I0319 12:07:45.659393 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:07:46.177367 master-0 kubenswrapper[7454]: I0319 12:07:46.177257 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-qv4cg_1089ea24-add9-482e-9276-e6ded12052d7/kube-apiserver-operator/2.log"
Mar 19 12:07:46.177367 master-0 kubenswrapper[7454]: I0319 12:07:46.177330 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg" event={"ID":"1089ea24-add9-482e-9276-e6ded12052d7","Type":"ContainerStarted","Data":"1f3079cfe8a3c413cb3395726d1c3b96098c3281fc5769a5502bd8e4aed0381a"}
Mar 19 12:07:46.645227 master-0 kubenswrapper[7454]: I0319 12:07:46.645170 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 19 12:07:46.659611 master-0 kubenswrapper[7454]: I0319 12:07:46.659513 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:07:46.659611 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:07:46.659611 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:07:46.659611 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:07:46.659611 master-0 kubenswrapper[7454]: I0319 12:07:46.659599 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:07:47.660592 master-0 kubenswrapper[7454]: I0319 12:07:47.660534 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:07:47.660592 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:07:47.660592 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:07:47.660592 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:07:47.661682 master-0 kubenswrapper[7454]: I0319 12:07:47.661632 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:07:48.658907 master-0 kubenswrapper[7454]: I0319 12:07:48.658855 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:07:48.658907 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:07:48.658907 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:07:48.658907 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:07:48.659220 master-0 kubenswrapper[7454]: I0319 12:07:48.658930 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:07:49.659856 master-0 kubenswrapper[7454]: I0319 12:07:49.659674 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:07:49.659856 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:07:49.659856 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:07:49.659856 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:07:49.659856 master-0 kubenswrapper[7454]: I0319 12:07:49.659790 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:07:50.637212 master-0 kubenswrapper[7454]: I0319 12:07:50.637121 7454 scope.go:117] "RemoveContainer" containerID="c5a947116c6a12f89fdc1149ba43ce5607536b57e0862f6ca233d92f281d6f5b"
Mar 19 12:07:50.637518 master-0 kubenswrapper[7454]: E0319 12:07:50.637419 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-6m654_openshift-cluster-storage-operator(944eac68-e72b-4aed-b5dc-d7d9703178a3)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" podUID="944eac68-e72b-4aed-b5dc-d7d9703178a3"
Mar 19 12:07:50.660053 master-0 kubenswrapper[7454]: I0319 12:07:50.659954 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:07:50.660053 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:07:50.660053 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:07:50.660053 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:07:50.661927 master-0 kubenswrapper[7454]: I0319 12:07:50.660061 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:07:51.659578 master-0 kubenswrapper[7454]: I0319 12:07:51.659497 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:07:51.659578 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:07:51.659578 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:07:51.659578 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:07:51.660020 master-0 kubenswrapper[7454]: I0319 12:07:51.659599 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:07:52.659934 master-0 kubenswrapper[7454]: I0319 12:07:52.659876 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:07:52.659934 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:07:52.659934 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:07:52.659934 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:07:52.661282 master-0 kubenswrapper[7454]: I0319 12:07:52.659942 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:07:53.659694 master-0 kubenswrapper[7454]: I0319 12:07:53.659595 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:07:53.659694 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:07:53.659694 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:07:53.659694 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:07:53.660964 master-0 kubenswrapper[7454]: I0319 12:07:53.659698 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:07:54.659473 master-0 kubenswrapper[7454]: I0319 12:07:54.659368 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:07:54.659473 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:07:54.659473 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:07:54.659473 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:07:54.659473 master-0 kubenswrapper[7454]: I0319 12:07:54.659462 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:07:55.660150 master-0 kubenswrapper[7454]: I0319 12:07:55.659997 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:07:55.660150 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:07:55.660150 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:07:55.660150 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:07:55.660150 master-0 kubenswrapper[7454]: I0319 12:07:55.660094 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:07:56.661356 master-0 kubenswrapper[7454]: I0319 12:07:56.660512 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:07:56.661356 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:07:56.661356 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:07:56.661356 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:07:56.661356 master-0 kubenswrapper[7454]: I0319 12:07:56.660611 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:07:57.660228 master-0 kubenswrapper[7454]: I0319 12:07:57.660149 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:07:57.660228 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:07:57.660228 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:07:57.660228 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:07:57.660885 master-0 kubenswrapper[7454]: I0319 12:07:57.660256 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:07:58.660036 master-0 kubenswrapper[7454]: I0319 12:07:58.659980 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:07:58.660036 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:07:58.660036 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:07:58.660036 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:07:58.661325 master-0 kubenswrapper[7454]: I0319 12:07:58.660969 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:07:59.660058 master-0 kubenswrapper[7454]: I0319 12:07:59.659945 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:07:59.660058 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:07:59.660058 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:07:59.660058 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:07:59.661072 master-0 kubenswrapper[7454]: I0319 12:07:59.660074 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:08:00.660226 master-0 kubenswrapper[7454]: I0319 12:08:00.660157 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:08:00.660226 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:08:00.660226 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:08:00.660226 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:08:00.660948 master-0 kubenswrapper[7454]: I0319 12:08:00.660237 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:08:01.660280 master-0 kubenswrapper[7454]: I0319 12:08:01.660208 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:08:01.660280 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:08:01.660280 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:08:01.660280 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:08:01.660882 master-0 kubenswrapper[7454]: I0319 12:08:01.660307 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:08:01.660882 master-0 kubenswrapper[7454]: I0319 12:08:01.660390 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl"
Mar 19 12:08:01.661477 master-0 kubenswrapper[7454]: I0319 12:08:01.661424 7454 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"6f74355f30b0cc7b3534f39a3335ceb85c6bdd019a4b22eade41702408961aed"} pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" containerMessage="Container router failed startup probe, will be restarted"
Mar 19 12:08:01.661543 master-0 kubenswrapper[7454]: I0319 12:08:01.661510 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" containerID="cri-o://6f74355f30b0cc7b3534f39a3335ceb85c6bdd019a4b22eade41702408961aed" gracePeriod=3600
Mar 19 12:08:04.634974 master-0 kubenswrapper[7454]: I0319 12:08:04.634884 7454 scope.go:117] "RemoveContainer" containerID="c5a947116c6a12f89fdc1149ba43ce5607536b57e0862f6ca233d92f281d6f5b"
Mar 19 12:08:04.636023 master-0 kubenswrapper[7454]: E0319 12:08:04.635267 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-6m654_openshift-cluster-storage-operator(944eac68-e72b-4aed-b5dc-d7d9703178a3)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" podUID="944eac68-e72b-4aed-b5dc-d7d9703178a3"
Mar 19 12:08:10.871195 master-0 kubenswrapper[7454]: I0319 12:08:10.871127 7454 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Mar 19 12:08:10.872010 master-0 kubenswrapper[7454]: I0319 12:08:10.871437 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-cert-syncer" containerID="cri-o://4d47a2e9aa1638460fa6ef96bf2d0249d38af6d72c57ab083a850e1599710d6d" gracePeriod=30
containerName="kube-scheduler-cert-syncer" containerID="cri-o://4d47a2e9aa1638460fa6ef96bf2d0249d38af6d72c57ab083a850e1599710d6d" gracePeriod=30 Mar 19 12:08:10.872010 master-0 kubenswrapper[7454]: I0319 12:08:10.871489 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler" containerID="cri-o://4ad628e89e7621359063e42ff965fafd7ff7510f8646a17316c1e2a0906b3609" gracePeriod=30 Mar 19 12:08:10.872010 master-0 kubenswrapper[7454]: I0319 12:08:10.871543 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-recovery-controller" containerID="cri-o://0b7cadf57c1ff393897dfb481975475d3dd6a6c04a5c37d34ce9d4c14fc55d3e" gracePeriod=30 Mar 19 12:08:10.872916 master-0 kubenswrapper[7454]: I0319 12:08:10.872788 7454 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 19 12:08:10.874926 master-0 kubenswrapper[7454]: E0319 12:08:10.873387 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-cert-syncer" Mar 19 12:08:10.874926 master-0 kubenswrapper[7454]: I0319 12:08:10.873417 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-cert-syncer" Mar 19 12:08:10.874926 master-0 kubenswrapper[7454]: E0319 12:08:10.873490 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler" Mar 19 12:08:10.874926 master-0 kubenswrapper[7454]: I0319 12:08:10.873504 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler" Mar 19 12:08:10.874926 master-0 kubenswrapper[7454]: E0319 12:08:10.873524 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="wait-for-host-port" Mar 19 12:08:10.874926 master-0 kubenswrapper[7454]: I0319 12:08:10.873534 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="wait-for-host-port" Mar 19 12:08:10.874926 master-0 kubenswrapper[7454]: E0319 12:08:10.873573 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-recovery-controller" Mar 19 12:08:10.874926 master-0 kubenswrapper[7454]: I0319 12:08:10.873585 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-recovery-controller" Mar 19 12:08:10.874926 master-0 kubenswrapper[7454]: E0319 12:08:10.873613 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler" Mar 19 12:08:10.874926 master-0 kubenswrapper[7454]: I0319 12:08:10.873623 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler" Mar 19 12:08:10.874926 master-0 kubenswrapper[7454]: I0319 12:08:10.873873 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-recovery-controller" Mar 19 12:08:10.874926 master-0 kubenswrapper[7454]: I0319 
12:08:10.873936 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-cert-syncer" Mar 19 12:08:10.874926 master-0 kubenswrapper[7454]: I0319 12:08:10.873949 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler" Mar 19 12:08:10.874926 master-0 kubenswrapper[7454]: I0319 12:08:10.874706 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler" Mar 19 12:08:11.043933 master-0 kubenswrapper[7454]: I0319 12:08:11.043893 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8413125cf444e5c95f023c5dd9c6151e/kube-scheduler-cert-syncer/0.log" Mar 19 12:08:11.044556 master-0 kubenswrapper[7454]: I0319 12:08:11.044525 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8413125cf444e5c95f023c5dd9c6151e/kube-scheduler/0.log" Mar 19 12:08:11.045117 master-0 kubenswrapper[7454]: I0319 12:08:11.045093 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:08:11.048954 master-0 kubenswrapper[7454]: I0319 12:08:11.048843 7454 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="8413125cf444e5c95f023c5dd9c6151e" podUID="8e27b7d086edf5d2cf47b703574641d8" Mar 19 12:08:11.060269 master-0 kubenswrapper[7454]: I0319 12:08:11.060215 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:08:11.060693 master-0 kubenswrapper[7454]: I0319 12:08:11.060666 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:08:11.163488 master-0 kubenswrapper[7454]: I0319 12:08:11.163323 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") pod \"8413125cf444e5c95f023c5dd9c6151e\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " Mar 19 12:08:11.163709 master-0 kubenswrapper[7454]: I0319 12:08:11.163480 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "8413125cf444e5c95f023c5dd9c6151e" (UID: "8413125cf444e5c95f023c5dd9c6151e"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:08:11.163709 master-0 kubenswrapper[7454]: I0319 12:08:11.163584 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") pod \"8413125cf444e5c95f023c5dd9c6151e\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " Mar 19 12:08:11.163831 master-0 kubenswrapper[7454]: I0319 12:08:11.163707 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "8413125cf444e5c95f023c5dd9c6151e" (UID: "8413125cf444e5c95f023c5dd9c6151e"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:08:11.164079 master-0 kubenswrapper[7454]: I0319 12:08:11.164036 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:08:11.164231 master-0 kubenswrapper[7454]: I0319 12:08:11.164191 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:08:11.164231 master-0 kubenswrapper[7454]: I0319 12:08:11.164202 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:08:11.164316 master-0 kubenswrapper[7454]: I0319 12:08:11.164267 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:08:11.164543 master-0 kubenswrapper[7454]: I0319 12:08:11.164508 7454 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 12:08:11.164586 master-0 kubenswrapper[7454]: I0319 12:08:11.164544 7454 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 12:08:11.378029 master-0 kubenswrapper[7454]: I0319 12:08:11.377957 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8413125cf444e5c95f023c5dd9c6151e/kube-scheduler-cert-syncer/0.log" Mar 19 12:08:11.378734 master-0 kubenswrapper[7454]: I0319 12:08:11.378689 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8413125cf444e5c95f023c5dd9c6151e/kube-scheduler/0.log" Mar 19 12:08:11.379192 
master-0 kubenswrapper[7454]: I0319 12:08:11.379140 7454 generic.go:334] "Generic (PLEG): container finished" podID="8413125cf444e5c95f023c5dd9c6151e" containerID="4ad628e89e7621359063e42ff965fafd7ff7510f8646a17316c1e2a0906b3609" exitCode=0 Mar 19 12:08:11.379192 master-0 kubenswrapper[7454]: I0319 12:08:11.379174 7454 generic.go:334] "Generic (PLEG): container finished" podID="8413125cf444e5c95f023c5dd9c6151e" containerID="0b7cadf57c1ff393897dfb481975475d3dd6a6c04a5c37d34ce9d4c14fc55d3e" exitCode=0 Mar 19 12:08:11.379192 master-0 kubenswrapper[7454]: I0319 12:08:11.379184 7454 generic.go:334] "Generic (PLEG): container finished" podID="8413125cf444e5c95f023c5dd9c6151e" containerID="4d47a2e9aa1638460fa6ef96bf2d0249d38af6d72c57ab083a850e1599710d6d" exitCode=2 Mar 19 12:08:11.379468 master-0 kubenswrapper[7454]: I0319 12:08:11.379270 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:08:11.379468 master-0 kubenswrapper[7454]: I0319 12:08:11.379294 7454 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8133bbc1cbc26e1060a9c5f9a0e6097cd17b1d59b0065a7002ebf7fa91eeabbd" Mar 19 12:08:11.379468 master-0 kubenswrapper[7454]: I0319 12:08:11.379314 7454 scope.go:117] "RemoveContainer" containerID="6b57ecd81087b581c66ac63d9f2f1ef10437e651539d71691b6a055612b562c9" Mar 19 12:08:11.382837 master-0 kubenswrapper[7454]: I0319 12:08:11.382744 7454 generic.go:334] "Generic (PLEG): container finished" podID="ac20c616-753e-461a-9c39-2129239f47de" containerID="8022cb0787b078b8490d5e3b8eb77b94bc5a7657a78677fc984224192ff65ab6" exitCode=0 Mar 19 12:08:11.382837 master-0 kubenswrapper[7454]: I0319 12:08:11.382823 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"ac20c616-753e-461a-9c39-2129239f47de","Type":"ContainerDied","Data":"8022cb0787b078b8490d5e3b8eb77b94bc5a7657a78677fc984224192ff65ab6"} Mar 19 12:08:11.384002 master-0 kubenswrapper[7454]: I0319 12:08:11.383920 7454 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="8413125cf444e5c95f023c5dd9c6151e" podUID="8e27b7d086edf5d2cf47b703574641d8" Mar 19 12:08:11.414593 master-0 kubenswrapper[7454]: I0319 12:08:11.414518 7454 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="8413125cf444e5c95f023c5dd9c6151e" podUID="8e27b7d086edf5d2cf47b703574641d8" Mar 19 12:08:12.396338 master-0 kubenswrapper[7454]: I0319 12:08:12.396240 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8413125cf444e5c95f023c5dd9c6151e/kube-scheduler-cert-syncer/0.log" Mar 19 12:08:12.646470 master-0 kubenswrapper[7454]: I0319 12:08:12.646313 7454 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8413125cf444e5c95f023c5dd9c6151e" path="/var/lib/kubelet/pods/8413125cf444e5c95f023c5dd9c6151e/volumes" Mar 19 12:08:12.712752 master-0 kubenswrapper[7454]: I0319 12:08:12.712701 7454 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 19 12:08:12.895264 master-0 kubenswrapper[7454]: I0319 12:08:12.895183 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ac20c616-753e-461a-9c39-2129239f47de-var-lock\") pod \"ac20c616-753e-461a-9c39-2129239f47de\" (UID: \"ac20c616-753e-461a-9c39-2129239f47de\") " Mar 19 12:08:12.895544 master-0 kubenswrapper[7454]: I0319 12:08:12.895319 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac20c616-753e-461a-9c39-2129239f47de-var-lock" (OuterVolumeSpecName: "var-lock") pod "ac20c616-753e-461a-9c39-2129239f47de" (UID: "ac20c616-753e-461a-9c39-2129239f47de"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:08:12.895544 master-0 kubenswrapper[7454]: I0319 12:08:12.895330 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac20c616-753e-461a-9c39-2129239f47de-kube-api-access\") pod \"ac20c616-753e-461a-9c39-2129239f47de\" (UID: \"ac20c616-753e-461a-9c39-2129239f47de\") " Mar 19 12:08:12.895544 master-0 kubenswrapper[7454]: I0319 12:08:12.895400 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ac20c616-753e-461a-9c39-2129239f47de-kubelet-dir\") pod \"ac20c616-753e-461a-9c39-2129239f47de\" (UID: \"ac20c616-753e-461a-9c39-2129239f47de\") " Mar 19 12:08:12.895544 master-0 kubenswrapper[7454]: I0319 12:08:12.895497 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac20c616-753e-461a-9c39-2129239f47de-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ac20c616-753e-461a-9c39-2129239f47de" (UID: "ac20c616-753e-461a-9c39-2129239f47de"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:08:12.895842 master-0 kubenswrapper[7454]: I0319 12:08:12.895633 7454 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ac20c616-753e-461a-9c39-2129239f47de-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 12:08:12.895842 master-0 kubenswrapper[7454]: I0319 12:08:12.895645 7454 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ac20c616-753e-461a-9c39-2129239f47de-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 19 12:08:12.898515 master-0 kubenswrapper[7454]: I0319 12:08:12.898361 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac20c616-753e-461a-9c39-2129239f47de-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ac20c616-753e-461a-9c39-2129239f47de" (UID: "ac20c616-753e-461a-9c39-2129239f47de"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:08:12.997213 master-0 kubenswrapper[7454]: I0319 12:08:12.997010 7454 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac20c616-753e-461a-9c39-2129239f47de-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 19 12:08:13.404992 master-0 kubenswrapper[7454]: I0319 12:08:13.404918 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"ac20c616-753e-461a-9c39-2129239f47de","Type":"ContainerDied","Data":"fa598cee3a86e2c04eff522555d0cdf5e0216e7c4e188a8334de9e13d56ec286"} Mar 19 12:08:13.404992 master-0 kubenswrapper[7454]: I0319 12:08:13.404972 7454 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa598cee3a86e2c04eff522555d0cdf5e0216e7c4e188a8334de9e13d56ec286" Mar 19 12:08:13.405627 master-0 kubenswrapper[7454]: I0319 12:08:13.405047 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 19 12:08:16.237501 master-0 kubenswrapper[7454]: I0319 12:08:16.237424 7454 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 19 12:08:16.238364 master-0 kubenswrapper[7454]: I0319 12:08:16.237919 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://6d38396688a212d80e4b9440cc838a81e9ba0076c58cc35f80f3248581700f34" gracePeriod=30 Mar 19 12:08:16.238364 master-0 kubenswrapper[7454]: I0319 12:08:16.238018 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="cluster-policy-controller" containerID="cri-o://905b5c7c59d30b4b870a40d926e6ce6d9ad7f0bf509dc07ea760b5f841773a4f" gracePeriod=30 Mar 19 12:08:16.238364 master-0 kubenswrapper[7454]: I0319 12:08:16.238054 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://7ba9fe238d802cb5b3d8a7a91252294e09ef5a02de2e8f653eef99bd12ecd678" gracePeriod=30 Mar 19 12:08:16.238364 master-0 kubenswrapper[7454]: I0319 12:08:16.238190 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="kube-controller-manager" containerID="cri-o://8088add442d8a84ce49177d60c8f88d3eb643fdd316c8a11da9030fc8e5dfb04" gracePeriod=30 Mar 19 12:08:16.239253 master-0 kubenswrapper[7454]: I0319 12:08:16.239080 7454 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 19 12:08:16.239537 master-0 kubenswrapper[7454]: E0319 12:08:16.239488 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="cluster-policy-controller" Mar 19 12:08:16.239537 master-0 kubenswrapper[7454]: I0319 12:08:16.239528 7454 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="cluster-policy-controller" Mar 19 12:08:16.239639 master-0 kubenswrapper[7454]: E0319 12:08:16.239548 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="kube-controller-manager-cert-syncer" Mar 19 12:08:16.239639 master-0 kubenswrapper[7454]: I0319 12:08:16.239564 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="kube-controller-manager-cert-syncer" Mar 19 12:08:16.239639 master-0 kubenswrapper[7454]: E0319 12:08:16.239586 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="kube-controller-manager" Mar 19 12:08:16.239639 master-0 kubenswrapper[7454]: I0319 12:08:16.239604 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="kube-controller-manager" Mar 19 12:08:16.239639 master-0 kubenswrapper[7454]: E0319 12:08:16.239624 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="kube-controller-manager" Mar 19 12:08:16.239639 master-0 kubenswrapper[7454]: I0319 12:08:16.239637 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="kube-controller-manager" Mar 19 12:08:16.239921 master-0 kubenswrapper[7454]: E0319 12:08:16.239654 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="cluster-policy-controller" Mar 19 12:08:16.239921 master-0 kubenswrapper[7454]: I0319 12:08:16.239667 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="cluster-policy-controller" Mar 19 12:08:16.239921 master-0 kubenswrapper[7454]: E0319 12:08:16.239688 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac20c616-753e-461a-9c39-2129239f47de" containerName="installer" Mar 19 12:08:16.239921 master-0 kubenswrapper[7454]: I0319 12:08:16.239701 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac20c616-753e-461a-9c39-2129239f47de" containerName="installer" Mar 19 12:08:16.239921 master-0 kubenswrapper[7454]: E0319 12:08:16.239730 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="cluster-policy-controller" Mar 19 12:08:16.239921 master-0 kubenswrapper[7454]: I0319 12:08:16.239744 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="cluster-policy-controller" Mar 19 12:08:16.239921 master-0 kubenswrapper[7454]: E0319 12:08:16.239763 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="cluster-policy-controller" Mar 19 12:08:16.239921 master-0 kubenswrapper[7454]: I0319 12:08:16.239778 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="cluster-policy-controller" Mar 19 12:08:16.239921 master-0 kubenswrapper[7454]: E0319 12:08:16.239844 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="kube-controller-manager-recovery-controller" Mar 19 12:08:16.239921 master-0 kubenswrapper[7454]: I0319 12:08:16.239863 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed7034eee202d25f8fdd5bf58084d919" 
containerName="kube-controller-manager-recovery-controller" Mar 19 12:08:16.240848 master-0 kubenswrapper[7454]: I0319 12:08:16.240064 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="kube-controller-manager" Mar 19 12:08:16.240848 master-0 kubenswrapper[7454]: I0319 12:08:16.240092 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="kube-controller-manager-cert-syncer" Mar 19 12:08:16.240848 master-0 kubenswrapper[7454]: I0319 12:08:16.240115 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="cluster-policy-controller" Mar 19 12:08:16.240848 master-0 kubenswrapper[7454]: I0319 12:08:16.240137 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="cluster-policy-controller" Mar 19 12:08:16.240848 master-0 kubenswrapper[7454]: I0319 12:08:16.240157 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="kube-controller-manager-recovery-controller" Mar 19 12:08:16.240848 master-0 kubenswrapper[7454]: I0319 12:08:16.240175 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac20c616-753e-461a-9c39-2129239f47de" containerName="installer" Mar 19 12:08:16.240848 master-0 kubenswrapper[7454]: I0319 12:08:16.240198 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="cluster-policy-controller" Mar 19 12:08:16.240848 master-0 kubenswrapper[7454]: E0319 12:08:16.240432 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="cluster-policy-controller" Mar 19 12:08:16.240848 master-0 kubenswrapper[7454]: I0319 12:08:16.240454 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="cluster-policy-controller" Mar 19 12:08:16.240848 master-0 kubenswrapper[7454]: I0319 12:08:16.240684 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="cluster-policy-controller" Mar 19 12:08:16.240848 master-0 kubenswrapper[7454]: I0319 12:08:16.240707 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="kube-controller-manager" Mar 19 12:08:16.241286 master-0 kubenswrapper[7454]: I0319 12:08:16.241201 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed7034eee202d25f8fdd5bf58084d919" containerName="cluster-policy-controller" Mar 19 12:08:16.350290 master-0 kubenswrapper[7454]: I0319 12:08:16.350219 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/09672015532ae9d1d74ae4d426cd904b-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"09672015532ae9d1d74ae4d426cd904b\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:08:16.350485 master-0 kubenswrapper[7454]: I0319 12:08:16.350372 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/09672015532ae9d1d74ae4d426cd904b-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"09672015532ae9d1d74ae4d426cd904b\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:08:16.442489 master-0 kubenswrapper[7454]: I0319 12:08:16.442424 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ed7034eee202d25f8fdd5bf58084d919/cluster-policy-controller/3.log" Mar 19 12:08:16.444352 master-0 kubenswrapper[7454]: I0319 12:08:16.444300 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ed7034eee202d25f8fdd5bf58084d919/kube-controller-manager-cert-syncer/0.log" Mar 19 12:08:16.445176 master-0 kubenswrapper[7454]: I0319 12:08:16.445110 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ed7034eee202d25f8fdd5bf58084d919/kube-controller-manager/0.log" Mar 19 12:08:16.445288 master-0 kubenswrapper[7454]: I0319 12:08:16.445192 7454 generic.go:334] "Generic (PLEG): container finished" podID="ed7034eee202d25f8fdd5bf58084d919" containerID="905b5c7c59d30b4b870a40d926e6ce6d9ad7f0bf509dc07ea760b5f841773a4f" exitCode=0 Mar 19 12:08:16.445288 master-0 kubenswrapper[7454]: I0319 12:08:16.445236 7454 generic.go:334] "Generic (PLEG): container finished" podID="ed7034eee202d25f8fdd5bf58084d919" containerID="8088add442d8a84ce49177d60c8f88d3eb643fdd316c8a11da9030fc8e5dfb04" exitCode=0 Mar 19 12:08:16.445288 master-0 kubenswrapper[7454]: I0319 12:08:16.445249 7454 generic.go:334] "Generic (PLEG): container finished" podID="ed7034eee202d25f8fdd5bf58084d919" containerID="7ba9fe238d802cb5b3d8a7a91252294e09ef5a02de2e8f653eef99bd12ecd678" exitCode=0 Mar 19 12:08:16.445288 master-0 kubenswrapper[7454]: I0319 12:08:16.445261 7454 generic.go:334] "Generic (PLEG): container finished" podID="ed7034eee202d25f8fdd5bf58084d919" containerID="6d38396688a212d80e4b9440cc838a81e9ba0076c58cc35f80f3248581700f34" exitCode=2 Mar 19 12:08:16.445529 master-0 kubenswrapper[7454]: I0319 12:08:16.445356 7454 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c1162902cf97a8b88fecc587e4927a4fa7874565b759344e7b07063df911ac6" Mar 19 12:08:16.445529 master-0 kubenswrapper[7454]: I0319 12:08:16.445395 7454 scope.go:117] "RemoveContainer" containerID="9397faf4795d0da7838a4c1f3a5b85201054d72615494c0ba8368d62268a9114" Mar 19 12:08:16.448058 master-0 kubenswrapper[7454]: I0319 12:08:16.448006 7454 generic.go:334] "Generic (PLEG): container finished" podID="f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c" containerID="7e673f997c20469e5f546d3e95284e0a33e36f035fae4d41c3c443160f062f50" exitCode=0 Mar 19 12:08:16.448250 master-0 kubenswrapper[7454]: I0319 12:08:16.448060 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c","Type":"ContainerDied","Data":"7e673f997c20469e5f546d3e95284e0a33e36f035fae4d41c3c443160f062f50"} Mar 19 12:08:16.451703 master-0 kubenswrapper[7454]: I0319 12:08:16.451643 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/09672015532ae9d1d74ae4d426cd904b-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"09672015532ae9d1d74ae4d426cd904b\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:08:16.451853 master-0 kubenswrapper[7454]: I0319 12:08:16.451818 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/09672015532ae9d1d74ae4d426cd904b-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"09672015532ae9d1d74ae4d426cd904b\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:08:16.451980 master-0 kubenswrapper[7454]: I0319 12:08:16.451940 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/09672015532ae9d1d74ae4d426cd904b-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"09672015532ae9d1d74ae4d426cd904b\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:08:16.452062 master-0 kubenswrapper[7454]: I0319 12:08:16.452016 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/09672015532ae9d1d74ae4d426cd904b-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"09672015532ae9d1d74ae4d426cd904b\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:08:16.452761 master-0 kubenswrapper[7454]: I0319 12:08:16.452715 7454 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="ed7034eee202d25f8fdd5bf58084d919" podUID="09672015532ae9d1d74ae4d426cd904b" Mar 19 12:08:16.504926 master-0 kubenswrapper[7454]: I0319 12:08:16.504711 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ed7034eee202d25f8fdd5bf58084d919/kube-controller-manager-cert-syncer/0.log" Mar 19 12:08:16.506384 master-0 kubenswrapper[7454]: I0319 12:08:16.505405 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ed7034eee202d25f8fdd5bf58084d919/kube-controller-manager/0.log" Mar 19 12:08:16.506384 master-0 kubenswrapper[7454]: I0319 12:08:16.505505 7454 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:08:16.516249 master-0 kubenswrapper[7454]: I0319 12:08:16.516178 7454 scope.go:117] "RemoveContainer" containerID="190a2ede2af79ab256016ad5364d037b5e12b69b5a7a2227b7287826e6597c14" Mar 19 12:08:16.518214 master-0 kubenswrapper[7454]: I0319 12:08:16.518157 7454 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="ed7034eee202d25f8fdd5bf58084d919" podUID="09672015532ae9d1d74ae4d426cd904b" Mar 19 12:08:16.654543 master-0 kubenswrapper[7454]: I0319 12:08:16.654463 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ed7034eee202d25f8fdd5bf58084d919-resource-dir\") pod \"ed7034eee202d25f8fdd5bf58084d919\" (UID: \"ed7034eee202d25f8fdd5bf58084d919\") " Mar 19 12:08:16.654863 master-0 kubenswrapper[7454]: I0319 12:08:16.654681 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ed7034eee202d25f8fdd5bf58084d919-cert-dir\") pod \"ed7034eee202d25f8fdd5bf58084d919\" (UID: \"ed7034eee202d25f8fdd5bf58084d919\") " Mar 19 12:08:16.654863 master-0 kubenswrapper[7454]: I0319 12:08:16.654826 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed7034eee202d25f8fdd5bf58084d919-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "ed7034eee202d25f8fdd5bf58084d919" (UID: "ed7034eee202d25f8fdd5bf58084d919"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:08:16.655041 master-0 kubenswrapper[7454]: I0319 12:08:16.654958 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed7034eee202d25f8fdd5bf58084d919-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "ed7034eee202d25f8fdd5bf58084d919" (UID: "ed7034eee202d25f8fdd5bf58084d919"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:08:16.655421 master-0 kubenswrapper[7454]: I0319 12:08:16.655385 7454 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ed7034eee202d25f8fdd5bf58084d919-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 12:08:16.655421 master-0 kubenswrapper[7454]: I0319 12:08:16.655420 7454 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ed7034eee202d25f8fdd5bf58084d919-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 12:08:17.463827 master-0 kubenswrapper[7454]: I0319 12:08:17.463753 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ed7034eee202d25f8fdd5bf58084d919/kube-controller-manager-cert-syncer/0.log" Mar 19 12:08:17.464449 master-0 kubenswrapper[7454]: I0319 12:08:17.463996 7454 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:08:17.470492 master-0 kubenswrapper[7454]: I0319 12:08:17.470426 7454 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="ed7034eee202d25f8fdd5bf58084d919" podUID="09672015532ae9d1d74ae4d426cd904b" Mar 19 12:08:17.505707 master-0 kubenswrapper[7454]: I0319 12:08:17.505657 7454 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="ed7034eee202d25f8fdd5bf58084d919" podUID="09672015532ae9d1d74ae4d426cd904b" Mar 19 12:08:17.815347 master-0 kubenswrapper[7454]: I0319 12:08:17.815241 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 19 12:08:17.976895 master-0 kubenswrapper[7454]: I0319 12:08:17.976760 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c-kube-api-access\") pod \"f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c\" (UID: \"f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c\") " Mar 19 12:08:17.977136 master-0 kubenswrapper[7454]: I0319 12:08:17.976998 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c-kubelet-dir\") pod \"f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c\" (UID: \"f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c\") " Mar 19 12:08:17.977136 master-0 kubenswrapper[7454]: I0319 12:08:17.977040 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c-var-lock\") pod \"f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c\" (UID: \"f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c\") " Mar 19 12:08:17.977244 master-0 kubenswrapper[7454]: I0319 12:08:17.977144 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c" (UID: "f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:08:17.977244 master-0 kubenswrapper[7454]: I0319 12:08:17.977208 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c-var-lock" (OuterVolumeSpecName: "var-lock") pod "f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c" (UID: "f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:08:17.977418 master-0 kubenswrapper[7454]: I0319 12:08:17.977374 7454 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 12:08:17.977418 master-0 kubenswrapper[7454]: I0319 12:08:17.977411 7454 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 19 12:08:17.981970 master-0 kubenswrapper[7454]: I0319 12:08:17.981889 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c" (UID: "f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:08:18.078994 master-0 kubenswrapper[7454]: I0319 12:08:18.078761 7454 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 19 12:08:18.476077 master-0 kubenswrapper[7454]: I0319 12:08:18.476025 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c","Type":"ContainerDied","Data":"f0904905367e561b547f2af7eae1570bb91ab634506393bc2f83371ecfe7fbc0"} Mar 19 12:08:18.477077 master-0 kubenswrapper[7454]: I0319 12:08:18.476086 7454 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0904905367e561b547f2af7eae1570bb91ab634506393bc2f83371ecfe7fbc0" Mar 19 12:08:18.477077 master-0 kubenswrapper[7454]: I0319 12:08:18.476126 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 19 12:08:18.634396 master-0 kubenswrapper[7454]: I0319 12:08:18.634270 7454 scope.go:117] "RemoveContainer" containerID="c5a947116c6a12f89fdc1149ba43ce5607536b57e0862f6ca233d92f281d6f5b" Mar 19 12:08:18.634740 master-0 kubenswrapper[7454]: E0319 12:08:18.634672 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-6m654_openshift-cluster-storage-operator(944eac68-e72b-4aed-b5dc-d7d9703178a3)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" podUID="944eac68-e72b-4aed-b5dc-d7d9703178a3" Mar 19 12:08:18.647931 master-0 kubenswrapper[7454]: I0319 12:08:18.647865 7454 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed7034eee202d25f8fdd5bf58084d919" path="/var/lib/kubelet/pods/ed7034eee202d25f8fdd5bf58084d919/volumes" Mar 19 12:08:22.633763 master-0 kubenswrapper[7454]: I0319 12:08:22.633688 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:08:22.668130 master-0 kubenswrapper[7454]: I0319 12:08:22.668036 7454 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="5e1ea45d-b0db-4c27-934c-79de38b9b6e8" Mar 19 12:08:22.668130 master-0 kubenswrapper[7454]: I0319 12:08:22.668095 7454 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="5e1ea45d-b0db-4c27-934c-79de38b9b6e8" Mar 19 12:08:22.690860 master-0 kubenswrapper[7454]: I0319 12:08:22.686889 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 19 12:08:22.690860 master-0 kubenswrapper[7454]: I0319 12:08:22.688134 7454 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:08:22.696866 master-0 kubenswrapper[7454]: I0319 12:08:22.696773 7454 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 19 12:08:22.706921 master-0 kubenswrapper[7454]: I0319 12:08:22.706870 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:08:22.725245 master-0 kubenswrapper[7454]: I0319 12:08:22.725158 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 19 12:08:22.750131 master-0 kubenswrapper[7454]: W0319 12:08:22.750061 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e27b7d086edf5d2cf47b703574641d8.slice/crio-8b160a1a52470caaf8eb5167c80599083e3f1829f2580cc4817859648d8bb802 WatchSource:0}: Error finding container 8b160a1a52470caaf8eb5167c80599083e3f1829f2580cc4817859648d8bb802: Status 404 returned error can't find the container with id 8b160a1a52470caaf8eb5167c80599083e3f1829f2580cc4817859648d8bb802 Mar 19 12:08:23.520392 master-0 kubenswrapper[7454]: I0319 12:08:23.520315 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerDied","Data":"04102fb37d09b73e728e34206b1d91a20ab150cf6fe0171a324821c07888079f"} Mar 19 12:08:23.521245 master-0 kubenswrapper[7454]: I0319 12:08:23.520093 7454 generic.go:334] "Generic (PLEG): container finished" podID="8e27b7d086edf5d2cf47b703574641d8" containerID="04102fb37d09b73e728e34206b1d91a20ab150cf6fe0171a324821c07888079f" exitCode=0 Mar 19 12:08:23.521397 master-0 kubenswrapper[7454]: I0319 12:08:23.521320 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerStarted","Data":"8b160a1a52470caaf8eb5167c80599083e3f1829f2580cc4817859648d8bb802"} Mar 19 12:08:24.572205 master-0 kubenswrapper[7454]: I0319 12:08:24.572146 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerStarted","Data":"fcec6c469a1150ebd576b3e8ddd08ae79f306b35899ebd8eb5044a4ccd5c6c61"} Mar 19 12:08:24.572205 master-0 kubenswrapper[7454]: I0319 12:08:24.572215 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerStarted","Data":"d4b6f2e178f5cea03cca73846d1f496d006bc91e2a6e21d8cb7ab57e7c076671"} Mar 19 12:08:24.573156 master-0 kubenswrapper[7454]: I0319 12:08:24.572230 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerStarted","Data":"6e5f36c23efb75db8a09134649847dcb43474b94fa919dd3367661556f399de4"} Mar 19 12:08:24.573156 master-0 kubenswrapper[7454]: I0319 12:08:24.572408 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:08:24.601515 master-0 kubenswrapper[7454]: I0319 12:08:24.601405 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=2.601347175 podStartE2EDuration="2.601347175s" podCreationTimestamp="2026-03-19 12:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:08:24.594992716 +0000 UTC m=+874.225458629" watchObservedRunningTime="2026-03-19 12:08:24.601347175 +0000 UTC m=+874.231813128" Mar 19 12:08:29.633373 master-0 kubenswrapper[7454]: I0319 12:08:29.633293 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:08:29.657858 master-0 kubenswrapper[7454]: I0319 12:08:29.657781 7454 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="fcea8a18-6922-4ca0-9346-76e3f525725a" Mar 19 12:08:29.657858 master-0 kubenswrapper[7454]: I0319 12:08:29.657830 7454 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="fcea8a18-6922-4ca0-9346-76e3f525725a" Mar 19 12:08:29.797164 master-0 kubenswrapper[7454]: I0319 12:08:29.793451 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 19 12:08:29.800324 master-0 kubenswrapper[7454]: I0319 12:08:29.800250 7454 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:08:29.807270 master-0 kubenswrapper[7454]: I0319 12:08:29.807216 7454 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 19 12:08:29.819790 master-0 kubenswrapper[7454]: I0319 12:08:29.819737 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:08:29.826021 master-0 kubenswrapper[7454]: I0319 12:08:29.825963 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 19 12:08:29.852806 master-0 kubenswrapper[7454]: W0319 12:08:29.852718 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09672015532ae9d1d74ae4d426cd904b.slice/crio-401877adce8e78dfdd3ac293a53a75da77fa4a3177086a087aa6915ac4d36604 WatchSource:0}: Error finding container 401877adce8e78dfdd3ac293a53a75da77fa4a3177086a087aa6915ac4d36604: Status 404 returned error can't find the container with id 401877adce8e78dfdd3ac293a53a75da77fa4a3177086a087aa6915ac4d36604 Mar 19 12:08:30.655824 master-0 kubenswrapper[7454]: I0319 12:08:30.655192 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"09672015532ae9d1d74ae4d426cd904b","Type":"ContainerStarted","Data":"e2254e5955e606c47be9604d12c39e06178d4d59ccf279a6986ce5edd6dc066e"} Mar 19 12:08:30.655824 master-0 kubenswrapper[7454]: I0319 12:08:30.655233 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"09672015532ae9d1d74ae4d426cd904b","Type":"ContainerStarted","Data":"0caac3ca6bbe34a0e2d497521111d7392578df46354c8eb9456dc2e8b18fadb9"} Mar 19 12:08:30.655824 master-0 kubenswrapper[7454]: I0319 12:08:30.655246 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"09672015532ae9d1d74ae4d426cd904b","Type":"ContainerStarted","Data":"401877adce8e78dfdd3ac293a53a75da77fa4a3177086a087aa6915ac4d36604"} Mar 19 12:08:31.634121 master-0 kubenswrapper[7454]: I0319 12:08:31.634058 7454 scope.go:117] "RemoveContainer" containerID="c5a947116c6a12f89fdc1149ba43ce5607536b57e0862f6ca233d92f281d6f5b" Mar 19 12:08:31.634440 master-0 kubenswrapper[7454]: E0319 12:08:31.634394 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-6m654_openshift-cluster-storage-operator(944eac68-e72b-4aed-b5dc-d7d9703178a3)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" podUID="944eac68-e72b-4aed-b5dc-d7d9703178a3" Mar 19 12:08:31.649449 master-0 kubenswrapper[7454]: I0319 12:08:31.649384 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"09672015532ae9d1d74ae4d426cd904b","Type":"ContainerStarted","Data":"3fff7305ffab3c7b2d64fb017b4d322893f65a346d3d05dc9207a0c3f727bb4b"} Mar 19 12:08:31.649449 master-0 kubenswrapper[7454]: I0319 12:08:31.649450 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"09672015532ae9d1d74ae4d426cd904b","Type":"ContainerStarted","Data":"14e2eab8d6fc7f70b2c656df6e5623f56e87c29ceaaedf3b47b4662d233279d5"} Mar 19 12:08:31.680493 master-0 kubenswrapper[7454]: I0319 12:08:31.680391 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
podStartSLOduration=2.680371193 podStartE2EDuration="2.680371193s" podCreationTimestamp="2026-03-19 12:08:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:08:31.675752658 +0000 UTC m=+881.306218591" watchObservedRunningTime="2026-03-19 12:08:31.680371193 +0000 UTC m=+881.310837116" Mar 19 12:08:39.820408 master-0 kubenswrapper[7454]: I0319 12:08:39.820317 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:08:39.820408 master-0 kubenswrapper[7454]: I0319 12:08:39.820387 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:08:39.823646 master-0 kubenswrapper[7454]: I0319 12:08:39.820496 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:08:39.823646 master-0 kubenswrapper[7454]: I0319 12:08:39.821011 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:08:39.828382 master-0 kubenswrapper[7454]: I0319 12:08:39.827903 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:08:39.830209 master-0 kubenswrapper[7454]: I0319 12:08:39.830144 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:08:40.740895 master-0 kubenswrapper[7454]: I0319 12:08:40.732302 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:08:40.740895 master-0 kubenswrapper[7454]: I0319 12:08:40.736318 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:08:44.634690 master-0 kubenswrapper[7454]: I0319 12:08:44.634521 7454 scope.go:117] "RemoveContainer" containerID="c5a947116c6a12f89fdc1149ba43ce5607536b57e0862f6ca233d92f281d6f5b" Mar 19 12:08:44.635486 master-0 kubenswrapper[7454]: E0319 12:08:44.635030 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-6m654_openshift-cluster-storage-operator(944eac68-e72b-4aed-b5dc-d7d9703178a3)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" podUID="944eac68-e72b-4aed-b5dc-d7d9703178a3" Mar 19 12:08:44.759662 master-0 kubenswrapper[7454]: I0319 12:08:44.759598 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-btppx_b80027fd-7b39-477a-a337-ff9bb08e7eeb/ingress-operator/4.log" Mar 19 12:08:44.760409 master-0 kubenswrapper[7454]: I0319 12:08:44.760366 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-btppx_b80027fd-7b39-477a-a337-ff9bb08e7eeb/ingress-operator/3.log" Mar 19 12:08:44.760937 master-0 kubenswrapper[7454]: I0319 12:08:44.760889 7454 generic.go:334] "Generic (PLEG): container finished" 
podID="b80027fd-7b39-477a-a337-ff9bb08e7eeb" containerID="b9013cf33c6b53af293ae5f76c1dea25442713e290f755f0ed35851ad4f7ec4d" exitCode=1 Mar 19 12:08:44.761040 master-0 kubenswrapper[7454]: I0319 12:08:44.760966 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" event={"ID":"b80027fd-7b39-477a-a337-ff9bb08e7eeb","Type":"ContainerDied","Data":"b9013cf33c6b53af293ae5f76c1dea25442713e290f755f0ed35851ad4f7ec4d"} Mar 19 12:08:44.761107 master-0 kubenswrapper[7454]: I0319 12:08:44.761057 7454 scope.go:117] "RemoveContainer" containerID="0618d6d0445d7e095cd15b094fe882be49fcec49db027db4fe7de076025a2a7e" Mar 19 12:08:44.761942 master-0 kubenswrapper[7454]: I0319 12:08:44.761906 7454 scope.go:117] "RemoveContainer" containerID="b9013cf33c6b53af293ae5f76c1dea25442713e290f755f0ed35851ad4f7ec4d" Mar 19 12:08:44.762451 master-0 kubenswrapper[7454]: E0319 12:08:44.762381 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-btppx_openshift-ingress-operator(b80027fd-7b39-477a-a337-ff9bb08e7eeb)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" podUID="b80027fd-7b39-477a-a337-ff9bb08e7eeb" Mar 19 12:08:45.777539 master-0 kubenswrapper[7454]: I0319 12:08:45.777479 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-btppx_b80027fd-7b39-477a-a337-ff9bb08e7eeb/ingress-operator/4.log" Mar 19 12:08:47.798387 master-0 kubenswrapper[7454]: I0319 12:08:47.798333 7454 generic.go:334] "Generic (PLEG): container finished" podID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerID="6f74355f30b0cc7b3534f39a3335ceb85c6bdd019a4b22eade41702408961aed" exitCode=0 Mar 19 12:08:47.798387 master-0 kubenswrapper[7454]: I0319 12:08:47.798385 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" event={"ID":"91112ce6-4f9d-44c1-a4e7-fea126554bcf","Type":"ContainerDied","Data":"6f74355f30b0cc7b3534f39a3335ceb85c6bdd019a4b22eade41702408961aed"} Mar 19 12:08:47.798935 master-0 kubenswrapper[7454]: I0319 12:08:47.798432 7454 scope.go:117] "RemoveContainer" containerID="5204ec6a181aadcc019743971b04d16299507e076f3ad2bde88b1a3554a20992" Mar 19 12:08:48.805965 master-0 kubenswrapper[7454]: I0319 12:08:48.805929 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" event={"ID":"91112ce6-4f9d-44c1-a4e7-fea126554bcf","Type":"ContainerStarted","Data":"a17333f8b7653c93420e9827fce00e5a871f02fd861b2a225722f6e8fbb5e010"} Mar 19 12:08:49.657550 master-0 kubenswrapper[7454]: I0319 12:08:49.657482 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 12:08:49.660572 master-0 kubenswrapper[7454]: I0319 12:08:49.660514 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:08:49.660572 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:08:49.660572 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:08:49.660572 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:08:49.660765 master-0 
kubenswrapper[7454]: I0319 12:08:49.660591 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:08:50.662710 master-0 kubenswrapper[7454]: I0319 12:08:50.662669 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:08:50.662710 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:08:50.662710 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:08:50.662710 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:08:50.663374 master-0 kubenswrapper[7454]: I0319 12:08:50.663348 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:08:51.660461 master-0 kubenswrapper[7454]: I0319 12:08:51.660389 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:08:51.660461 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:08:51.660461 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:08:51.660461 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:08:51.660969 master-0 kubenswrapper[7454]: I0319 12:08:51.660469 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:08:52.659169 master-0 kubenswrapper[7454]: I0319 12:08:52.659087 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:08:52.659169 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:08:52.659169 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:08:52.659169 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:08:52.659754 master-0 kubenswrapper[7454]: I0319 12:08:52.659199 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:08:53.660381 master-0 kubenswrapper[7454]: I0319 12:08:53.660284 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:08:53.660381 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:08:53.660381 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:08:53.660381 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:08:53.660381 
master-0 kubenswrapper[7454]: I0319 12:08:53.660371 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:08:54.658941 master-0 kubenswrapper[7454]: I0319 12:08:54.658855 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:08:54.658941 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:08:54.658941 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:08:54.658941 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:08:54.658941 master-0 kubenswrapper[7454]: I0319 12:08:54.658925 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:08:55.089626 master-0 kubenswrapper[7454]: I0319 12:08:55.089530 7454 scope.go:117] "RemoveContainer" containerID="0b7cadf57c1ff393897dfb481975475d3dd6a6c04a5c37d34ce9d4c14fc55d3e" Mar 19 12:08:55.113494 master-0 kubenswrapper[7454]: I0319 12:08:55.113440 7454 scope.go:117] "RemoveContainer" containerID="6d38396688a212d80e4b9440cc838a81e9ba0076c58cc35f80f3248581700f34" Mar 19 12:08:55.131870 master-0 kubenswrapper[7454]: I0319 12:08:55.131768 7454 scope.go:117] "RemoveContainer" containerID="7ba9fe238d802cb5b3d8a7a91252294e09ef5a02de2e8f653eef99bd12ecd678" Mar 19 12:08:55.150078 master-0 kubenswrapper[7454]: I0319 12:08:55.150018 7454 scope.go:117] "RemoveContainer" containerID="4d47a2e9aa1638460fa6ef96bf2d0249d38af6d72c57ab083a850e1599710d6d" Mar 19 12:08:55.165184 master-0 kubenswrapper[7454]: I0319 12:08:55.165142 7454 scope.go:117] "RemoveContainer" containerID="c2d0e5370bf40fbdeb8944db50e89737b0a663a2967772c4a3f69a71c3dd5111" Mar 19 12:08:55.661069 master-0 kubenswrapper[7454]: I0319 12:08:55.660975 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:08:55.661069 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:08:55.661069 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:08:55.661069 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:08:55.661556 master-0 kubenswrapper[7454]: I0319 12:08:55.661071 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:08:56.659942 master-0 kubenswrapper[7454]: I0319 12:08:56.659870 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:08:56.659942 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:08:56.659942 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:08:56.659942 
master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:08:56.660681 master-0 kubenswrapper[7454]: I0319 12:08:56.659949 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:08:57.634547 master-0 kubenswrapper[7454]: I0319 12:08:57.634456 7454 scope.go:117] "RemoveContainer" containerID="b9013cf33c6b53af293ae5f76c1dea25442713e290f755f0ed35851ad4f7ec4d" Mar 19 12:08:57.635035 master-0 kubenswrapper[7454]: E0319 12:08:57.634942 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-btppx_openshift-ingress-operator(b80027fd-7b39-477a-a337-ff9bb08e7eeb)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" podUID="b80027fd-7b39-477a-a337-ff9bb08e7eeb" Mar 19 12:08:57.656946 master-0 kubenswrapper[7454]: I0319 12:08:57.656789 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 12:08:57.659793 master-0 kubenswrapper[7454]: I0319 12:08:57.659749 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:08:57.659793 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:08:57.659793 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:08:57.659793 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:08:57.660933 master-0 kubenswrapper[7454]: I0319 12:08:57.659860 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:08:58.634094 master-0 kubenswrapper[7454]: I0319 12:08:58.634019 7454 scope.go:117] "RemoveContainer" containerID="c5a947116c6a12f89fdc1149ba43ce5607536b57e0862f6ca233d92f281d6f5b" Mar 19 12:08:58.634435 master-0 kubenswrapper[7454]: E0319 12:08:58.634294 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-6m654_openshift-cluster-storage-operator(944eac68-e72b-4aed-b5dc-d7d9703178a3)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" podUID="944eac68-e72b-4aed-b5dc-d7d9703178a3" Mar 19 12:08:58.659354 master-0 kubenswrapper[7454]: I0319 12:08:58.659249 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:08:58.659354 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:08:58.659354 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:08:58.659354 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:08:58.659354 master-0 kubenswrapper[7454]: I0319 12:08:58.659309 7454 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:08:59.658991 master-0 kubenswrapper[7454]: I0319 12:08:59.658896 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:08:59.658991 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:08:59.658991 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:08:59.658991 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:08:59.660054 master-0 kubenswrapper[7454]: I0319 12:08:59.658989 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:09:00.659510 master-0 kubenswrapper[7454]: I0319 12:09:00.659435 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:09:00.659510 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:09:00.659510 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:09:00.659510 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:09:00.660506 master-0 kubenswrapper[7454]: I0319 12:09:00.659537 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:09:01.660845 master-0 kubenswrapper[7454]: I0319 12:09:01.660464 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:09:01.660845 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:09:01.660845 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:09:01.660845 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:09:01.660845 master-0 kubenswrapper[7454]: I0319 12:09:01.660559 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:09:02.658944 master-0 kubenswrapper[7454]: I0319 12:09:02.658874 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:09:02.658944 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:09:02.658944 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:09:02.658944 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:09:02.659393 master-0 kubenswrapper[7454]: I0319 12:09:02.658954 7454 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:09:03.659863 master-0 kubenswrapper[7454]: I0319 12:09:03.659761 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:09:03.659863 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:09:03.659863 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:09:03.659863 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:09:03.660550 master-0 kubenswrapper[7454]: I0319 12:09:03.659912 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:09:04.659172 master-0 kubenswrapper[7454]: I0319 12:09:04.659096 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:09:04.659172 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:09:04.659172 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:09:04.659172 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:09:04.659510 master-0 kubenswrapper[7454]: I0319 12:09:04.659210 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:09:05.660555 master-0 kubenswrapper[7454]: I0319 12:09:05.660454 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:09:05.660555 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:09:05.660555 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:09:05.660555 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:09:05.660555 master-0 kubenswrapper[7454]: I0319 12:09:05.660539 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:09:06.662483 master-0 kubenswrapper[7454]: I0319 12:09:06.662391 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:09:06.662483 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:09:06.662483 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:09:06.662483 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:09:06.663513 master-0 kubenswrapper[7454]: I0319 12:09:06.662495 7454 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:09:07.660304 master-0 kubenswrapper[7454]: I0319 12:09:07.660170 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:09:07.660304 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:09:07.660304 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:09:07.660304 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:09:07.660304 master-0 kubenswrapper[7454]: I0319 12:09:07.660262 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:09:08.667300 master-0 kubenswrapper[7454]: I0319 12:09:08.667191 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:09:08.667300 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:09:08.667300 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:09:08.667300 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:09:08.668321 master-0 kubenswrapper[7454]: I0319 12:09:08.667321 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:09:09.659372 master-0 kubenswrapper[7454]: I0319 12:09:09.659282 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:09:09.659372 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:09:09.659372 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:09:09.659372 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:09:09.659372 master-0 kubenswrapper[7454]: I0319 12:09:09.659359 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:09:10.634053 master-0 kubenswrapper[7454]: I0319 12:09:10.633972 7454 scope.go:117] "RemoveContainer" containerID="c5a947116c6a12f89fdc1149ba43ce5607536b57e0862f6ca233d92f281d6f5b" Mar 19 12:09:10.660438 master-0 kubenswrapper[7454]: I0319 12:09:10.660373 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:09:10.660438 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:09:10.660438 master-0 
kubenswrapper[7454]: [+]process-running ok
Mar 19 12:09:10.660438 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:09:10.660770 master-0 kubenswrapper[7454]: I0319 12:09:10.660474 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:09:10.998650 master-0 kubenswrapper[7454]: I0319 12:09:10.998588 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-6m654_944eac68-e72b-4aed-b5dc-d7d9703178a3/snapshot-controller/4.log"
Mar 19 12:09:10.998935 master-0 kubenswrapper[7454]: I0319 12:09:10.998676 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" event={"ID":"944eac68-e72b-4aed-b5dc-d7d9703178a3","Type":"ContainerStarted","Data":"c386d02cc2d7e55926d96e3821e53cb76c4ab587ae912a63758899e3b100a5d3"}
Mar 19 12:09:11.660255 master-0 kubenswrapper[7454]: I0319 12:09:11.660168 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:09:11.660255 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:09:11.660255 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:09:11.660255 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:09:11.660255 master-0 kubenswrapper[7454]: I0319 12:09:11.660251 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:09:12.634917 master-0 kubenswrapper[7454]: I0319 12:09:12.634836 7454 scope.go:117] "RemoveContainer" containerID="b9013cf33c6b53af293ae5f76c1dea25442713e290f755f0ed35851ad4f7ec4d"
Mar 19 12:09:12.635301 master-0 kubenswrapper[7454]: E0319 12:09:12.635244 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-btppx_openshift-ingress-operator(b80027fd-7b39-477a-a337-ff9bb08e7eeb)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" podUID="b80027fd-7b39-477a-a337-ff9bb08e7eeb"
Mar 19 12:09:12.718060 master-0 kubenswrapper[7454]: I0319 12:09:12.717992 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
[the Startup probe failure block above (patch_prober.go:28 healthz output + prober.go:107 "Probe failed") repeats once per second for pod openshift-ingress/router-default-7dcf5569b5-lkpgl from 12:09:12.660028 through 12:09:24.659472]
Mar 19 12:09:25.634318 master-0 kubenswrapper[7454]: I0319 12:09:25.634246 7454 scope.go:117] "RemoveContainer" containerID="b9013cf33c6b53af293ae5f76c1dea25442713e290f755f0ed35851ad4f7ec4d"
Mar 19 12:09:25.634740 master-0 kubenswrapper[7454]: E0319 12:09:25.634691 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-btppx_openshift-ingress-operator(b80027fd-7b39-477a-a337-ff9bb08e7eeb)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" podUID="b80027fd-7b39-477a-a337-ff9bb08e7eeb"
[the same Startup probe failure block repeats once per second from 12:09:25.660543 through 12:09:36.663161]
Mar 19 12:09:37.634037 master-0 kubenswrapper[7454]: I0319 12:09:37.633966 7454 scope.go:117] "RemoveContainer" containerID="b9013cf33c6b53af293ae5f76c1dea25442713e290f755f0ed35851ad4f7ec4d"
Mar 19 12:09:37.634353 master-0 kubenswrapper[7454]: E0319 12:09:37.634233 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-btppx_openshift-ingress-operator(b80027fd-7b39-477a-a337-ff9bb08e7eeb)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" podUID="b80027fd-7b39-477a-a337-ff9bb08e7eeb"
[the same Startup probe failure block repeats once per second from 12:09:37.659185 through 12:09:49.660020]
Mar 19 12:09:50.638171 master-0 kubenswrapper[7454]: I0319 12:09:50.638115 7454 scope.go:117] "RemoveContainer" containerID="b9013cf33c6b53af293ae5f76c1dea25442713e290f755f0ed35851ad4f7ec4d"
Mar 19 12:09:50.638716 master-0 kubenswrapper[7454]: E0319 12:09:50.638417 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-btppx_openshift-ingress-operator(b80027fd-7b39-477a-a337-ff9bb08e7eeb)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" podUID="b80027fd-7b39-477a-a337-ff9bb08e7eeb"
[the same Startup probe failure block repeats once per second from 12:09:50.659827 through 12:10:01.660384]
Mar 19 12:10:02.634582 master-0 kubenswrapper[7454]: I0319 12:10:02.634514 7454 scope.go:117] "RemoveContainer" containerID="b9013cf33c6b53af293ae5f76c1dea25442713e290f755f0ed35851ad4f7ec4d"
Mar 19 12:10:02.635241 master-0 kubenswrapper[7454]: E0319 12:10:02.635188 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-btppx_openshift-ingress-operator(b80027fd-7b39-477a-a337-ff9bb08e7eeb)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" podUID="b80027fd-7b39-477a-a337-ff9bb08e7eeb"
[the same Startup probe failure block repeats once per second from 12:10:02.659576 through 12:10:13.659993]
Mar 19 12:10:14.633679 master-0 kubenswrapper[7454]: I0319 12:10:14.633639 7454 scope.go:117] "RemoveContainer" containerID="b9013cf33c6b53af293ae5f76c1dea25442713e290f755f0ed35851ad4f7ec4d"
[the same Startup probe failure block repeats at 12:10:14.659857]
Mar 19 12:10:15.516467 master-0 kubenswrapper[7454]: I0319 12:10:15.516415 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-btppx_b80027fd-7b39-477a-a337-ff9bb08e7eeb/ingress-operator/4.log"
Mar 19 12:10:15.517099 master-0 kubenswrapper[7454]: I0319 12:10:15.517044 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" event={"ID":"b80027fd-7b39-477a-a337-ff9bb08e7eeb","Type":"ContainerStarted","Data":"f6cff670ffd3b7d67c924d90e3c87c5305a542b098fae4298d16010aa46c7cd3"}
[the same Startup probe failure block repeats once per second from 12:10:15.661233 through 12:10:27.660736]
Mar 19 12:10:28.660549 master-0 kubenswrapper[7454]: I0319 12:10:28.660443 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:10:28.660549 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:10:28.660549 master-0
kubenswrapper[7454]: [+]process-running ok Mar 19 12:10:28.660549 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:10:28.661563 master-0 kubenswrapper[7454]: I0319 12:10:28.660555 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:10:29.660552 master-0 kubenswrapper[7454]: I0319 12:10:29.660469 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:10:29.660552 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:10:29.660552 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:10:29.660552 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:10:29.661563 master-0 kubenswrapper[7454]: I0319 12:10:29.660565 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:10:30.662526 master-0 kubenswrapper[7454]: I0319 12:10:30.662411 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:10:30.662526 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:10:30.662526 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:10:30.662526 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:10:30.663597 master-0 kubenswrapper[7454]: I0319 12:10:30.662558 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:10:31.661304 master-0 kubenswrapper[7454]: I0319 12:10:31.661210 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:10:31.661304 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:10:31.661304 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:10:31.661304 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:10:31.661786 master-0 kubenswrapper[7454]: I0319 12:10:31.661324 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:10:32.660624 master-0 kubenswrapper[7454]: I0319 12:10:32.660547 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:10:32.660624 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:10:32.660624 
master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:10:32.660624 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:10:32.661996 master-0 kubenswrapper[7454]: I0319 12:10:32.660633 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:10:33.660365 master-0 kubenswrapper[7454]: I0319 12:10:33.660281 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:10:33.660365 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:10:33.660365 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:10:33.660365 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:10:33.661139 master-0 kubenswrapper[7454]: I0319 12:10:33.660389 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:10:34.661951 master-0 kubenswrapper[7454]: I0319 12:10:34.660597 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:10:34.661951 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:10:34.661951 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:10:34.661951 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:10:34.661951 master-0 kubenswrapper[7454]: I0319 12:10:34.660728 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:10:35.662321 master-0 kubenswrapper[7454]: I0319 12:10:35.662191 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:10:35.662321 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:10:35.662321 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:10:35.662321 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:10:35.662321 master-0 kubenswrapper[7454]: I0319 12:10:35.662295 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:10:36.659877 master-0 kubenswrapper[7454]: I0319 12:10:36.659556 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:10:36.659877 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 
12:10:36.659877 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:10:36.659877 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:10:36.659877 master-0 kubenswrapper[7454]: I0319 12:10:36.659618 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:10:37.660639 master-0 kubenswrapper[7454]: I0319 12:10:37.660556 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:10:37.660639 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:10:37.660639 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:10:37.660639 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:10:37.660639 master-0 kubenswrapper[7454]: I0319 12:10:37.660628 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:10:38.659586 master-0 kubenswrapper[7454]: I0319 12:10:38.659504 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:10:38.659586 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:10:38.659586 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:10:38.659586 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:10:38.660344 master-0 kubenswrapper[7454]: I0319 12:10:38.659597 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:10:39.659790 master-0 kubenswrapper[7454]: I0319 12:10:39.659586 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:10:39.659790 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:10:39.659790 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:10:39.659790 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:10:39.659790 master-0 kubenswrapper[7454]: I0319 12:10:39.659682 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:10:40.660353 master-0 kubenswrapper[7454]: I0319 12:10:40.660247 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:10:40.660353 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld 
Mar 19 12:10:40.660353 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:10:40.660353 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:10:40.660353 master-0 kubenswrapper[7454]: I0319 12:10:40.660340 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:10:41.659931 master-0 kubenswrapper[7454]: I0319 12:10:41.659856 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:10:41.659931 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:10:41.659931 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:10:41.659931 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:10:41.660379 master-0 kubenswrapper[7454]: I0319 12:10:41.659951 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:10:42.660295 master-0 kubenswrapper[7454]: I0319 12:10:42.660241 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:10:42.660295 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:10:42.660295 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:10:42.660295 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:10:42.661334 master-0 kubenswrapper[7454]: I0319 12:10:42.661052 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:10:43.660718 master-0 kubenswrapper[7454]: I0319 12:10:43.660655 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:10:43.660718 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:10:43.660718 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:10:43.660718 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:10:43.662343 master-0 kubenswrapper[7454]: I0319 12:10:43.662260 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:10:44.660822 master-0 kubenswrapper[7454]: I0319 12:10:44.660691 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:10:44.660822 master-0 kubenswrapper[7454]: [-]has-synced failed: reason 
withheld Mar 19 12:10:44.660822 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:10:44.660822 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:10:44.662036 master-0 kubenswrapper[7454]: I0319 12:10:44.660871 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:10:45.660122 master-0 kubenswrapper[7454]: I0319 12:10:45.660036 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:10:45.660122 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:10:45.660122 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:10:45.660122 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:10:45.660122 master-0 kubenswrapper[7454]: I0319 12:10:45.660113 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:10:46.660558 master-0 kubenswrapper[7454]: I0319 12:10:46.660476 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:10:46.660558 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:10:46.660558 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:10:46.660558 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:10:46.661586 master-0 kubenswrapper[7454]: I0319 12:10:46.660576 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:10:47.660459 master-0 kubenswrapper[7454]: I0319 12:10:47.660368 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:10:47.660459 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:10:47.660459 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:10:47.660459 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:10:47.661527 master-0 kubenswrapper[7454]: I0319 12:10:47.660484 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:10:48.659765 master-0 kubenswrapper[7454]: I0319 12:10:48.659696 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:10:48.659765 master-0 kubenswrapper[7454]: [-]has-synced failed: 
Mar 19 12:10:48.659765 master-0 kubenswrapper[7454]: I0319 12:10:48.659696 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:10:48.659765 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:10:48.659765 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:10:48.659765 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:10:48.660067 master-0 kubenswrapper[7454]: I0319 12:10:48.659781 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:10:48.660067 master-0 kubenswrapper[7454]: I0319 12:10:48.659870 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl"
Mar 19 12:10:48.660750 master-0 kubenswrapper[7454]: I0319 12:10:48.660713 7454 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"a17333f8b7653c93420e9827fce00e5a871f02fd861b2a225722f6e8fbb5e010"} pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" containerMessage="Container router failed startup probe, will be restarted"
Mar 19 12:10:48.660821 master-0 kubenswrapper[7454]: I0319 12:10:48.660763 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" containerID="cri-o://a17333f8b7653c93420e9827fce00e5a871f02fd861b2a225722f6e8fbb5e010" gracePeriod=3600
Mar 19 12:11:35.666519 master-0 kubenswrapper[7454]: I0319 12:11:35.666411 7454 generic.go:334] "Generic (PLEG): container finished" podID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerID="a17333f8b7653c93420e9827fce00e5a871f02fd861b2a225722f6e8fbb5e010" exitCode=0
Mar 19 12:11:35.666519 master-0 kubenswrapper[7454]: I0319 12:11:35.666466 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" event={"ID":"91112ce6-4f9d-44c1-a4e7-fea126554bcf","Type":"ContainerDied","Data":"a17333f8b7653c93420e9827fce00e5a871f02fd861b2a225722f6e8fbb5e010"}
Mar 19 12:11:35.666519 master-0 kubenswrapper[7454]: I0319 12:11:35.666534 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" event={"ID":"91112ce6-4f9d-44c1-a4e7-fea126554bcf","Type":"ContainerStarted","Data":"02580d8818d0f202a13ac68e82f20d4293f3530799a86f4d7e26b5116036380f"}
Mar 19 12:11:35.668040 master-0 kubenswrapper[7454]: I0319 12:11:35.666561 7454 scope.go:117] "RemoveContainer" containerID="6f74355f30b0cc7b3534f39a3335ceb85c6bdd019a4b22eade41702408961aed"
Mar 19 12:11:36.661697 master-0 kubenswrapper[7454]: I0319 12:11:36.660998 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl"
Mar 19 12:11:36.664365 master-0 kubenswrapper[7454]: I0319 12:11:36.664300 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:11:36.664365 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:11:36.664365 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:11:36.664365 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:11:36.664738 master-0 kubenswrapper[7454]: I0319 12:11:36.664388 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:11:37.656612 master-0 kubenswrapper[7454]: I0319 12:11:37.656539 7454 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 12:11:37.659193 master-0 kubenswrapper[7454]: I0319 12:11:37.659122 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:11:37.659193 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:11:37.659193 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:11:37.659193 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:11:37.659386 master-0 kubenswrapper[7454]: I0319 12:11:37.659229 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:11:37.706014 master-0 kubenswrapper[7454]: I0319 12:11:37.705954 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-pkgvq_d3017b5e-178e-49de-89d2-817a18398203/authentication-operator/1.log" Mar 19 12:11:37.706733 master-0 kubenswrapper[7454]: I0319 12:11:37.706681 7454 generic.go:334] "Generic (PLEG): container finished" podID="d3017b5e-178e-49de-89d2-817a18398203" containerID="6dedac466f0712e9cb88164ac3beff662b4163f5b6d34ec1e978daf51f4b9061" exitCode=1 Mar 19 12:11:37.706844 master-0 kubenswrapper[7454]: I0319 12:11:37.706756 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" event={"ID":"d3017b5e-178e-49de-89d2-817a18398203","Type":"ContainerDied","Data":"6dedac466f0712e9cb88164ac3beff662b4163f5b6d34ec1e978daf51f4b9061"} Mar 19 12:11:37.706911 master-0 kubenswrapper[7454]: I0319 12:11:37.706866 7454 scope.go:117] "RemoveContainer" containerID="ec99e0001708bd8c36619c411325f2d4bdab0ecd7770deeae64fffd8bdf90881" Mar 19 12:11:37.707748 master-0 kubenswrapper[7454]: I0319 12:11:37.707680 7454 scope.go:117] "RemoveContainer" containerID="6dedac466f0712e9cb88164ac3beff662b4163f5b6d34ec1e978daf51f4b9061" Mar 19 12:11:38.659229 master-0 kubenswrapper[7454]: I0319 12:11:38.659171 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:11:38.659229 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:11:38.659229 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:11:38.659229 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:11:38.659229 master-0 kubenswrapper[7454]: I0319 12:11:38.659221 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:11:38.714955 master-0 kubenswrapper[7454]: I0319 12:11:38.714873 7454 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-pkgvq_d3017b5e-178e-49de-89d2-817a18398203/authentication-operator/1.log" Mar 19 12:11:38.715161 master-0 kubenswrapper[7454]: I0319 12:11:38.714995 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" event={"ID":"d3017b5e-178e-49de-89d2-817a18398203","Type":"ContainerStarted","Data":"50de772a7c55417ff26c0f06cc4e1e295815c158bb6e2317de46d1e8300d1e71"} Mar 19 12:11:39.660654 master-0 kubenswrapper[7454]: I0319 12:11:39.660564 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:11:39.660654 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:11:39.660654 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:11:39.660654 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:11:39.661823 master-0 kubenswrapper[7454]: I0319 12:11:39.660668 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:11:40.660391 master-0 kubenswrapper[7454]: I0319 12:11:40.660168 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:11:40.660391 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:11:40.660391 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:11:40.660391 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:11:40.660391 master-0 kubenswrapper[7454]: I0319 12:11:40.660281 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:11:41.660050 master-0 kubenswrapper[7454]: I0319 12:11:41.659880 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:11:41.660050 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:11:41.660050 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:11:41.660050 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:11:41.660050 master-0 kubenswrapper[7454]: I0319 12:11:41.659970 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:11:42.660449 master-0 kubenswrapper[7454]: I0319 12:11:42.660384 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
Mar 19 12:11:54.660089 master-0 kubenswrapper[7454]: I0319 12:11:54.660022 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:11:54.660089 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:11:54.660089 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:11:54.660089 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:11:54.660629 master-0 kubenswrapper[7454]: I0319 12:11:54.660105 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:11:55.233656 master-0 kubenswrapper[7454]: I0319 12:11:55.233564 7454 scope.go:117] "RemoveContainer" containerID="4ad628e89e7621359063e42ff965fafd7ff7510f8646a17316c1e2a0906b3609"
master-0 kubenswrapper[7454]: I0319 12:12:14.659648 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:12:14.659722 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:12:14.659722 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:12:14.659722 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:12:14.660749 master-0 kubenswrapper[7454]: I0319 12:12:14.659743 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:12:15.659317 master-0 kubenswrapper[7454]: I0319 12:12:15.659251 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:12:15.659317 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:12:15.659317 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:12:15.659317 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:12:15.659795 master-0 kubenswrapper[7454]: I0319 12:12:15.659348 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:12:16.020418 master-0 kubenswrapper[7454]: I0319 12:12:16.020348 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-btppx_b80027fd-7b39-477a-a337-ff9bb08e7eeb/ingress-operator/5.log" Mar 19 12:12:16.021355 master-0 kubenswrapper[7454]: I0319 12:12:16.021124 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-btppx_b80027fd-7b39-477a-a337-ff9bb08e7eeb/ingress-operator/4.log" Mar 19 12:12:16.022602 master-0 kubenswrapper[7454]: I0319 12:12:16.021497 7454 generic.go:334] "Generic (PLEG): container finished" podID="b80027fd-7b39-477a-a337-ff9bb08e7eeb" containerID="f6cff670ffd3b7d67c924d90e3c87c5305a542b098fae4298d16010aa46c7cd3" exitCode=1 Mar 19 12:12:16.022602 master-0 kubenswrapper[7454]: I0319 12:12:16.021556 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" event={"ID":"b80027fd-7b39-477a-a337-ff9bb08e7eeb","Type":"ContainerDied","Data":"f6cff670ffd3b7d67c924d90e3c87c5305a542b098fae4298d16010aa46c7cd3"} Mar 19 12:12:16.022602 master-0 kubenswrapper[7454]: I0319 12:12:16.021610 7454 scope.go:117] "RemoveContainer" containerID="b9013cf33c6b53af293ae5f76c1dea25442713e290f755f0ed35851ad4f7ec4d" Mar 19 12:12:16.025241 master-0 kubenswrapper[7454]: I0319 12:12:16.025088 7454 scope.go:117] "RemoveContainer" containerID="f6cff670ffd3b7d67c924d90e3c87c5305a542b098fae4298d16010aa46c7cd3" Mar 19 12:12:16.025567 master-0 kubenswrapper[7454]: E0319 12:12:16.025521 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator 
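The "back-off 2m40s" in the pod_workers.go entry above is the kubelet's CrashLoopBackOff delay: restart delays start at 10s and double on each consecutive crash, capped at 5m, so 160s corresponds to the fifth consecutive restart. A minimal sketch of that doubling schedule, assuming the kubelet's default 10s base and 5m cap (the function is illustrative, not the kubelet's actual implementation):

```go
package main

import (
	"fmt"
	"time"
)

// crashLoopDelay mirrors the kubelet's CrashLoopBackOff schedule:
// an initial 10s delay that doubles per consecutive crash, capped at 5m.
// (Illustrative sketch under those assumed defaults.)
func crashLoopDelay(restarts int) time.Duration {
	const (
		base     = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	d := base
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for r := 0; r <= 6; r++ {
		fmt.Printf("restart %d -> back-off %v\n", r, crashLoopDelay(r))
	}
	// restart 4 -> back-off 2m40s, matching the log entry above.
}
```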
Mar 19 12:12:16.660152 master-0 kubenswrapper[7454]: I0319 12:12:16.660006 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 12:12:16.660152 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld
Mar 19 12:12:16.660152 master-0 kubenswrapper[7454]: [+]process-running ok
Mar 19 12:12:16.660152 master-0 kubenswrapper[7454]: healthz check failed
Mar 19 12:12:16.660681 master-0 kubenswrapper[7454]: I0319 12:12:16.660190 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:12:17.033398 master-0 kubenswrapper[7454]: I0319 12:12:17.033307 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-btppx_b80027fd-7b39-477a-a337-ff9bb08e7eeb/ingress-operator/5.log"
[... the identical five-line Startup probe failure block repeats once per second from Mar 19 12:12:17 through Mar 19 12:12:28 ...]
Mar 19 12:12:29.634155 master-0 kubenswrapper[7454]: I0319 12:12:29.634046 7454 scope.go:117] "RemoveContainer" containerID="f6cff670ffd3b7d67c924d90e3c87c5305a542b098fae4298d16010aa46c7cd3"
Mar 19 12:12:29.634524 master-0 kubenswrapper[7454]: E0319 12:12:29.634333 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-btppx_openshift-ingress-operator(b80027fd-7b39-477a-a337-ff9bb08e7eeb)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" podUID="b80027fd-7b39-477a-a337-ff9bb08e7eeb"
[... the identical five-line Startup probe failure block repeats once per second from Mar 19 12:12:29 through Mar 19 12:12:40 ...]
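The five-line block that keeps repeating above is the router's health endpoint response body as captured by the kubelet prober: each subcheck is reported as "[+]name ok" or "[-]name failed: reason withheld", followed by an overall verdict, and any failing subcheck yields the HTTP 500 the probe sees. A minimal sketch of a handler that emits output in this aggregated style (check names, port, and types are illustrative, modeled on the log output rather than taken from the router's actual code):

```go
package main

import (
	"fmt"
	"net/http"
)

// check is one named health subcheck; run returns nil when healthy.
type check struct {
	name string
	run  func() error
}

// healthz writes one "[+]name ok" / "[-]name failed" line per subcheck
// and returns 500 if any subcheck fails, mirroring the probe output above.
func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		body, failed := "", false
		for _, c := range checks {
			if err := c.run(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError)
			fmt.Fprint(w, body+"healthz check failed\n")
			return
		}
		fmt.Fprint(w, body+"ok\n")
	}
}

func main() {
	checks := []check{
		{"backend-http", func() error { return fmt.Errorf("not ready") }},
		{"has-synced", func() error { return fmt.Errorf("not ready") }},
		{"process-running", func() error { return nil }},
	}
	http.Handle("/healthz", healthz(checks))
	_ = http.ListenAndServe(":1936", nil) // port is an assumption
}
```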
Mar 19 12:12:40.761784 master-0 kubenswrapper[7454]: I0319 12:12:40.761732 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-6975d7769d-nvxfv"]
Mar 19 12:12:40.762388 master-0 kubenswrapper[7454]: E0319 12:12:40.761992 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c" containerName="installer"
Mar 19 12:12:40.762388 master-0 kubenswrapper[7454]: I0319 12:12:40.762003 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c" containerName="installer"
Mar 19 12:12:40.762388 master-0 kubenswrapper[7454]: I0319 12:12:40.762136 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c" containerName="installer"
Mar 19 12:12:40.763020 master-0 kubenswrapper[7454]: I0319 12:12:40.763004 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv"
Mar 19 12:12:40.769709 master-0 kubenswrapper[7454]: I0319 12:12:40.769648 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle"
Mar 19 12:12:40.770675 master-0 kubenswrapper[7454]: I0319 12:12:40.770606 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client"
Mar 19 12:12:40.770759 master-0 kubenswrapper[7454]: I0319 12:12:40.770668 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config"
Mar 19 12:12:40.770759 master-0 kubenswrapper[7454]: I0319 12:12:40.770627 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls"
Mar 19 12:12:40.770967 master-0 kubenswrapper[7454]: I0319 12:12:40.770927 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-svqv2"
Mar 19 12:12:40.782986 master-0 kubenswrapper[7454]: I0319 12:12:40.782931 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs"
Mar 19 12:12:40.784968 master-0 kubenswrapper[7454]: I0319 12:12:40.784920 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38"
Mar 19 12:12:40.788269 master-0 kubenswrapper[7454]: I0319 12:12:40.788223 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-6975d7769d-nvxfv"]
Mar 19 12:12:40.850819 master-0 kubenswrapper[7454]: I0319 12:12:40.850764 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-telemeter-client-tls\") pod \"telemeter-client-6975d7769d-nvxfv\" (UID: \"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58\") " pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv"
[... matching reconciler_common.go:245 "VerifyControllerAttachedVolume started" entries follow at 12:12:40.850831-.851140 for volumes "secret-telemeter-client", "kube-api-access-vm9zf", "metrics-client-ca", "serving-certs-ca-bundle", "telemeter-trusted-ca-bundle", "federate-client-tls", and "secret-telemeter-client-kube-rbac-proxy-config" ...]
Mar 19 12:12:40.952889 master-0 kubenswrapper[7454]: I0319 12:12:40.952715 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-secret-telemeter-client\") pod \"telemeter-client-6975d7769d-nvxfv\" (UID: \"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58\") " pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv"
[... matching reconciler_common.go:218 "MountVolume started" entries follow at 12:12:40.953188-.954238 for the remaining seven volumes ...]
Mar 19 12:12:40.954423 master-0 kubenswrapper[7454]: I0319 12:12:40.954327 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-metrics-client-ca\") pod \"telemeter-client-6975d7769d-nvxfv\" (UID: \"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58\") " pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv"
[... matching operation_generator.go:637 "MountVolume.SetUp succeeded" entries follow for the remaining seven volumes, ending with "kube-api-access-vm9zf" at 12:12:40.970995 ...]
Mar 19 12:12:41.081219 master-0 kubenswrapper[7454]: I0319 12:12:41.081167 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv"
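The reconciler_common.go and operation_generator.go entries above show the kubelet's volume reconciler walking each of the pod's eight volumes through the same three steps: VerifyControllerAttachedVolume, MountVolume start, and MountVolume.SetUp success. A compressed sketch of that loop shape (types, function names, and the three-volume sample are illustrative, not kubelet source):

```go
package main

import "fmt"

// volume is a pod volume to reconcile; kind mirrors the plugin prefix
// seen in the log's UniqueName (kubernetes.io/secret, configmap, projected).
type volume struct {
	name string
	kind string
}

// reconcile walks each volume through verify -> mount -> setup, printing
// lines in roughly the order the kubelet entries above appear.
func reconcile(podUID string, vols []volume) {
	for _, v := range vols {
		fmt.Printf("VerifyControllerAttachedVolume started for volume %q (kubernetes.io/%s/%s-%s)\n",
			v.name, v.kind, podUID, v.name)
	}
	for _, v := range vols {
		fmt.Printf("MountVolume started for volume %q\n", v.name)
		// A real implementation would materialize the secret/configmap
		// contents on disk here before declaring success.
		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.name)
	}
}

func main() {
	reconcile("7c80f8d0-ee9b-4a4d-ba92-e241b2552e58", []volume{
		{"telemeter-client-tls", "secret"},
		{"metrics-client-ca", "configmap"},
		{"kube-api-access-vm9zf", "projected"},
	})
}
```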
Mar 19 12:12:41.496492 master-0 kubenswrapper[7454]: I0319 12:12:41.496427 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-6975d7769d-nvxfv"]
Mar 19 12:12:41.524094 master-0 kubenswrapper[7454]: W0319 12:12:41.523974 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c80f8d0_ee9b_4a4d_ba92_e241b2552e58.slice/crio-954ede16a95baa0dd18c714681dfe7d875a3e3012701640009a8298afe790b4b WatchSource:0}: Error finding container 954ede16a95baa0dd18c714681dfe7d875a3e3012701640009a8298afe790b4b: Status 404 returned error can't find the container with id 954ede16a95baa0dd18c714681dfe7d875a3e3012701640009a8298afe790b4b
Mar 19 12:12:41.527857 master-0 kubenswrapper[7454]: I0319 12:12:41.527544 7454 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
[... the identical five-line Startup probe failure block repeats once at Mar 19 12:12:41 ...]
Mar 19 12:12:42.236004 master-0 kubenswrapper[7454]: I0319 12:12:42.235921 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv" event={"ID":"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58","Type":"ContainerStarted","Data":"954ede16a95baa0dd18c714681dfe7d875a3e3012701640009a8298afe790b4b"}
[... the identical five-line Startup probe failure block repeats once per second from Mar 19 12:12:42 through Mar 19 12:12:43 ...]
Mar 19 12:12:44.254337 master-0 kubenswrapper[7454]: I0319 12:12:44.253934 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv" event={"ID":"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58","Type":"ContainerStarted","Data":"577376a46279ded36e6d5e477718a71d04391677d0d6651c2a7774693399b647"}
Mar 19 12:12:44.635728 master-0 kubenswrapper[7454]: I0319 12:12:44.635613 7454 scope.go:117] "RemoveContainer" containerID="f6cff670ffd3b7d67c924d90e3c87c5305a542b098fae4298d16010aa46c7cd3"
Mar 19 12:12:44.635941 master-0 kubenswrapper[7454]: E0319 12:12:44.635861 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-btppx_openshift-ingress-operator(b80027fd-7b39-477a-a337-ff9bb08e7eeb)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" podUID="b80027fd-7b39-477a-a337-ff9bb08e7eeb"
[... the identical five-line Startup probe failure block repeats once at Mar 19 12:12:44 ...]
Mar 19 12:12:45.276367 master-0 kubenswrapper[7454]: I0319 12:12:45.276232 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv" event={"ID":"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58","Type":"ContainerStarted","Data":"5a2c94da47d9c1b216f09bd25a8052420a84aabbc234b551a0c85ea8e919350d"}
Mar 19 12:12:45.276367 master-0 kubenswrapper[7454]: I0319 12:12:45.276279 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv" event={"ID":"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58","Type":"ContainerStarted","Data":"4646c25326c5856910d7118ebef713c6ab014d1f41c33f0e19ff896831f1b67c"}
[... the identical five-line Startup probe failure block repeats once per second from Mar 19 12:12:45 through Mar 19 12:12:54 ...]
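The strict one-second cadence of the probe blocks above implies a startup probe with periodSeconds set to 1; the router keeps getting probed until it either passes once or exhausts its failureThreshold. A minimal sketch of such a probe spec using the Kubernetes API types (the path, port, and threshold are assumptions for illustration, not values read from this cluster's router deployment):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// A startup probe that, like the one in the log, hits an HTTP health
	// endpoint every second. Path, port, and failureThreshold are assumptions.
	probe := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/healthz/ready",       // illustrative path
				Port: intstr.FromInt(1936),   // illustrative port
			},
		},
		PeriodSeconds:    1,   // matches the once-per-second failures above
		FailureThreshold: 120, // illustrative: allow up to 2 minutes to start
	}
	fmt.Printf("startupProbe: %+v\n", probe)
}
```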
Mar 19 12:12:55.277923 master-0 kubenswrapper[7454]: I0319 12:12:55.277831 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv" podStartSLOduration=11.853191466 podStartE2EDuration="15.277774682s" podCreationTimestamp="2026-03-19 12:12:40 +0000 UTC" firstStartedPulling="2026-03-19 12:12:41.5274844 +0000 UTC m=+1131.157950313" lastFinishedPulling="2026-03-19 12:12:44.952067616 +0000 UTC m=+1134.582533529" observedRunningTime="2026-03-19 12:12:45.307793029 +0000 UTC m=+1134.938259012" watchObservedRunningTime="2026-03-19 12:12:55.277774682 +0000 UTC m=+1144.908240605"
Mar 19 12:12:55.278872 master-0 kubenswrapper[7454]: I0319 12:12:55.278828 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Mar 19 12:12:55.279639 master-0 kubenswrapper[7454]: I0319 12:12:55.279598 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 19 12:12:55.287838 master-0 kubenswrapper[7454]: I0319 12:12:55.286382 7454 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-hhsz7"
Mar 19 12:12:55.287838 master-0 kubenswrapper[7454]: I0319 12:12:55.286559 7454 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 19 12:12:55.287838 master-0 kubenswrapper[7454]: I0319 12:12:55.286900 7454 scope.go:117] "RemoveContainer" containerID="8088add442d8a84ce49177d60c8f88d3eb643fdd316c8a11da9030fc8e5dfb04"
Mar 19 12:12:55.337315 master-0 kubenswrapper[7454]: I0319 12:12:55.304756 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Mar 19 12:12:55.406199 master-0 kubenswrapper[7454]: I0319 12:12:55.406136 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bfb8e49c-30e6-4939-9ef9-1323883a8d6a-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"bfb8e49c-30e6-4939-9ef9-1323883a8d6a\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 19 12:12:55.406513 master-0 kubenswrapper[7454]: I0319 12:12:55.406450 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bfb8e49c-30e6-4939-9ef9-1323883a8d6a-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"bfb8e49c-30e6-4939-9ef9-1323883a8d6a\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 19 12:12:55.406715 master-0 kubenswrapper[7454]: I0319 12:12:55.406668 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bfb8e49c-30e6-4939-9ef9-1323883a8d6a-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"bfb8e49c-30e6-4939-9ef9-1323883a8d6a\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 19 12:12:55.509222 master-0 kubenswrapper[7454]: I0319 12:12:55.509130 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bfb8e49c-30e6-4939-9ef9-1323883a8d6a-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"bfb8e49c-30e6-4939-9ef9-1323883a8d6a\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 19 12:12:55.509540 master-0 kubenswrapper[7454]: I0319 12:12:55.509250 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bfb8e49c-30e6-4939-9ef9-1323883a8d6a-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"bfb8e49c-30e6-4939-9ef9-1323883a8d6a\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 19 12:12:55.509540 master-0 kubenswrapper[7454]: I0319 12:12:55.509388 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bfb8e49c-30e6-4939-9ef9-1323883a8d6a-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"bfb8e49c-30e6-4939-9ef9-1323883a8d6a\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 19 12:12:55.509540 master-0 kubenswrapper[7454]: I0319 12:12:55.509450 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bfb8e49c-30e6-4939-9ef9-1323883a8d6a-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"bfb8e49c-30e6-4939-9ef9-1323883a8d6a\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 19 12:12:55.509872 master-0 kubenswrapper[7454]: I0319 12:12:55.509580 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bfb8e49c-30e6-4939-9ef9-1323883a8d6a-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"bfb8e49c-30e6-4939-9ef9-1323883a8d6a\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 19 12:12:55.529652 master-0 kubenswrapper[7454]: I0319 12:12:55.529548 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bfb8e49c-30e6-4939-9ef9-1323883a8d6a-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"bfb8e49c-30e6-4939-9ef9-1323883a8d6a\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 19 12:12:55.657552 master-0 kubenswrapper[7454]: I0319 12:12:55.657430 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 19 12:12:56.088260 master-0 kubenswrapper[7454]: I0319 12:12:56.088073 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Mar 19 12:12:56.099185 master-0 kubenswrapper[7454]: W0319 12:12:56.099033 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podbfb8e49c_30e6_4939_9ef9_1323883a8d6a.slice/crio-e2268d57847d1028cfcb3c36b8c37f3a09a3721c2f716f744757dbffd1bb03d4 WatchSource:0}: Error finding container e2268d57847d1028cfcb3c36b8c37f3a09a3721c2f716f744757dbffd1bb03d4: Status 404 returned error can't find the container with id e2268d57847d1028cfcb3c36b8c37f3a09a3721c2f716f744757dbffd1bb03d4
Mar 19 12:12:56.370650 master-0 kubenswrapper[7454]: I0319 12:12:56.370323 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"bfb8e49c-30e6-4939-9ef9-1323883a8d6a","Type":"ContainerStarted","Data":"e2268d57847d1028cfcb3c36b8c37f3a09a3721c2f716f744757dbffd1bb03d4"}
Mar 19 12:12:57.377276 master-0 kubenswrapper[7454]: I0319 12:12:57.377196 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"bfb8e49c-30e6-4939-9ef9-1323883a8d6a","Type":"ContainerStarted","Data":"4dc6cd1098d9b181306d55e6f29d0f09a98838187ca958b399501163372876ca"}
Mar 19 12:12:57.396939 master-0 kubenswrapper[7454]: I0319 12:12:57.396781 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podStartSLOduration=2.396762216 podStartE2EDuration="2.396762216s" podCreationTimestamp="2026-03-19 12:12:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:12:57.393309647 +0000 UTC m=+1147.023775590" watchObservedRunningTime="2026-03-19 12:12:57.396762216 +0000 UTC m=+1147.027228139"
Mar 19 12:12:58.634427 master-0 kubenswrapper[7454]: I0319 12:12:58.634354 7454 scope.go:117] "RemoveContainer" containerID="f6cff670ffd3b7d67c924d90e3c87c5305a542b098fae4298d16010aa46c7cd3"
Mar 19 12:12:58.635369 master-0 kubenswrapper[7454]: E0319 12:12:58.634660 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-btppx_openshift-ingress-operator(b80027fd-7b39-477a-a337-ff9bb08e7eeb)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" podUID="b80027fd-7b39-477a-a337-ff9bb08e7eeb"
Mar 19 12:13:01.881854 master-0 kubenswrapper[7454]: I0319 12:13:01.879668 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Mar 19 12:13:01.881854 master-0 kubenswrapper[7454]: I0319 12:13:01.879899 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podUID="bfb8e49c-30e6-4939-9ef9-1323883a8d6a" containerName="installer" containerID="cri-o://4dc6cd1098d9b181306d55e6f29d0f09a98838187ca958b399501163372876ca" gracePeriod=30
Mar 19 12:13:06.481363 master-0 kubenswrapper[7454]: I0319 12:13:06.481298 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Mar 19 12:13:06.482902 master-0 kubenswrapper[7454]: I0319 12:13:06.482866 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Mar 19 12:13:06.501605 master-0 kubenswrapper[7454]: I0319 12:13:06.501535 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Mar 19 12:13:06.572341 master-0 kubenswrapper[7454]: I0319 12:13:06.572291 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2bb56b31-fdeb-45a4-8359-1da828d6e4d0-var-lock\") pod \"installer-2-master-0\" (UID: \"2bb56b31-fdeb-45a4-8359-1da828d6e4d0\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 19 12:13:06.572695 master-0 kubenswrapper[7454]: I0319 12:13:06.572674 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2bb56b31-fdeb-45a4-8359-1da828d6e4d0-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"2bb56b31-fdeb-45a4-8359-1da828d6e4d0\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 19 12:13:06.572867 master-0 kubenswrapper[7454]: I0319 12:13:06.572838 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2bb56b31-fdeb-45a4-8359-1da828d6e4d0-kube-api-access\") pod \"installer-2-master-0\" (UID: \"2bb56b31-fdeb-45a4-8359-1da828d6e4d0\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 19 12:13:06.673913 master-0 kubenswrapper[7454]: I0319 12:13:06.673840 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2bb56b31-fdeb-45a4-8359-1da828d6e4d0-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"2bb56b31-fdeb-45a4-8359-1da828d6e4d0\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 19 12:13:06.674149 master-0 kubenswrapper[7454]: I0319 12:13:06.673933 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2bb56b31-fdeb-45a4-8359-1da828d6e4d0-kube-api-access\") pod \"installer-2-master-0\" (UID: \"2bb56b31-fdeb-45a4-8359-1da828d6e4d0\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 19 12:13:06.674149 master-0 kubenswrapper[7454]: I0319 12:13:06.674016 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2bb56b31-fdeb-45a4-8359-1da828d6e4d0-var-lock\") pod \"installer-2-master-0\" (UID: \"2bb56b31-fdeb-45a4-8359-1da828d6e4d0\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 19 12:13:06.674953 master-0 kubenswrapper[7454]: I0319 12:13:06.674917 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2bb56b31-fdeb-45a4-8359-1da828d6e4d0-var-lock\") pod \"installer-2-master-0\" (UID: \"2bb56b31-fdeb-45a4-8359-1da828d6e4d0\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 19 12:13:06.675052 master-0 kubenswrapper[7454]: I0319 12:13:06.674967 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2bb56b31-fdeb-45a4-8359-1da828d6e4d0-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"2bb56b31-fdeb-45a4-8359-1da828d6e4d0\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 19 12:13:06.711687 master-0 kubenswrapper[7454]: I0319 12:13:06.697705 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2bb56b31-fdeb-45a4-8359-1da828d6e4d0-kube-api-access\") pod \"installer-2-master-0\" (UID: \"2bb56b31-fdeb-45a4-8359-1da828d6e4d0\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 19 12:13:06.814890 master-0 kubenswrapper[7454]: I0319 12:13:06.814744 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Mar 19 12:13:07.230152 master-0 kubenswrapper[7454]: I0319 12:13:07.230081 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Mar 19 12:13:07.457043 master-0 kubenswrapper[7454]: I0319 12:13:07.456971 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"2bb56b31-fdeb-45a4-8359-1da828d6e4d0","Type":"ContainerStarted","Data":"feebfe1c397fa5a3cbb50711147995408783129cd064e9e2a91332b156b1d6b9"}
Mar 19 12:13:08.467268 master-0 kubenswrapper[7454]: I0319 12:13:08.467197 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"2bb56b31-fdeb-45a4-8359-1da828d6e4d0","Type":"ContainerStarted","Data":"ff004ab4cc907cb5879a0b136f98d1ebbbac3f330e848ba8796137050e3775dc"}
Mar 19 12:13:08.489369 master-0 kubenswrapper[7454]: I0319 12:13:08.489261 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-master-0" podStartSLOduration=2.4892393520000002 podStartE2EDuration="2.489239352s" podCreationTimestamp="2026-03-19 12:13:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:13:08.4853751 +0000 UTC m=+1158.115841043" watchObservedRunningTime="2026-03-19 12:13:08.489239352 +0000 UTC m=+1158.119705285"
Mar 19 12:13:11.633969 master-0 kubenswrapper[7454]: I0319 12:13:11.633741 7454 scope.go:117] "RemoveContainer" containerID="f6cff670ffd3b7d67c924d90e3c87c5305a542b098fae4298d16010aa46c7cd3"
Mar 19 12:13:11.634288 master-0 kubenswrapper[7454]: E0319 12:13:11.634238 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-btppx_openshift-ingress-operator(b80027fd-7b39-477a-a337-ff9bb08e7eeb)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" podUID="b80027fd-7b39-477a-a337-ff9bb08e7eeb"
Mar 19 12:13:20.691381 master-0 kubenswrapper[7454]: I0319 12:13:20.691289 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Mar 19 12:13:20.692190 master-0 kubenswrapper[7454]: I0319 12:13:20.691691 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-2-master-0" podUID="2bb56b31-fdeb-45a4-8359-1da828d6e4d0" containerName="installer" containerID="cri-o://ff004ab4cc907cb5879a0b136f98d1ebbbac3f330e848ba8796137050e3775dc" gracePeriod=30
Mar 19 12:13:21.100949 master-0 kubenswrapper[7454]: I0319 12:13:21.100893 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_2bb56b31-fdeb-45a4-8359-1da828d6e4d0/installer/0.log"
Mar 19 12:13:21.101123 master-0 kubenswrapper[7454]: I0319 12:13:21.100980 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Mar 19 12:13:21.301037 master-0 kubenswrapper[7454]: I0319 12:13:21.300992 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2bb56b31-fdeb-45a4-8359-1da828d6e4d0-kube-api-access\") pod \"2bb56b31-fdeb-45a4-8359-1da828d6e4d0\" (UID: \"2bb56b31-fdeb-45a4-8359-1da828d6e4d0\") "
Mar 19 12:13:21.301129 master-0 kubenswrapper[7454]: I0319 12:13:21.301099 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2bb56b31-fdeb-45a4-8359-1da828d6e4d0-var-lock\") pod \"2bb56b31-fdeb-45a4-8359-1da828d6e4d0\" (UID: \"2bb56b31-fdeb-45a4-8359-1da828d6e4d0\") "
Mar 19 12:13:21.301200 master-0 kubenswrapper[7454]: I0319 12:13:21.301171 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2bb56b31-fdeb-45a4-8359-1da828d6e4d0-kubelet-dir\") pod \"2bb56b31-fdeb-45a4-8359-1da828d6e4d0\" (UID: \"2bb56b31-fdeb-45a4-8359-1da828d6e4d0\") "
Mar 19 12:13:21.301345 master-0 kubenswrapper[7454]: I0319 12:13:21.301223 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2bb56b31-fdeb-45a4-8359-1da828d6e4d0-var-lock" (OuterVolumeSpecName: "var-lock") pod "2bb56b31-fdeb-45a4-8359-1da828d6e4d0" (UID: "2bb56b31-fdeb-45a4-8359-1da828d6e4d0"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 12:13:21.301397 master-0 kubenswrapper[7454]: I0319 12:13:21.301244 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2bb56b31-fdeb-45a4-8359-1da828d6e4d0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2bb56b31-fdeb-45a4-8359-1da828d6e4d0" (UID: "2bb56b31-fdeb-45a4-8359-1da828d6e4d0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 12:13:21.301546 master-0 kubenswrapper[7454]: I0319 12:13:21.301524 7454 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2bb56b31-fdeb-45a4-8359-1da828d6e4d0-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 19 12:13:21.301546 master-0 kubenswrapper[7454]: I0319 12:13:21.301542 7454 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2bb56b31-fdeb-45a4-8359-1da828d6e4d0-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 19 12:13:21.303824 master-0 kubenswrapper[7454]: I0319 12:13:21.303757 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bb56b31-fdeb-45a4-8359-1da828d6e4d0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2bb56b31-fdeb-45a4-8359-1da828d6e4d0" (UID: "2bb56b31-fdeb-45a4-8359-1da828d6e4d0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 12:13:21.402350 master-0 kubenswrapper[7454]: I0319 12:13:21.402294 7454 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2bb56b31-fdeb-45a4-8359-1da828d6e4d0-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 19 12:13:21.589897 master-0 kubenswrapper[7454]: I0319 12:13:21.589690 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_2bb56b31-fdeb-45a4-8359-1da828d6e4d0/installer/0.log"
Mar 19 12:13:21.589897 master-0 kubenswrapper[7454]: I0319 12:13:21.589769 7454 generic.go:334] "Generic (PLEG): container finished" podID="2bb56b31-fdeb-45a4-8359-1da828d6e4d0" containerID="ff004ab4cc907cb5879a0b136f98d1ebbbac3f330e848ba8796137050e3775dc" exitCode=1
Mar 19 12:13:21.589897 master-0 kubenswrapper[7454]: I0319 12:13:21.589841 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"2bb56b31-fdeb-45a4-8359-1da828d6e4d0","Type":"ContainerDied","Data":"ff004ab4cc907cb5879a0b136f98d1ebbbac3f330e848ba8796137050e3775dc"}
Mar 19 12:13:21.590297 master-0 kubenswrapper[7454]: I0319 12:13:21.589908 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"2bb56b31-fdeb-45a4-8359-1da828d6e4d0","Type":"ContainerDied","Data":"feebfe1c397fa5a3cbb50711147995408783129cd064e9e2a91332b156b1d6b9"}
Mar 19 12:13:21.590297 master-0 kubenswrapper[7454]: I0319 12:13:21.589955 7454 scope.go:117] "RemoveContainer" containerID="ff004ab4cc907cb5879a0b136f98d1ebbbac3f330e848ba8796137050e3775dc"
Mar 19 12:13:21.590297 master-0 kubenswrapper[7454]: I0319 12:13:21.589965 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Mar 19 12:13:21.627967 master-0 kubenswrapper[7454]: I0319 12:13:21.627894 7454 scope.go:117] "RemoveContainer" containerID="ff004ab4cc907cb5879a0b136f98d1ebbbac3f330e848ba8796137050e3775dc"
Mar 19 12:13:21.628761 master-0 kubenswrapper[7454]: E0319 12:13:21.628697 7454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff004ab4cc907cb5879a0b136f98d1ebbbac3f330e848ba8796137050e3775dc\": container with ID starting with ff004ab4cc907cb5879a0b136f98d1ebbbac3f330e848ba8796137050e3775dc not found: ID does not exist" containerID="ff004ab4cc907cb5879a0b136f98d1ebbbac3f330e848ba8796137050e3775dc"
Mar 19 12:13:21.628927 master-0 kubenswrapper[7454]: I0319 12:13:21.628862 7454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff004ab4cc907cb5879a0b136f98d1ebbbac3f330e848ba8796137050e3775dc"} err="failed to get container status \"ff004ab4cc907cb5879a0b136f98d1ebbbac3f330e848ba8796137050e3775dc\": rpc error: code = NotFound desc = could not find container \"ff004ab4cc907cb5879a0b136f98d1ebbbac3f330e848ba8796137050e3775dc\": container with ID starting with ff004ab4cc907cb5879a0b136f98d1ebbbac3f330e848ba8796137050e3775dc not found: ID does not exist"
Mar 19 12:13:21.649165 master-0 kubenswrapper[7454]: I0319 12:13:21.649092 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Mar 19 12:13:21.655014 master-0 kubenswrapper[7454]: I0319 12:13:21.654956 7454 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Mar 19 12:13:22.647713 master-0 kubenswrapper[7454]: I0319 12:13:22.647639 7454 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bb56b31-fdeb-45a4-8359-1da828d6e4d0" path="/var/lib/kubelet/pods/2bb56b31-fdeb-45a4-8359-1da828d6e4d0/volumes"
Mar 19 12:13:24.634454 master-0 kubenswrapper[7454]: I0319 12:13:24.634402 7454 scope.go:117] "RemoveContainer" containerID="f6cff670ffd3b7d67c924d90e3c87c5305a542b098fae4298d16010aa46c7cd3"
Mar 19 12:13:24.634823 master-0 kubenswrapper[7454]: E0319 12:13:24.634747 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-btppx_openshift-ingress-operator(b80027fd-7b39-477a-a337-ff9bb08e7eeb)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" podUID="b80027fd-7b39-477a-a337-ff9bb08e7eeb"
Mar 19 12:13:24.886710 master-0 kubenswrapper[7454]: I0319 12:13:24.886533 7454 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Mar 19 12:13:24.887738 master-0 kubenswrapper[7454]: E0319 12:13:24.886847 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bb56b31-fdeb-45a4-8359-1da828d6e4d0" containerName="installer"
Mar 19 12:13:24.887738 master-0 kubenswrapper[7454]: I0319 12:13:24.886861 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bb56b31-fdeb-45a4-8359-1da828d6e4d0" containerName="installer"
Mar 19 12:13:24.887738 master-0 kubenswrapper[7454]: I0319 12:13:24.887048 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bb56b31-fdeb-45a4-8359-1da828d6e4d0" containerName="installer"
Mar 19 12:13:24.887738 master-0 kubenswrapper[7454]: I0319 12:13:24.887596 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 19 12:13:24.916355 master-0 kubenswrapper[7454]: I0319 12:13:24.916275 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Mar 19 12:13:24.958925 master-0 kubenswrapper[7454]: I0319 12:13:24.958847 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89890698-dd48-486b-bd64-dc909aecd9e8-var-lock\") pod \"installer-3-master-0\" (UID: \"89890698-dd48-486b-bd64-dc909aecd9e8\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 19 12:13:24.958925 master-0 kubenswrapper[7454]: I0319 12:13:24.958921 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89890698-dd48-486b-bd64-dc909aecd9e8-kube-api-access\") pod \"installer-3-master-0\" (UID: \"89890698-dd48-486b-bd64-dc909aecd9e8\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 19 12:13:24.959179 master-0 kubenswrapper[7454]: I0319 12:13:24.958959 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89890698-dd48-486b-bd64-dc909aecd9e8-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"89890698-dd48-486b-bd64-dc909aecd9e8\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 19 12:13:25.060298 master-0 kubenswrapper[7454]: I0319 12:13:25.060216 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89890698-dd48-486b-bd64-dc909aecd9e8-var-lock\") pod \"installer-3-master-0\" (UID: \"89890698-dd48-486b-bd64-dc909aecd9e8\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 19 12:13:25.060874 master-0 kubenswrapper[7454]: I0319 12:13:25.060772 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89890698-dd48-486b-bd64-dc909aecd9e8-var-lock\") pod \"installer-3-master-0\" (UID: \"89890698-dd48-486b-bd64-dc909aecd9e8\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 19 12:13:25.061107 master-0 kubenswrapper[7454]: I0319 12:13:25.060910 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89890698-dd48-486b-bd64-dc909aecd9e8-kube-api-access\") pod \"installer-3-master-0\" (UID: \"89890698-dd48-486b-bd64-dc909aecd9e8\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 19 12:13:25.061390 master-0 kubenswrapper[7454]: I0319 12:13:25.061345 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89890698-dd48-486b-bd64-dc909aecd9e8-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"89890698-dd48-486b-bd64-dc909aecd9e8\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 19 12:13:25.061767 master-0 kubenswrapper[7454]: I0319 12:13:25.061509 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89890698-dd48-486b-bd64-dc909aecd9e8-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"89890698-dd48-486b-bd64-dc909aecd9e8\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 19 12:13:25.087954 master-0 kubenswrapper[7454]: I0319 12:13:25.087911 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89890698-dd48-486b-bd64-dc909aecd9e8-kube-api-access\") pod \"installer-3-master-0\" (UID: \"89890698-dd48-486b-bd64-dc909aecd9e8\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 19 12:13:25.225467 master-0 kubenswrapper[7454]: I0319 12:13:25.225380 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 19 12:13:25.647039 master-0 kubenswrapper[7454]: I0319 12:13:25.645333 7454 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Mar 19 12:13:25.659368 master-0 kubenswrapper[7454]: W0319 12:13:25.659261 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod89890698_dd48_486b_bd64_dc909aecd9e8.slice/crio-2035c8e72f2b89c4f96d115722ef5f74b915d093ec98a02ef0fa3a58ae56a155 WatchSource:0}: Error finding container 2035c8e72f2b89c4f96d115722ef5f74b915d093ec98a02ef0fa3a58ae56a155: Status 404 returned error can't find the container with id 2035c8e72f2b89c4f96d115722ef5f74b915d093ec98a02ef0fa3a58ae56a155
Mar 19 12:13:26.644365 master-0 kubenswrapper[7454]: I0319 12:13:26.644313 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"89890698-dd48-486b-bd64-dc909aecd9e8","Type":"ContainerStarted","Data":"940ef039d55964b5c0d66bfc983f2f10d9883865e517e1851c87917cb03802e7"}
Mar 19 12:13:26.644365 master-0 kubenswrapper[7454]: I0319 12:13:26.644355 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"89890698-dd48-486b-bd64-dc909aecd9e8","Type":"ContainerStarted","Data":"2035c8e72f2b89c4f96d115722ef5f74b915d093ec98a02ef0fa3a58ae56a155"}
podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:13:26.684234 master-0 kubenswrapper[7454]: I0319 12:13:26.684067 7454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-3-master-0" podStartSLOduration=2.684040095 podStartE2EDuration="2.684040095s" podCreationTimestamp="2026-03-19 12:13:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:13:26.672643557 +0000 UTC m=+1176.303109500" watchObservedRunningTime="2026-03-19 12:13:26.684040095 +0000 UTC m=+1176.314506018" Mar 19 12:13:27.659281 master-0 kubenswrapper[7454]: I0319 12:13:27.659196 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:13:27.659281 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:13:27.659281 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:13:27.659281 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:13:27.660195 master-0 kubenswrapper[7454]: I0319 12:13:27.659289 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:13:27.663374 master-0 kubenswrapper[7454]: I0319 12:13:27.663312 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_bfb8e49c-30e6-4939-9ef9-1323883a8d6a/installer/0.log" Mar 19 12:13:27.663521 master-0 kubenswrapper[7454]: I0319 12:13:27.663391 7454 generic.go:334] "Generic (PLEG): container finished" podID="bfb8e49c-30e6-4939-9ef9-1323883a8d6a" containerID="4dc6cd1098d9b181306d55e6f29d0f09a98838187ca958b399501163372876ca" exitCode=1 Mar 19 12:13:27.663597 master-0 kubenswrapper[7454]: I0319 12:13:27.663501 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"bfb8e49c-30e6-4939-9ef9-1323883a8d6a","Type":"ContainerDied","Data":"4dc6cd1098d9b181306d55e6f29d0f09a98838187ca958b399501163372876ca"} Mar 19 12:13:27.663597 master-0 kubenswrapper[7454]: I0319 12:13:27.663564 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"bfb8e49c-30e6-4939-9ef9-1323883a8d6a","Type":"ContainerDied","Data":"e2268d57847d1028cfcb3c36b8c37f3a09a3721c2f716f744757dbffd1bb03d4"} Mar 19 12:13:27.663597 master-0 kubenswrapper[7454]: I0319 12:13:27.663586 7454 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2268d57847d1028cfcb3c36b8c37f3a09a3721c2f716f744757dbffd1bb03d4" Mar 19 12:13:27.683277 master-0 kubenswrapper[7454]: I0319 12:13:27.683245 7454 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_bfb8e49c-30e6-4939-9ef9-1323883a8d6a/installer/0.log" Mar 19 12:13:27.683624 master-0 kubenswrapper[7454]: I0319 12:13:27.683312 7454 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 19 12:13:27.698061 master-0 kubenswrapper[7454]: I0319 12:13:27.698004 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bfb8e49c-30e6-4939-9ef9-1323883a8d6a-var-lock\") pod \"bfb8e49c-30e6-4939-9ef9-1323883a8d6a\" (UID: \"bfb8e49c-30e6-4939-9ef9-1323883a8d6a\") " Mar 19 12:13:27.698259 master-0 kubenswrapper[7454]: I0319 12:13:27.698098 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bfb8e49c-30e6-4939-9ef9-1323883a8d6a-kube-api-access\") pod \"bfb8e49c-30e6-4939-9ef9-1323883a8d6a\" (UID: \"bfb8e49c-30e6-4939-9ef9-1323883a8d6a\") " Mar 19 12:13:27.698259 master-0 kubenswrapper[7454]: I0319 12:13:27.698127 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bfb8e49c-30e6-4939-9ef9-1323883a8d6a-kubelet-dir\") pod \"bfb8e49c-30e6-4939-9ef9-1323883a8d6a\" (UID: \"bfb8e49c-30e6-4939-9ef9-1323883a8d6a\") " Mar 19 12:13:27.698259 master-0 kubenswrapper[7454]: I0319 12:13:27.698169 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfb8e49c-30e6-4939-9ef9-1323883a8d6a-var-lock" (OuterVolumeSpecName: "var-lock") pod "bfb8e49c-30e6-4939-9ef9-1323883a8d6a" (UID: "bfb8e49c-30e6-4939-9ef9-1323883a8d6a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:13:27.698424 master-0 kubenswrapper[7454]: I0319 12:13:27.698268 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfb8e49c-30e6-4939-9ef9-1323883a8d6a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bfb8e49c-30e6-4939-9ef9-1323883a8d6a" (UID: "bfb8e49c-30e6-4939-9ef9-1323883a8d6a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:13:27.699124 master-0 kubenswrapper[7454]: I0319 12:13:27.699043 7454 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bfb8e49c-30e6-4939-9ef9-1323883a8d6a-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 12:13:27.699124 master-0 kubenswrapper[7454]: I0319 12:13:27.699119 7454 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bfb8e49c-30e6-4939-9ef9-1323883a8d6a-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 19 12:13:27.706494 master-0 kubenswrapper[7454]: I0319 12:13:27.705023 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfb8e49c-30e6-4939-9ef9-1323883a8d6a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bfb8e49c-30e6-4939-9ef9-1323883a8d6a" (UID: "bfb8e49c-30e6-4939-9ef9-1323883a8d6a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:13:27.799563 master-0 kubenswrapper[7454]: I0319 12:13:27.799527 7454 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bfb8e49c-30e6-4939-9ef9-1323883a8d6a-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 19 12:13:28.659250 master-0 kubenswrapper[7454]: I0319 12:13:28.659184 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:13:28.659250 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:13:28.659250 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:13:28.659250 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:13:28.659975 master-0 kubenswrapper[7454]: I0319 12:13:28.659266 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:13:28.673913 master-0 kubenswrapper[7454]: I0319 12:13:28.673786 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 19 12:13:28.714913 master-0 kubenswrapper[7454]: I0319 12:13:28.705435 7454 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 19 12:13:28.715936 master-0 kubenswrapper[7454]: I0319 12:13:28.715878 7454 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 19 12:13:29.659861 master-0 kubenswrapper[7454]: I0319 12:13:29.659727 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:13:29.659861 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:13:29.659861 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:13:29.659861 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:13:29.659861 master-0 kubenswrapper[7454]: I0319 12:13:29.659843 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:13:30.645599 master-0 kubenswrapper[7454]: I0319 12:13:30.645509 7454 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfb8e49c-30e6-4939-9ef9-1323883a8d6a" path="/var/lib/kubelet/pods/bfb8e49c-30e6-4939-9ef9-1323883a8d6a/volumes" Mar 19 12:13:30.660479 master-0 kubenswrapper[7454]: I0319 12:13:30.660417 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:13:30.660479 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:13:30.660479 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:13:30.660479 master-0 kubenswrapper[7454]: healthz check failed Mar 
19 12:13:30.661111 master-0 kubenswrapper[7454]: I0319 12:13:30.660477 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:13:31.661212 master-0 kubenswrapper[7454]: I0319 12:13:31.660961 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:13:31.661212 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:13:31.661212 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:13:31.661212 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:13:31.661212 master-0 kubenswrapper[7454]: I0319 12:13:31.661092 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:13:32.659336 master-0 kubenswrapper[7454]: I0319 12:13:32.659277 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:13:32.659336 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:13:32.659336 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:13:32.659336 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:13:32.659970 master-0 kubenswrapper[7454]: I0319 12:13:32.659904 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:13:33.659989 master-0 kubenswrapper[7454]: I0319 12:13:33.659762 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:13:33.659989 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:13:33.659989 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:13:33.659989 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:13:33.661428 master-0 kubenswrapper[7454]: I0319 12:13:33.659997 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:13:34.660045 master-0 kubenswrapper[7454]: I0319 12:13:34.659976 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:13:34.660045 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:13:34.660045 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:13:34.660045 master-0 kubenswrapper[7454]: healthz check 
failed Mar 19 12:13:34.661424 master-0 kubenswrapper[7454]: I0319 12:13:34.660058 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:13:35.661286 master-0 kubenswrapper[7454]: I0319 12:13:35.661202 7454 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-lkpgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 12:13:35.661286 master-0 kubenswrapper[7454]: [-]has-synced failed: reason withheld Mar 19 12:13:35.661286 master-0 kubenswrapper[7454]: [+]process-running ok Mar 19 12:13:35.661286 master-0 kubenswrapper[7454]: healthz check failed Mar 19 12:13:35.662246 master-0 kubenswrapper[7454]: I0319 12:13:35.661304 7454 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:13:35.662246 master-0 kubenswrapper[7454]: I0319 12:13:35.661381 7454 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 12:13:35.662384 master-0 kubenswrapper[7454]: I0319 12:13:35.662345 7454 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"02580d8818d0f202a13ac68e82f20d4293f3530799a86f4d7e26b5116036380f"} pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" containerMessage="Container router failed startup probe, will be restarted" Mar 19 12:13:35.662448 master-0 kubenswrapper[7454]: I0319 12:13:35.662410 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" podUID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerName="router" containerID="cri-o://02580d8818d0f202a13ac68e82f20d4293f3530799a86f4d7e26b5116036380f" gracePeriod=3600 Mar 19 12:13:39.633959 master-0 kubenswrapper[7454]: I0319 12:13:39.633885 7454 scope.go:117] "RemoveContainer" containerID="f6cff670ffd3b7d67c924d90e3c87c5305a542b098fae4298d16010aa46c7cd3" Mar 19 12:13:39.634996 master-0 kubenswrapper[7454]: E0319 12:13:39.634277 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-btppx_openshift-ingress-operator(b80027fd-7b39-477a-a337-ff9bb08e7eeb)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" podUID="b80027fd-7b39-477a-a337-ff9bb08e7eeb" Mar 19 12:13:53.634520 master-0 kubenswrapper[7454]: I0319 12:13:53.634370 7454 scope.go:117] "RemoveContainer" containerID="f6cff670ffd3b7d67c924d90e3c87c5305a542b098fae4298d16010aa46c7cd3" Mar 19 12:13:53.635444 master-0 kubenswrapper[7454]: E0319 12:13:53.634863 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-btppx_openshift-ingress-operator(b80027fd-7b39-477a-a337-ff9bb08e7eeb)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" 
podUID="b80027fd-7b39-477a-a337-ff9bb08e7eeb" Mar 19 12:13:55.316861 master-0 kubenswrapper[7454]: I0319 12:13:55.316754 7454 scope.go:117] "RemoveContainer" containerID="905b5c7c59d30b4b870a40d926e6ce6d9ad7f0bf509dc07ea760b5f841773a4f" Mar 19 12:14:04.633672 master-0 kubenswrapper[7454]: I0319 12:14:04.633619 7454 scope.go:117] "RemoveContainer" containerID="f6cff670ffd3b7d67c924d90e3c87c5305a542b098fae4298d16010aa46c7cd3" Mar 19 12:14:04.634448 master-0 kubenswrapper[7454]: E0319 12:14:04.633980 7454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-btppx_openshift-ingress-operator(b80027fd-7b39-477a-a337-ff9bb08e7eeb)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" podUID="b80027fd-7b39-477a-a337-ff9bb08e7eeb" Mar 19 12:14:13.770782 master-0 kubenswrapper[7454]: I0319 12:14:13.770704 7454 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 19 12:14:13.771389 master-0 kubenswrapper[7454]: E0319 12:14:13.771051 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfb8e49c-30e6-4939-9ef9-1323883a8d6a" containerName="installer" Mar 19 12:14:13.771389 master-0 kubenswrapper[7454]: I0319 12:14:13.771070 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfb8e49c-30e6-4939-9ef9-1323883a8d6a" containerName="installer" Mar 19 12:14:13.771389 master-0 kubenswrapper[7454]: I0319 12:14:13.771270 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfb8e49c-30e6-4939-9ef9-1323883a8d6a" containerName="installer" Mar 19 12:14:13.771769 master-0 kubenswrapper[7454]: I0319 12:14:13.771740 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:13.772115 master-0 kubenswrapper[7454]: I0319 12:14:13.772061 7454 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 19 12:14:13.773077 master-0 kubenswrapper[7454]: I0319 12:14:13.773020 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://4eb7482c86a1b5f9e745f031e830bded6c37fd855abcbff4d6d73294bfadb247" gracePeriod=15 Mar 19 12:14:13.774276 master-0 kubenswrapper[7454]: I0319 12:14:13.774257 7454 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 19 12:14:13.774517 master-0 kubenswrapper[7454]: I0319 12:14:13.773002 7454 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" containerID="cri-o://f347ebf4af2e430c7010deb32f74eaaa375be42bd1cb0fd78e647b0e4fd96480" gracePeriod=15 Mar 19 12:14:13.774639 master-0 kubenswrapper[7454]: E0319 12:14:13.774625 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" Mar 19 12:14:13.774701 master-0 kubenswrapper[7454]: I0319 12:14:13.774692 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" Mar 19 12:14:13.774775 master-0 kubenswrapper[7454]: E0319 12:14:13.774766 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" Mar 19 12:14:13.774897 master-0 kubenswrapper[7454]: I0319 12:14:13.774881 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" Mar 19 12:14:13.775075 master-0 kubenswrapper[7454]: E0319 12:14:13.775058 7454 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup" Mar 19 12:14:13.775166 master-0 kubenswrapper[7454]: I0319 12:14:13.775154 7454 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup" Mar 19 12:14:13.775415 master-0 kubenswrapper[7454]: I0319 12:14:13.775399 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" Mar 19 12:14:13.775594 master-0 kubenswrapper[7454]: I0319 12:14:13.775581 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" Mar 19 12:14:13.775674 master-0 kubenswrapper[7454]: I0319 12:14:13.775663 7454 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup" Mar 19 12:14:13.791203 master-0 kubenswrapper[7454]: I0319 12:14:13.791142 7454 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:14:13.818409 master-0 kubenswrapper[7454]: I0319 12:14:13.818051 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:14:13.818409 master-0 kubenswrapper[7454]: I0319 12:14:13.818309 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:13.818623 master-0 kubenswrapper[7454]: I0319 12:14:13.818399 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:14:13.818623 master-0 kubenswrapper[7454]: I0319 12:14:13.818482 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:13.818623 master-0 kubenswrapper[7454]: I0319 12:14:13.818560 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:13.818874 master-0 kubenswrapper[7454]: I0319 12:14:13.818833 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:13.819906 master-0 kubenswrapper[7454]: I0319 12:14:13.819862 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:14:13.820000 master-0 kubenswrapper[7454]: I0319 12:14:13.819966 7454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:13.853220 master-0 
kubenswrapper[7454]: E0319 12:14:13.853134 7454 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:13.873851 master-0 kubenswrapper[7454]: E0319 12:14:13.873747 7454 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:14:13.921778 master-0 kubenswrapper[7454]: I0319 12:14:13.921698 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:13.921778 master-0 kubenswrapper[7454]: I0319 12:14:13.921766 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:13.922058 master-0 kubenswrapper[7454]: I0319 12:14:13.921880 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:13.922058 master-0 kubenswrapper[7454]: I0319 12:14:13.921941 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:14:13.922058 master-0 kubenswrapper[7454]: I0319 12:14:13.921976 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:13.922058 master-0 kubenswrapper[7454]: I0319 12:14:13.921979 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:13.922058 master-0 kubenswrapper[7454]: I0319 12:14:13.922009 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " 
pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:14:13.922058 master-0 kubenswrapper[7454]: I0319 12:14:13.922039 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:14:13.922232 master-0 kubenswrapper[7454]: I0319 12:14:13.922037 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:14:13.922232 master-0 kubenswrapper[7454]: I0319 12:14:13.922086 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:13.922232 master-0 kubenswrapper[7454]: I0319 12:14:13.922110 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:13.922232 master-0 kubenswrapper[7454]: I0319 12:14:13.922068 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:13.922232 master-0 kubenswrapper[7454]: I0319 12:14:13.922148 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:14:13.922232 master-0 kubenswrapper[7454]: I0319 12:14:13.922188 7454 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:13.922232 master-0 kubenswrapper[7454]: I0319 12:14:13.922225 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:14:13.922448 master-0 kubenswrapper[7454]: I0319 12:14:13.922260 7454 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" 
(UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:13.922448 master-0 kubenswrapper[7454]: I0319 12:14:13.922387 7454 patch_prober.go:28] interesting pod/bootstrap-kube-apiserver-master-0 container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.32.10:6443/readyz\": dial tcp 192.168.32.10:6443: connect: connection refused" start-of-body= Mar 19 12:14:13.922448 master-0 kubenswrapper[7454]: I0319 12:14:13.922422 7454 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.32.10:6443/readyz\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 12:14:13.923439 master-0 kubenswrapper[7454]: E0319 12:14:13.923311 7454 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event=< Mar 19 12:14:13.923439 master-0 kubenswrapper[7454]: &Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e3d12bd2270c1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.32.10:6443/readyz": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 12:14:13.923439 master-0 kubenswrapper[7454]: body: Mar 19 12:14:13.923439 master-0 kubenswrapper[7454]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 12:14:13.922410689 +0000 UTC m=+1223.552876612,LastTimestamp:2026-03-19 12:14:13.922410689 +0000 UTC m=+1223.552876612,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,} Mar 19 12:14:13.923439 master-0 kubenswrapper[7454]: > Mar 19 12:14:14.039249 master-0 kubenswrapper[7454]: I0319 12:14:14.039084 7454 generic.go:334] "Generic (PLEG): container finished" podID="89890698-dd48-486b-bd64-dc909aecd9e8" containerID="940ef039d55964b5c0d66bfc983f2f10d9883865e517e1851c87917cb03802e7" exitCode=0 Mar 19 12:14:14.039249 master-0 kubenswrapper[7454]: I0319 12:14:14.039223 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"89890698-dd48-486b-bd64-dc909aecd9e8","Type":"ContainerDied","Data":"940ef039d55964b5c0d66bfc983f2f10d9883865e517e1851c87917cb03802e7"} Mar 19 12:14:14.041078 master-0 kubenswrapper[7454]: I0319 12:14:14.041025 7454 status_manager.go:851] "Failed to get status for pod" podUID="89890698-dd48-486b-bd64-dc909aecd9e8" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 12:14:14.042817 master-0 kubenswrapper[7454]: I0319 12:14:14.042750 7454 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="4eb7482c86a1b5f9e745f031e830bded6c37fd855abcbff4d6d73294bfadb247" 
exitCode=0 Mar 19 12:14:14.154730 master-0 kubenswrapper[7454]: I0319 12:14:14.154613 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:14.175397 master-0 kubenswrapper[7454]: I0319 12:14:14.175331 7454 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:14:14.194764 master-0 kubenswrapper[7454]: W0319 12:14:14.194702 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e7a82869988463543d3d8dd1f0b5fe3.slice/crio-27ccdb8fe17b3c5cb9acf1759072b6837f5312b119b69e4b34ee0c362bd4382c WatchSource:0}: Error finding container 27ccdb8fe17b3c5cb9acf1759072b6837f5312b119b69e4b34ee0c362bd4382c: Status 404 returned error can't find the container with id 27ccdb8fe17b3c5cb9acf1759072b6837f5312b119b69e4b34ee0c362bd4382c Mar 19 12:14:14.210431 master-0 kubenswrapper[7454]: W0319 12:14:14.210357 7454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb45ea2ef1cf2bc9d1d994d6538ae0a64.slice/crio-f381b85f9130b76eda5dc167d27eb69ac9b6f2de032bdb231577387d3f19b35d WatchSource:0}: Error finding container f381b85f9130b76eda5dc167d27eb69ac9b6f2de032bdb231577387d3f19b35d: Status 404 returned error can't find the container with id f381b85f9130b76eda5dc167d27eb69ac9b6f2de032bdb231577387d3f19b35d Mar 19 12:14:15.054967 master-0 kubenswrapper[7454]: I0319 12:14:15.054879 7454 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="9fba0a39bbccef4b23df04adb335845926b8d52e014255c123724fd850a01216" exitCode=0 Mar 19 12:14:15.055731 master-0 kubenswrapper[7454]: I0319 12:14:15.054981 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerDied","Data":"9fba0a39bbccef4b23df04adb335845926b8d52e014255c123724fd850a01216"} Mar 19 12:14:15.055731 master-0 kubenswrapper[7454]: I0319 12:14:15.055396 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"f381b85f9130b76eda5dc167d27eb69ac9b6f2de032bdb231577387d3f19b35d"} Mar 19 12:14:15.057008 master-0 kubenswrapper[7454]: E0319 12:14:15.056953 7454 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:14:15.057108 master-0 kubenswrapper[7454]: I0319 12:14:15.057058 7454 status_manager.go:851] "Failed to get status for pod" podUID="89890698-dd48-486b-bd64-dc909aecd9e8" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 12:14:15.057581 master-0 kubenswrapper[7454]: I0319 12:14:15.057517 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" 
event={"ID":"8e7a82869988463543d3d8dd1f0b5fe3","Type":"ContainerStarted","Data":"7783994ea3804af3822e1e8ef880d160160be30c6cc27242405255670e8fc218"} Mar 19 12:14:15.057653 master-0 kubenswrapper[7454]: I0319 12:14:15.057602 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"8e7a82869988463543d3d8dd1f0b5fe3","Type":"ContainerStarted","Data":"27ccdb8fe17b3c5cb9acf1759072b6837f5312b119b69e4b34ee0c362bd4382c"} Mar 19 12:14:15.058663 master-0 kubenswrapper[7454]: E0319 12:14:15.058480 7454 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:15.058663 master-0 kubenswrapper[7454]: I0319 12:14:15.058473 7454 status_manager.go:851] "Failed to get status for pod" podUID="89890698-dd48-486b-bd64-dc909aecd9e8" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 12:14:15.329436 master-0 kubenswrapper[7454]: I0319 12:14:15.328983 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 19 12:14:15.330112 master-0 kubenswrapper[7454]: I0319 12:14:15.330060 7454 status_manager.go:851] "Failed to get status for pod" podUID="89890698-dd48-486b-bd64-dc909aecd9e8" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 12:14:15.345015 master-0 kubenswrapper[7454]: I0319 12:14:15.344082 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89890698-dd48-486b-bd64-dc909aecd9e8-var-lock\") pod \"89890698-dd48-486b-bd64-dc909aecd9e8\" (UID: \"89890698-dd48-486b-bd64-dc909aecd9e8\") " Mar 19 12:14:15.345015 master-0 kubenswrapper[7454]: I0319 12:14:15.344213 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89890698-dd48-486b-bd64-dc909aecd9e8-kubelet-dir\") pod \"89890698-dd48-486b-bd64-dc909aecd9e8\" (UID: \"89890698-dd48-486b-bd64-dc909aecd9e8\") " Mar 19 12:14:15.345015 master-0 kubenswrapper[7454]: I0319 12:14:15.344269 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89890698-dd48-486b-bd64-dc909aecd9e8-kube-api-access\") pod \"89890698-dd48-486b-bd64-dc909aecd9e8\" (UID: \"89890698-dd48-486b-bd64-dc909aecd9e8\") " Mar 19 12:14:15.345015 master-0 kubenswrapper[7454]: I0319 12:14:15.344660 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89890698-dd48-486b-bd64-dc909aecd9e8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "89890698-dd48-486b-bd64-dc909aecd9e8" (UID: "89890698-dd48-486b-bd64-dc909aecd9e8"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:14:15.345015 master-0 kubenswrapper[7454]: I0319 12:14:15.344698 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89890698-dd48-486b-bd64-dc909aecd9e8-var-lock" (OuterVolumeSpecName: "var-lock") pod "89890698-dd48-486b-bd64-dc909aecd9e8" (UID: "89890698-dd48-486b-bd64-dc909aecd9e8"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:14:15.347245 master-0 kubenswrapper[7454]: I0319 12:14:15.347186 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89890698-dd48-486b-bd64-dc909aecd9e8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "89890698-dd48-486b-bd64-dc909aecd9e8" (UID: "89890698-dd48-486b-bd64-dc909aecd9e8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:14:15.446623 master-0 kubenswrapper[7454]: I0319 12:14:15.446568 7454 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89890698-dd48-486b-bd64-dc909aecd9e8-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 12:14:15.446623 master-0 kubenswrapper[7454]: I0319 12:14:15.446612 7454 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89890698-dd48-486b-bd64-dc909aecd9e8-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 19 12:14:15.446623 master-0 kubenswrapper[7454]: I0319 12:14:15.446630 7454 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89890698-dd48-486b-bd64-dc909aecd9e8-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 19 12:14:16.078334 master-0 kubenswrapper[7454]: I0319 12:14:16.078294 7454 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 19 12:14:16.078859 master-0 kubenswrapper[7454]: I0319 12:14:16.078429 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"89890698-dd48-486b-bd64-dc909aecd9e8","Type":"ContainerDied","Data":"2035c8e72f2b89c4f96d115722ef5f74b915d093ec98a02ef0fa3a58ae56a155"} Mar 19 12:14:16.078859 master-0 kubenswrapper[7454]: I0319 12:14:16.078469 7454 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2035c8e72f2b89c4f96d115722ef5f74b915d093ec98a02ef0fa3a58ae56a155" Mar 19 12:14:16.084867 master-0 kubenswrapper[7454]: I0319 12:14:16.083610 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"8721a1bcca3e6b8ad33aab266086c8fc37d6d5fcfd1c38b4b527628964b85d3a"} Mar 19 12:14:16.084867 master-0 kubenswrapper[7454]: I0319 12:14:16.083653 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"565f9c73fa36f45fb9cd49587a2aa91819aea7c5a1fe5e602d37b0a519ad8eea"} Mar 19 12:14:16.084867 master-0 kubenswrapper[7454]: I0319 12:14:16.083665 7454 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"49208618a1a605bcca0e7a0e4a618f3e1b277ea51f314ec7cb82b073d6261eb8"} Mar 19 12:14:16.179410 master-0 kubenswrapper[7454]: I0319 12:14:16.179252 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 12:14:16.269235 master-0 kubenswrapper[7454]: I0319 12:14:16.269180 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " Mar 19 12:14:16.269235 master-0 kubenswrapper[7454]: I0319 12:14:16.269219 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " Mar 19 12:14:16.269488 master-0 kubenswrapper[7454]: I0319 12:14:16.269322 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " Mar 19 12:14:16.269488 master-0 kubenswrapper[7454]: I0319 12:14:16.269315 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "ssl-certs-host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:14:16.269488 master-0 kubenswrapper[7454]: I0319 12:14:16.269358 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " Mar 19 12:14:16.269488 master-0 kubenswrapper[7454]: I0319 12:14:16.269375 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets" (OuterVolumeSpecName: "secrets") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:14:16.269488 master-0 kubenswrapper[7454]: I0319 12:14:16.269396 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:14:16.269488 master-0 kubenswrapper[7454]: I0319 12:14:16.269409 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " Mar 19 12:14:16.269488 master-0 kubenswrapper[7454]: I0319 12:14:16.269418 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs" (OuterVolumeSpecName: "logs") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:14:16.269488 master-0 kubenswrapper[7454]: I0319 12:14:16.269433 7454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " Mar 19 12:14:16.269840 master-0 kubenswrapper[7454]: I0319 12:14:16.269501 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config" (OuterVolumeSpecName: "config") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:14:16.269840 master-0 kubenswrapper[7454]: I0319 12:14:16.269590 7454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "etc-kubernetes-cloud". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:14:16.269959 master-0 kubenswrapper[7454]: I0319 12:14:16.269903 7454 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 12:14:16.269959 master-0 kubenswrapper[7454]: I0319 12:14:16.269919 7454 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") on node \"master-0\" DevicePath \"\"" Mar 19 12:14:16.269959 master-0 kubenswrapper[7454]: I0319 12:14:16.269946 7454 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") on node \"master-0\" DevicePath \"\"" Mar 19 12:14:16.269959 master-0 kubenswrapper[7454]: I0319 12:14:16.269956 7454 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\"" Mar 19 12:14:16.270123 master-0 kubenswrapper[7454]: I0319 12:14:16.269967 7454 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") on node \"master-0\" DevicePath \"\"" Mar 19 12:14:16.270123 master-0 kubenswrapper[7454]: I0319 12:14:16.269980 7454 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") on node \"master-0\" DevicePath \"\"" Mar 19 12:14:16.642893 master-0 kubenswrapper[7454]: I0319 12:14:16.642834 7454 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49fac1b46a11e49501805e891baae4a9" path="/var/lib/kubelet/pods/49fac1b46a11e49501805e891baae4a9/volumes" Mar 19 12:14:16.643237 master-0 kubenswrapper[7454]: I0319 12:14:16.643207 7454 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Mar 19 12:14:17.171485 master-0 kubenswrapper[7454]: I0319 12:14:17.170619 7454 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 12:14:17.171485 master-0 kubenswrapper[7454]: I0319 12:14:17.171381 7454 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="f347ebf4af2e430c7010deb32f74eaaa375be42bd1cb0fd78e647b0e4fd96480" exitCode=0 Mar 19 12:14:21.336262 master-0 systemd[1]: Stopping Kubernetes Kubelet... Mar 19 12:14:21.369045 master-0 systemd[1]: kubelet.service: Deactivated successfully. Mar 19 12:14:21.369292 master-0 systemd[1]: Stopped Kubernetes Kubelet. Mar 19 12:14:21.378912 master-0 systemd[1]: kubelet.service: Consumed 3min 4.157s CPU time. Mar 19 12:14:21.393555 master-0 systemd[1]: Starting Kubernetes Kubelet... Mar 19 12:14:21.495946 master-0 kubenswrapper[31830]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 12:14:21.495946 master-0 kubenswrapper[31830]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. 
Mar 19 12:14:21.495946 master-0 kubenswrapper[31830]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 12:14:21.496505 master-0 kubenswrapper[31830]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 12:14:21.496505 master-0 kubenswrapper[31830]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 19 12:14:21.496505 master-0 kubenswrapper[31830]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 12:14:21.496505 master-0 kubenswrapper[31830]: I0319 12:14:21.496158 31830 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 19 12:14:21.499334 master-0 kubenswrapper[31830]: W0319 12:14:21.499306 31830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 19 12:14:21.499334 master-0 kubenswrapper[31830]: W0319 12:14:21.499326 31830 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 19 12:14:21.499334 master-0 kubenswrapper[31830]: W0319 12:14:21.499333 31830 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 19 12:14:21.499477 master-0 kubenswrapper[31830]: W0319 12:14:21.499339 31830 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 19 12:14:21.499477 master-0 kubenswrapper[31830]: W0319 12:14:21.499345 31830 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 19 12:14:21.499477 master-0 kubenswrapper[31830]: W0319 12:14:21.499350 31830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 19 12:14:21.499477 master-0 kubenswrapper[31830]: W0319 12:14:21.499356 31830 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 19 12:14:21.499477 master-0 kubenswrapper[31830]: W0319 12:14:21.499364 31830 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 19 12:14:21.499477 master-0 kubenswrapper[31830]: W0319 12:14:21.499371 31830 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 19 12:14:21.499477 master-0 kubenswrapper[31830]: W0319 12:14:21.499377 31830 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 19 12:14:21.499477 master-0 kubenswrapper[31830]: W0319 12:14:21.499384 31830 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 19 12:14:21.499477 master-0 kubenswrapper[31830]: W0319 12:14:21.499390 31830 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 19 12:14:21.499477 master-0 kubenswrapper[31830]: W0319 12:14:21.499397 31830 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 19 12:14:21.499477 master-0 kubenswrapper[31830]: W0319 12:14:21.499404 31830 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 19 12:14:21.499477 master-0 kubenswrapper[31830]: W0319 12:14:21.499410 31830 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 19 12:14:21.499477 master-0 kubenswrapper[31830]: W0319 12:14:21.499416 31830 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 19 12:14:21.499477 master-0 kubenswrapper[31830]: W0319 12:14:21.499421 31830 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 19 12:14:21.499477 master-0 kubenswrapper[31830]: W0319 12:14:21.499427 31830 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 19 12:14:21.499477 master-0 kubenswrapper[31830]: W0319 12:14:21.499432 31830 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 19 12:14:21.499477 master-0 kubenswrapper[31830]: W0319 12:14:21.499440 31830 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 19 12:14:21.499477 master-0 kubenswrapper[31830]: W0319 12:14:21.499446 31830 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 19 12:14:21.499477 master-0 kubenswrapper[31830]: W0319 12:14:21.499461 31830 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 19 12:14:21.500052 master-0 kubenswrapper[31830]: W0319 12:14:21.499468 31830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 19 12:14:21.500052 master-0 kubenswrapper[31830]: W0319 12:14:21.499474 31830 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 19 12:14:21.500052 master-0 kubenswrapper[31830]: W0319 12:14:21.499482 31830 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 19 12:14:21.500052 master-0 kubenswrapper[31830]: W0319 12:14:21.499488 31830 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 19 12:14:21.500052 master-0 kubenswrapper[31830]: W0319 12:14:21.499496 31830 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 19 12:14:21.500052 master-0 kubenswrapper[31830]: W0319 12:14:21.499502 31830 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 19 12:14:21.500052 master-0 kubenswrapper[31830]: W0319 12:14:21.499508 31830 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 19 12:14:21.500052 master-0 kubenswrapper[31830]: W0319 12:14:21.499518 31830 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 19 12:14:21.500052 master-0 kubenswrapper[31830]: W0319 12:14:21.499526 31830 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 19 12:14:21.500052 master-0 kubenswrapper[31830]: W0319 12:14:21.499534 31830 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 19 12:14:21.500052 master-0 kubenswrapper[31830]: W0319 12:14:21.499541 31830 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 19 12:14:21.500052 master-0 kubenswrapper[31830]: W0319 12:14:21.499547 31830 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 19 12:14:21.500052 master-0 kubenswrapper[31830]: W0319 12:14:21.499554 31830 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 19 12:14:21.500052 master-0 kubenswrapper[31830]: W0319 12:14:21.499561 31830 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 19 12:14:21.500052 master-0 kubenswrapper[31830]: W0319 12:14:21.499568 31830 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 19 12:14:21.500052 master-0 kubenswrapper[31830]: W0319 12:14:21.499575 31830 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 19 12:14:21.500052 master-0 kubenswrapper[31830]: W0319 12:14:21.499581 31830 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 19 12:14:21.500052 master-0 kubenswrapper[31830]: W0319 12:14:21.499588 31830 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 19 12:14:21.500052 master-0 kubenswrapper[31830]: W0319 12:14:21.499594 31830 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 19 12:14:21.500052 master-0 kubenswrapper[31830]: W0319 12:14:21.499599 31830 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 19 12:14:21.500628 master-0 kubenswrapper[31830]: W0319 12:14:21.499604 31830 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 19 12:14:21.500628 master-0 kubenswrapper[31830]: W0319 12:14:21.499609 31830 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 19 12:14:21.500628 master-0 kubenswrapper[31830]: W0319 12:14:21.499616 31830 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 19 12:14:21.500628 master-0 kubenswrapper[31830]: W0319 12:14:21.499621 31830 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 19 12:14:21.500628 master-0 kubenswrapper[31830]: W0319 12:14:21.499627 31830 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 19 12:14:21.500628 master-0 kubenswrapper[31830]: W0319 12:14:21.499632 31830 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 19 12:14:21.500628 master-0 kubenswrapper[31830]: W0319 12:14:21.499637 31830 feature_gate.go:330] unrecognized feature gate: Example
Mar 19 12:14:21.500628 master-0 kubenswrapper[31830]: W0319 12:14:21.499642 31830 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 19 12:14:21.500628 master-0 kubenswrapper[31830]: W0319 12:14:21.499648 31830 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 19 12:14:21.500628 master-0 kubenswrapper[31830]: W0319 12:14:21.499654 31830 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 19 12:14:21.500628 master-0 kubenswrapper[31830]: W0319 12:14:21.499659 31830 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 19 12:14:21.500628 master-0 kubenswrapper[31830]: W0319 12:14:21.499665 31830 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 19 12:14:21.500628 master-0 kubenswrapper[31830]: W0319 12:14:21.499672 31830 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 19 12:14:21.500628 master-0 kubenswrapper[31830]: W0319 12:14:21.499681 31830 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 19 12:14:21.500628 master-0 kubenswrapper[31830]: W0319 12:14:21.499690 31830 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 19 12:14:21.500628 master-0 kubenswrapper[31830]: W0319 12:14:21.499701 31830 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 19 12:14:21.500628 master-0 kubenswrapper[31830]: W0319 12:14:21.499708 31830 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 19 12:14:21.500628 master-0 kubenswrapper[31830]: W0319 12:14:21.499715 31830 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 19 12:14:21.500628 master-0 kubenswrapper[31830]: W0319 12:14:21.499722 31830 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 19 12:14:21.500628 master-0 kubenswrapper[31830]: W0319 12:14:21.499728 31830 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 19 12:14:21.501151 master-0 kubenswrapper[31830]: W0319 12:14:21.499735 31830 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 19 12:14:21.501151 master-0 kubenswrapper[31830]: W0319 12:14:21.499740 31830 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 19 12:14:21.501151 master-0 kubenswrapper[31830]: W0319 12:14:21.499745 31830 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 19 12:14:21.501151 master-0 kubenswrapper[31830]: W0319 12:14:21.499753 31830 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 19 12:14:21.501151 master-0 kubenswrapper[31830]: W0319 12:14:21.499760 31830 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 19 12:14:21.501151 master-0 kubenswrapper[31830]: W0319 12:14:21.499766 31830 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 19 12:14:21.501151 master-0 kubenswrapper[31830]: W0319 12:14:21.499772 31830 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 19 12:14:21.501151 master-0 kubenswrapper[31830]: W0319 12:14:21.499778 31830 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 19 12:14:21.501151 master-0 kubenswrapper[31830]: W0319 12:14:21.499785 31830 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 19 12:14:21.501151 master-0 kubenswrapper[31830]: W0319 12:14:21.499790 31830 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 19 12:14:21.501151 master-0 kubenswrapper[31830]: I0319 12:14:21.499921 31830 flags.go:64] FLAG: --address="0.0.0.0"
Mar 19 12:14:21.501151 master-0 kubenswrapper[31830]: I0319 12:14:21.499933 31830 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 19 12:14:21.501151 master-0 kubenswrapper[31830]: I0319 12:14:21.499944 31830 flags.go:64] FLAG: --anonymous-auth="true"
Mar 19 12:14:21.501151 master-0 kubenswrapper[31830]: I0319 12:14:21.499952 31830 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 19 12:14:21.501151 master-0 kubenswrapper[31830]: I0319 12:14:21.499960 31830 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 19 12:14:21.501151 master-0 kubenswrapper[31830]: I0319 12:14:21.499966 31830 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 19 12:14:21.501151 master-0 kubenswrapper[31830]: I0319 12:14:21.499975 31830 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 19 12:14:21.501151 master-0 kubenswrapper[31830]: I0319 12:14:21.499983 31830 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 19 12:14:21.501151 master-0 kubenswrapper[31830]: I0319 12:14:21.499989 31830 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 19 12:14:21.501151 master-0 kubenswrapper[31830]: I0319 12:14:21.499996 31830 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 19 12:14:21.501151 master-0 kubenswrapper[31830]: I0319 12:14:21.500002 31830 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 19 12:14:21.502131 master-0 kubenswrapper[31830]: I0319 12:14:21.500009 31830 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 19 12:14:21.502131 master-0 kubenswrapper[31830]: I0319 12:14:21.500015 31830 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 19 12:14:21.502131 master-0 kubenswrapper[31830]: I0319 12:14:21.500022 31830 flags.go:64] FLAG: --cgroup-root=""
Mar 19 12:14:21.502131 master-0 kubenswrapper[31830]: I0319 12:14:21.500028 31830 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 19 12:14:21.502131 master-0 kubenswrapper[31830]: I0319 12:14:21.500035 31830 flags.go:64] FLAG: --client-ca-file=""
Mar 19 12:14:21.502131 master-0 kubenswrapper[31830]: I0319 12:14:21.500040 31830 flags.go:64] FLAG: --cloud-config=""
Mar 19 12:14:21.502131 master-0 kubenswrapper[31830]: I0319 12:14:21.500046 31830 flags.go:64] FLAG: --cloud-provider=""
Mar 19 12:14:21.502131 master-0 kubenswrapper[31830]: I0319 12:14:21.500052 31830 flags.go:64] FLAG: --cluster-dns="[]"
Mar 19 12:14:21.502131 master-0 kubenswrapper[31830]: I0319 12:14:21.500059 31830 flags.go:64] FLAG: --cluster-domain=""
Mar 19 12:14:21.502131 master-0 kubenswrapper[31830]: I0319 12:14:21.500065 31830 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 19 12:14:21.502131 master-0 kubenswrapper[31830]: I0319 12:14:21.500072 31830 flags.go:64] FLAG: --config-dir=""
Mar 19 12:14:21.502131 master-0 kubenswrapper[31830]: I0319 12:14:21.500079 31830 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 19 12:14:21.502131 master-0 kubenswrapper[31830]: I0319 12:14:21.500085 31830 flags.go:64] FLAG: --container-log-max-files="5"
Mar 19 12:14:21.502131 master-0 kubenswrapper[31830]: I0319 12:14:21.500093 31830 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 19 12:14:21.502131 master-0 kubenswrapper[31830]: I0319 12:14:21.500099 31830 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 19 12:14:21.502131 master-0 kubenswrapper[31830]: I0319 12:14:21.500105 31830 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 19 12:14:21.502131 master-0 kubenswrapper[31830]: I0319 12:14:21.500111 31830 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 19 12:14:21.502131 master-0 kubenswrapper[31830]: I0319 12:14:21.500117 31830 flags.go:64] FLAG: --contention-profiling="false"
Mar 19 12:14:21.502131 master-0 kubenswrapper[31830]: I0319 12:14:21.500123 31830 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 19 12:14:21.502131 master-0 kubenswrapper[31830]: I0319 12:14:21.500129 31830 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 19 12:14:21.502131 master-0 kubenswrapper[31830]: I0319 12:14:21.500135 31830 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 19 12:14:21.502131 master-0 kubenswrapper[31830]: I0319 12:14:21.500141 31830 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 19 12:14:21.502131 master-0 kubenswrapper[31830]: I0319 12:14:21.500148 31830 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 19 12:14:21.502131 master-0 kubenswrapper[31830]: I0319 12:14:21.500155 31830 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 19 12:14:21.502131 master-0 kubenswrapper[31830]: I0319 12:14:21.500161 31830 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 19 12:14:21.502833 master-0 kubenswrapper[31830]: I0319 12:14:21.500167 31830 flags.go:64] FLAG: --enable-load-reader="false"
Mar 19 12:14:21.502833 master-0 kubenswrapper[31830]: I0319 12:14:21.500173 31830 flags.go:64] FLAG: --enable-server="true"
Mar 19 12:14:21.502833 master-0 kubenswrapper[31830]: I0319 12:14:21.500179 31830 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 19 12:14:21.502833 master-0 kubenswrapper[31830]: I0319 12:14:21.500193 31830 flags.go:64] FLAG: --event-burst="100"
Mar 19 12:14:21.502833 master-0 kubenswrapper[31830]: I0319 12:14:21.500200 31830 flags.go:64] FLAG: --event-qps="50"
Mar 19 12:14:21.502833 master-0 kubenswrapper[31830]: I0319 12:14:21.500206 31830 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 19 12:14:21.502833 master-0 kubenswrapper[31830]: I0319 12:14:21.500212 31830 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 19 12:14:21.502833 master-0 kubenswrapper[31830]: I0319 12:14:21.500218 31830 flags.go:64] FLAG: --eviction-hard=""
Mar 19 12:14:21.502833 master-0 kubenswrapper[31830]: I0319 12:14:21.500225 31830 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 19 12:14:21.502833 master-0 kubenswrapper[31830]: I0319 12:14:21.500232 31830 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 19 12:14:21.502833 master-0 kubenswrapper[31830]: I0319 12:14:21.500238 31830 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 19 12:14:21.502833 master-0 kubenswrapper[31830]: I0319 12:14:21.500245 31830 flags.go:64] FLAG: --eviction-soft=""
Mar 19 12:14:21.502833 master-0 kubenswrapper[31830]: I0319 12:14:21.500250 31830 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 19 12:14:21.502833 master-0 kubenswrapper[31830]: I0319 12:14:21.500257 31830 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 19 12:14:21.502833 master-0 kubenswrapper[31830]: I0319 12:14:21.500263 31830 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 19 12:14:21.502833 master-0 kubenswrapper[31830]: I0319 12:14:21.500269 31830 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 19 12:14:21.502833 master-0 kubenswrapper[31830]: I0319 12:14:21.500275 31830 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 19 12:14:21.502833 master-0 kubenswrapper[31830]: I0319 12:14:21.500281 31830 flags.go:64] FLAG: --fail-swap-on="true"
Mar 19 12:14:21.502833 master-0 kubenswrapper[31830]: I0319 12:14:21.500287 31830 flags.go:64] FLAG: --feature-gates=""
Mar 19 12:14:21.502833 master-0 kubenswrapper[31830]: I0319 12:14:21.500295 31830 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 19 12:14:21.502833 master-0 kubenswrapper[31830]: I0319 12:14:21.500302 31830 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 19 12:14:21.502833 master-0 kubenswrapper[31830]: I0319 12:14:21.500309 31830 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 19 12:14:21.502833 master-0 kubenswrapper[31830]: I0319 12:14:21.500315 31830 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 19 12:14:21.502833 master-0 kubenswrapper[31830]: I0319 12:14:21.500321 31830 flags.go:64] FLAG: --healthz-port="10248"
Mar 19 12:14:21.502833 master-0 kubenswrapper[31830]: I0319 12:14:21.500328 31830 flags.go:64] FLAG: --help="false"
Mar 19 12:14:21.502833 master-0 kubenswrapper[31830]: I0319 12:14:21.500334 31830 flags.go:64] FLAG: --hostname-override=""
Mar 19 12:14:21.503605 master-0 kubenswrapper[31830]: I0319 12:14:21.500340 31830 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 19 12:14:21.503605 master-0 kubenswrapper[31830]: I0319 12:14:21.500346 31830 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 19 12:14:21.503605 master-0 kubenswrapper[31830]: I0319 12:14:21.500352 31830 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 19 12:14:21.503605 master-0 kubenswrapper[31830]: I0319 12:14:21.500358 31830 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 19 12:14:21.503605 master-0 kubenswrapper[31830]: I0319 12:14:21.500364 31830 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 19 12:14:21.503605 master-0 kubenswrapper[31830]: I0319 12:14:21.500370 31830 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 19 12:14:21.503605 master-0 kubenswrapper[31830]: I0319 12:14:21.500376 31830 flags.go:64] FLAG: --image-service-endpoint=""
Mar 19 12:14:21.503605 master-0 kubenswrapper[31830]: I0319 12:14:21.500382 31830 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 19 12:14:21.503605 master-0 kubenswrapper[31830]: I0319 12:14:21.500388 31830 flags.go:64] FLAG: --kube-api-burst="100"
Mar 19 12:14:21.503605 master-0 kubenswrapper[31830]: I0319 12:14:21.500401 31830 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 19 12:14:21.503605 master-0 kubenswrapper[31830]: I0319 12:14:21.500409 31830 flags.go:64] FLAG: --kube-api-qps="50"
Mar 19 12:14:21.503605 master-0 kubenswrapper[31830]: I0319 12:14:21.500416 31830 flags.go:64] FLAG: --kube-reserved=""
Mar 19 12:14:21.503605 master-0 kubenswrapper[31830]: I0319 12:14:21.500423 31830 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 19 12:14:21.503605 master-0 kubenswrapper[31830]: I0319 12:14:21.500431 31830 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 19 12:14:21.503605 master-0 kubenswrapper[31830]: I0319 12:14:21.500439 31830 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 19 12:14:21.503605 master-0 kubenswrapper[31830]: I0319 12:14:21.500452 31830 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 19 12:14:21.503605 master-0 kubenswrapper[31830]: I0319 12:14:21.500459 31830 flags.go:64] FLAG: --lock-file=""
Mar 19 12:14:21.503605 master-0 kubenswrapper[31830]: I0319 12:14:21.500466 31830 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 19 12:14:21.503605 master-0 kubenswrapper[31830]: I0319 12:14:21.500474 31830 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 19 12:14:21.503605 master-0 kubenswrapper[31830]: I0319 12:14:21.500481 31830 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 19 12:14:21.503605 master-0 kubenswrapper[31830]: I0319 12:14:21.500493 31830 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 19 12:14:21.503605 master-0 kubenswrapper[31830]: I0319 12:14:21.500500 31830 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 19 12:14:21.503605 master-0 kubenswrapper[31830]: I0319 12:14:21.500510 31830 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 19 12:14:21.503605 master-0 kubenswrapper[31830]: I0319 12:14:21.500516 31830 flags.go:64] FLAG: --logging-format="text"
Mar 19 12:14:21.503605 master-0 kubenswrapper[31830]: I0319 12:14:21.500522 31830 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 19 12:14:21.504359 master-0 kubenswrapper[31830]: I0319 12:14:21.500529 31830 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 19 12:14:21.504359 master-0 kubenswrapper[31830]: I0319 12:14:21.500535 31830 flags.go:64] FLAG: --manifest-url=""
Mar 19 12:14:21.504359 master-0 kubenswrapper[31830]: I0319 12:14:21.500541 31830 flags.go:64] FLAG: --manifest-url-header=""
Mar 19 12:14:21.504359 master-0 kubenswrapper[31830]: I0319 12:14:21.500548 31830 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 19 12:14:21.504359 master-0 kubenswrapper[31830]: I0319 12:14:21.500554 31830 flags.go:64] FLAG: --max-open-files="1000000"
Mar 19 12:14:21.504359 master-0 kubenswrapper[31830]: I0319 12:14:21.500562 31830 flags.go:64] FLAG: --max-pods="110"
Mar 19 12:14:21.504359 master-0 kubenswrapper[31830]: I0319 12:14:21.500569 31830 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 19 12:14:21.504359 master-0 kubenswrapper[31830]: I0319 12:14:21.500626 31830 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 19 12:14:21.504359 master-0 kubenswrapper[31830]: I0319 12:14:21.500635 31830 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 19 12:14:21.504359 master-0 kubenswrapper[31830]: I0319 12:14:21.500641 31830 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 19 12:14:21.504359 master-0 kubenswrapper[31830]: I0319 12:14:21.500647 31830 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 19 12:14:21.504359 master-0 kubenswrapper[31830]: I0319 12:14:21.500654 31830 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 19 12:14:21.504359 master-0 kubenswrapper[31830]: I0319 12:14:21.500660 31830 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 19 12:14:21.504359 master-0 kubenswrapper[31830]: I0319 12:14:21.500676 31830 flags.go:64] FLAG: --node-status-max-images="50"
Mar 19 12:14:21.504359 master-0 kubenswrapper[31830]: I0319 12:14:21.500684 31830 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 19 12:14:21.504359 master-0 kubenswrapper[31830]: I0319 12:14:21.500692 31830 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 19 12:14:21.504359 master-0 kubenswrapper[31830]: I0319 12:14:21.500703 31830 flags.go:64] FLAG: --pod-cidr=""
Mar 19 12:14:21.504359 master-0 kubenswrapper[31830]: I0319 12:14:21.500711 31830 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422"
Mar 19 12:14:21.504359 master-0 kubenswrapper[31830]: I0319 12:14:21.500721 31830 flags.go:64] FLAG: --pod-manifest-path=""
Mar 19 12:14:21.504359 master-0 kubenswrapper[31830]: I0319 12:14:21.500727 31830 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 19 12:14:21.504359 master-0 kubenswrapper[31830]: I0319 12:14:21.500733 31830 flags.go:64] FLAG: --pods-per-core="0"
Mar 19 12:14:21.504359 master-0 kubenswrapper[31830]: I0319 12:14:21.500739 31830 flags.go:64] FLAG: --port="10250"
Mar 19 12:14:21.504359 master-0 kubenswrapper[31830]: I0319 12:14:21.500746 31830 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 19 12:14:21.504359 master-0 kubenswrapper[31830]: I0319 12:14:21.500753 31830 flags.go:64] FLAG: --provider-id=""
Mar 19 12:14:21.505150 master-0 kubenswrapper[31830]: I0319 12:14:21.500759 31830 flags.go:64] FLAG: --qos-reserved=""
Mar 19 12:14:21.505150 master-0 kubenswrapper[31830]: I0319 12:14:21.500765 31830 flags.go:64] FLAG: --read-only-port="10255"
Mar 19 12:14:21.505150 master-0 kubenswrapper[31830]: I0319 12:14:21.500771 31830 flags.go:64] FLAG: --register-node="true"
Mar 19 12:14:21.505150 master-0 kubenswrapper[31830]: I0319 12:14:21.500777 31830 flags.go:64] FLAG: --register-schedulable="true"
Mar 19 12:14:21.505150 master-0 kubenswrapper[31830]: I0319 12:14:21.500783 31830 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 19 12:14:21.505150 master-0 kubenswrapper[31830]: I0319 12:14:21.500811 31830 flags.go:64] FLAG: --registry-burst="10"
Mar 19 12:14:21.505150 master-0 kubenswrapper[31830]: I0319 12:14:21.500818 31830 flags.go:64] FLAG: --registry-qps="5"
Mar 19 12:14:21.505150 master-0 kubenswrapper[31830]: I0319 12:14:21.500824 31830 flags.go:64] FLAG: --reserved-cpus=""
Mar 19 12:14:21.505150 master-0 kubenswrapper[31830]: I0319 12:14:21.500830 31830 flags.go:64] FLAG: --reserved-memory=""
Mar 19 12:14:21.505150 master-0 kubenswrapper[31830]: I0319 12:14:21.500837 31830 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 19 12:14:21.505150 master-0 kubenswrapper[31830]: I0319 12:14:21.500845 31830 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 19 12:14:21.505150 master-0 kubenswrapper[31830]: I0319 12:14:21.500852 31830 flags.go:64] FLAG: --rotate-certificates="false"
Mar 19 12:14:21.505150 master-0 kubenswrapper[31830]: I0319 12:14:21.500859 31830 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 19 12:14:21.505150 master-0 kubenswrapper[31830]: I0319 12:14:21.500866 31830 flags.go:64] FLAG: --runonce="false"
Mar 19 12:14:21.505150 master-0 kubenswrapper[31830]: I0319 12:14:21.500872 31830 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 19 12:14:21.505150 master-0 kubenswrapper[31830]: I0319 12:14:21.500878 31830 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 19 12:14:21.505150 master-0 kubenswrapper[31830]: I0319 12:14:21.500886 31830 flags.go:64] FLAG: --seccomp-default="false"
Mar 19 12:14:21.505150 master-0 kubenswrapper[31830]: I0319 12:14:21.500892 31830 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 19 12:14:21.505150 master-0 kubenswrapper[31830]: I0319 12:14:21.500899 31830 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 19 12:14:21.505150 master-0 kubenswrapper[31830]: I0319 12:14:21.500905 31830 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 19 12:14:21.505150 master-0 kubenswrapper[31830]: I0319 12:14:21.500911 31830 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 19 12:14:21.505150 master-0 kubenswrapper[31830]: I0319 12:14:21.500918 31830 flags.go:64] FLAG: --storage-driver-password="root"
Mar 19 12:14:21.505150 master-0 kubenswrapper[31830]: I0319 12:14:21.500923 31830 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 19 12:14:21.505150 master-0 kubenswrapper[31830]: I0319 12:14:21.500930 31830 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 19 12:14:21.505150 master-0 kubenswrapper[31830]: I0319 12:14:21.500938 31830 flags.go:64] FLAG: --storage-driver-user="root"
Mar 19 12:14:21.505971 master-0 kubenswrapper[31830]: I0319 12:14:21.500944 31830 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 19 12:14:21.505971 master-0 kubenswrapper[31830]: I0319 12:14:21.500950 31830 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 19 12:14:21.505971 master-0 kubenswrapper[31830]: I0319 12:14:21.500957 31830 flags.go:64] FLAG: --system-cgroups=""
Mar 19 12:14:21.505971 master-0 kubenswrapper[31830]: I0319 12:14:21.500964 31830 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 19 12:14:21.505971 master-0 kubenswrapper[31830]: I0319 12:14:21.500973 31830 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 19 12:14:21.505971 master-0 kubenswrapper[31830]: I0319 12:14:21.500979 31830 flags.go:64] FLAG: --tls-cert-file=""
Mar 19 12:14:21.505971 master-0 kubenswrapper[31830]: I0319 12:14:21.500985 31830 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 19 12:14:21.505971 master-0 kubenswrapper[31830]: I0319 12:14:21.500993 31830 flags.go:64] FLAG: --tls-min-version=""
Mar 19 12:14:21.505971 master-0 kubenswrapper[31830]: I0319 12:14:21.500999 31830 flags.go:64] FLAG: --tls-private-key-file=""
Mar 19 12:14:21.505971 master-0 kubenswrapper[31830]: I0319 12:14:21.501005 31830 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 19 12:14:21.505971 master-0 kubenswrapper[31830]: I0319 12:14:21.501011 31830 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 19 12:14:21.505971 master-0 kubenswrapper[31830]: I0319 12:14:21.501018 31830 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 19 12:14:21.505971 master-0 kubenswrapper[31830]: I0319 12:14:21.501026 31830 flags.go:64] FLAG: --v="2"
Mar 19 12:14:21.505971 master-0 kubenswrapper[31830]: I0319 12:14:21.501036 31830 flags.go:64] FLAG: --version="false"
Mar 19 12:14:21.505971 master-0 kubenswrapper[31830]: I0319 12:14:21.501045 31830 flags.go:64] FLAG: --vmodule=""
Mar 19 12:14:21.505971 master-0 kubenswrapper[31830]: I0319 12:14:21.501054 31830 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 19 12:14:21.505971 master-0 kubenswrapper[31830]: I0319 12:14:21.501062 31830 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 19 12:14:21.505971 master-0 kubenswrapper[31830]: W0319 12:14:21.501473 31830 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 19 12:14:21.505971 master-0 kubenswrapper[31830]: W0319 12:14:21.501491 31830 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 19 12:14:21.505971 master-0 kubenswrapper[31830]: W0319 12:14:21.501501 31830 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 19 12:14:21.505971 master-0 kubenswrapper[31830]: W0319 12:14:21.501604 31830 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 19 12:14:21.505971 master-0 kubenswrapper[31830]: W0319 12:14:21.501613 31830 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 19 12:14:21.505971 master-0 kubenswrapper[31830]: W0319 12:14:21.501620 31830 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 19 12:14:21.505971 master-0 kubenswrapper[31830]: W0319 12:14:21.501641 31830 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 19 12:14:21.506776 master-0 kubenswrapper[31830]: W0319 12:14:21.501650 31830 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 19 12:14:21.506776 master-0 kubenswrapper[31830]: W0319 12:14:21.501657 31830 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 19 12:14:21.506776 master-0 kubenswrapper[31830]: W0319 12:14:21.501738 31830 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 19 12:14:21.506776 master-0 kubenswrapper[31830]: W0319 12:14:21.501752 31830 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 19 12:14:21.506776 master-0 kubenswrapper[31830]: W0319 12:14:21.501760 31830 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 19 12:14:21.506776 master-0 kubenswrapper[31830]: W0319 12:14:21.501843 31830 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 19 12:14:21.506776 master-0 kubenswrapper[31830]: W0319 12:14:21.501891 31830 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 19 12:14:21.506776 master-0 kubenswrapper[31830]: W0319 12:14:21.501901 31830 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 19 12:14:21.506776 master-0 kubenswrapper[31830]: W0319 12:14:21.501908 31830 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 19 12:14:21.506776 master-0 kubenswrapper[31830]: W0319 12:14:21.501915 31830 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 19 12:14:21.506776 master-0 kubenswrapper[31830]: W0319 12:14:21.501923 31830 feature_gate.go:330] unrecognized feature gate: Example
Mar 19 12:14:21.506776 master-0 kubenswrapper[31830]: W0319 12:14:21.501938 31830 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 19 12:14:21.506776 master-0 kubenswrapper[31830]: W0319 12:14:21.501946 31830 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 19 12:14:21.506776 master-0 kubenswrapper[31830]: W0319 12:14:21.501953 31830 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 19 12:14:21.506776 master-0 kubenswrapper[31830]: W0319 12:14:21.501960 31830 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 19 12:14:21.506776 master-0 kubenswrapper[31830]: W0319 12:14:21.501968 31830 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 19 12:14:21.506776 master-0 kubenswrapper[31830]: W0319 12:14:21.501976 31830 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 19 12:14:21.506776 master-0 kubenswrapper[31830]: W0319 12:14:21.501983 31830 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 19 12:14:21.506776 master-0 kubenswrapper[31830]: W0319 12:14:21.501990 31830 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 19 12:14:21.506776 master-0 kubenswrapper[31830]: W0319 12:14:21.501997 31830 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 19 12:14:21.507328 master-0 kubenswrapper[31830]: W0319 12:14:21.502007 31830 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 19 12:14:21.507328 master-0 kubenswrapper[31830]: W0319 12:14:21.502016 31830 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 19 12:14:21.507328 master-0 kubenswrapper[31830]: W0319 12:14:21.502024 31830 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 19 12:14:21.507328 master-0 kubenswrapper[31830]: W0319 12:14:21.502039 31830 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 19 12:14:21.507328 master-0 kubenswrapper[31830]: W0319 12:14:21.502047 31830 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 19 12:14:21.507328 master-0 kubenswrapper[31830]: W0319 12:14:21.502058 31830 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 19 12:14:21.507328 master-0 kubenswrapper[31830]: W0319 12:14:21.502069 31830 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 19 12:14:21.507328 master-0 kubenswrapper[31830]: W0319 12:14:21.502077 31830 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 19 12:14:21.507328 master-0 kubenswrapper[31830]: W0319 12:14:21.502086 31830 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 19 12:14:21.507328 master-0 kubenswrapper[31830]: W0319 12:14:21.502093 31830 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 19 12:14:21.507328 master-0 kubenswrapper[31830]: W0319 12:14:21.502101 31830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 19 12:14:21.507328 master-0 kubenswrapper[31830]: W0319 12:14:21.502108 31830 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 19 12:14:21.507328 master-0 kubenswrapper[31830]: W0319 12:14:21.502116 31830 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 19 12:14:21.507328 master-0 kubenswrapper[31830]: W0319 12:14:21.502123 31830 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 19 12:14:21.507328 master-0 kubenswrapper[31830]: W0319 12:14:21.502130 31830 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 19 12:14:21.507328 master-0 kubenswrapper[31830]: W0319 12:14:21.502137 31830 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 19 12:14:21.507328 master-0 kubenswrapper[31830]: W0319 12:14:21.502151 31830 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 19 12:14:21.507328 master-0 kubenswrapper[31830]: W0319 12:14:21.502160 31830 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 19 12:14:21.507328 master-0 kubenswrapper[31830]: W0319 12:14:21.502167 31830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 19 12:14:21.507871 master-0 kubenswrapper[31830]: W0319 12:14:21.502174 31830 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 19 12:14:21.507871 master-0 kubenswrapper[31830]: W0319 12:14:21.502182 31830 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 19 12:14:21.507871 master-0 kubenswrapper[31830]: W0319 12:14:21.502192 31830 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 19 12:14:21.507871 master-0 kubenswrapper[31830]: W0319 12:14:21.502200 31830 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 19 12:14:21.507871 master-0 kubenswrapper[31830]: W0319 12:14:21.502207 31830 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 19 12:14:21.507871 master-0 kubenswrapper[31830]: W0319 12:14:21.502214 31830 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 19 12:14:21.507871 master-0 kubenswrapper[31830]: W0319 12:14:21.502222 31830 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 19 12:14:21.507871 master-0 kubenswrapper[31830]: W0319 12:14:21.502231 31830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 19 12:14:21.507871 master-0 kubenswrapper[31830]: W0319 12:14:21.502238 31830 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 19 12:14:21.507871 master-0 kubenswrapper[31830]: W0319 12:14:21.502251 31830 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 19 12:14:21.507871 master-0 kubenswrapper[31830]: W0319 12:14:21.502258 31830 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 19 12:14:21.507871 master-0 kubenswrapper[31830]: W0319 12:14:21.502265 31830 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 19 12:14:21.507871 master-0 kubenswrapper[31830]: W0319 12:14:21.502272 31830 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 19 12:14:21.507871 master-0 kubenswrapper[31830]: W0319 12:14:21.502281 31830 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 19 12:14:21.507871 master-0 kubenswrapper[31830]: W0319 12:14:21.502292 31830 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 19 12:14:21.507871 master-0 kubenswrapper[31830]: W0319 12:14:21.502302 31830 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 19 12:14:21.507871 master-0 kubenswrapper[31830]: W0319 12:14:21.502312 31830 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 19 12:14:21.507871 master-0 kubenswrapper[31830]: W0319 12:14:21.502320 31830 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 19 12:14:21.507871 master-0 kubenswrapper[31830]: W0319 12:14:21.502328 31830 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 19 12:14:21.508353 master-0 kubenswrapper[31830]: W0319 12:14:21.502336 31830 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 19 12:14:21.508353 master-0 kubenswrapper[31830]: W0319 12:14:21.502344 31830 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 19 12:14:21.508353 master-0 kubenswrapper[31830]: W0319 12:14:21.502360 31830 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 19 12:14:21.508353 master-0 kubenswrapper[31830]: W0319 12:14:21.502368 31830 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 19 12:14:21.508353 master-0 kubenswrapper[31830]: W0319 12:14:21.502375 31830 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 19 12:14:21.508353 master-0 kubenswrapper[31830]: W0319 12:14:21.502382 31830 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 19 12:14:21.508353 master-0 kubenswrapper[31830]: W0319 12:14:21.502390 31830 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 19 12:14:21.508353 master-0 kubenswrapper[31830]: I0319 12:14:21.502402 31830 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 19 12:14:21.511115 master-0 kubenswrapper[31830]: I0319 12:14:21.511061 31830 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 19 12:14:21.511115 master-0 kubenswrapper[31830]: I0319 12:14:21.511110 31830 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 19 12:14:21.511458 master-0 kubenswrapper[31830]: W0319 12:14:21.511234 31830 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 19 12:14:21.511458 master-0 kubenswrapper[31830]: W0319 12:14:21.511245 31830 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 19 12:14:21.511458 master-0 kubenswrapper[31830]: W0319 12:14:21.511252 31830 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 19 12:14:21.511458 master-0 kubenswrapper[31830]: W0319 12:14:21.511259 31830 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 19 12:14:21.511458 master-0 kubenswrapper[31830]: W0319 12:14:21.511268 31830 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 19 12:14:21.511458 master-0 kubenswrapper[31830]: W0319 12:14:21.511276 31830 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 19 12:14:21.511458 master-0 kubenswrapper[31830]: W0319 12:14:21.511284 31830 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 19 12:14:21.511458 master-0 kubenswrapper[31830]: W0319 12:14:21.511291 31830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 19 12:14:21.511458 master-0 kubenswrapper[31830]: W0319 12:14:21.511299 31830 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 19 12:14:21.511458 master-0 kubenswrapper[31830]: W0319 12:14:21.511307 31830 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 19 12:14:21.511458 master-0 kubenswrapper[31830]: W0319 12:14:21.511314 31830 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 19 12:14:21.511458 master-0 kubenswrapper[31830]: W0319 12:14:21.511320 31830 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 19 12:14:21.511458 master-0 kubenswrapper[31830]: W0319 12:14:21.511326 31830 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 19 12:14:21.511458 master-0 kubenswrapper[31830]: W0319 12:14:21.511332 31830 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 19 12:14:21.511458 master-0 kubenswrapper[31830]: W0319 12:14:21.511341 31830 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 19 12:14:21.511458 master-0 kubenswrapper[31830]: W0319 12:14:21.511352 31830 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 19 12:14:21.511458 master-0 kubenswrapper[31830]: W0319 12:14:21.511357 31830 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 19 12:14:21.511458 master-0 kubenswrapper[31830]: W0319 12:14:21.511363 31830 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 19 12:14:21.511458 master-0 kubenswrapper[31830]: W0319 12:14:21.511368 31830 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 19 12:14:21.512302 master-0 kubenswrapper[31830]: W0319 12:14:21.511373 31830 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 19 12:14:21.512302 master-0 kubenswrapper[31830]: W0319 12:14:21.511378 31830 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 19 12:14:21.512302 master-0 kubenswrapper[31830]: W0319 12:14:21.511383 31830 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 19 12:14:21.512302 master-0 kubenswrapper[31830]: W0319 12:14:21.511388 31830 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 19 12:14:21.512302 master-0 kubenswrapper[31830]: W0319 12:14:21.511394 31830 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 19 12:14:21.512302 master-0 kubenswrapper[31830]: W0319 12:14:21.511401 31830 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 19 12:14:21.512302 master-0 kubenswrapper[31830]: W0319 12:14:21.511408 31830 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 19 12:14:21.512302 master-0 kubenswrapper[31830]: W0319 12:14:21.511414 31830 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 19 12:14:21.512302 master-0 kubenswrapper[31830]: W0319 12:14:21.511419 31830 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 19 12:14:21.512302 master-0 kubenswrapper[31830]: W0319 12:14:21.511424 31830 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 19 12:14:21.512302 master-0 kubenswrapper[31830]: W0319 12:14:21.511430 31830 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 19 12:14:21.512302 master-0 kubenswrapper[31830]: W0319 12:14:21.511436 31830 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 19 12:14:21.512302 master-0 kubenswrapper[31830]: W0319 12:14:21.511441 31830 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 19 12:14:21.512302 master-0 kubenswrapper[31830]: W0319 12:14:21.511446 31830 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 19 12:14:21.512302 master-0 kubenswrapper[31830]: W0319 12:14:21.511451 31830 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 19 12:14:21.512302 master-0 kubenswrapper[31830]: W0319 12:14:21.511457 31830 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 19 12:14:21.512302 master-0 kubenswrapper[31830]: W0319 12:14:21.511462 31830 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 19 12:14:21.512302 master-0 kubenswrapper[31830]: W0319 12:14:21.511468 31830 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 19 12:14:21.512302 master-0 kubenswrapper[31830]: W0319 12:14:21.511473 31830 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 19 12:14:21.513149 master-0 kubenswrapper[31830]: W0319 12:14:21.511478 31830 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 19 12:14:21.513149 master-0 kubenswrapper[31830]: W0319 12:14:21.511485 31830 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 19 12:14:21.513149 master-0 kubenswrapper[31830]: W0319 12:14:21.511491 31830 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 19 12:14:21.513149 master-0 kubenswrapper[31830]: W0319 12:14:21.511498 31830 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 19 12:14:21.513149 master-0 kubenswrapper[31830]: W0319 12:14:21.511505 31830 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 19 12:14:21.513149 master-0 kubenswrapper[31830]: W0319 12:14:21.511512 31830 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 19 12:14:21.513149 master-0 kubenswrapper[31830]: W0319 12:14:21.511518 31830 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 19 12:14:21.513149 master-0 kubenswrapper[31830]: W0319 12:14:21.511525 31830 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 19 12:14:21.513149 master-0 kubenswrapper[31830]: W0319 12:14:21.511532 31830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 19 12:14:21.513149 master-0 kubenswrapper[31830]: W0319 12:14:21.511540 31830 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 19 12:14:21.513149 master-0 kubenswrapper[31830]: W0319 12:14:21.511547 31830 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 19 12:14:21.513149 master-0 kubenswrapper[31830]: W0319 12:14:21.511553 31830 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 19 12:14:21.513149 master-0 kubenswrapper[31830]: W0319 12:14:21.511560 31830 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 19 12:14:21.513149 master-0 kubenswrapper[31830]: W0319 12:14:21.511566 31830 feature_gate.go:330] unrecognized feature gate: Example
Mar 19 12:14:21.513149 master-0 kubenswrapper[31830]: W0319 12:14:21.511574 31830 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 19 12:14:21.513149 master-0 kubenswrapper[31830]: W0319 12:14:21.511580 31830 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 19 12:14:21.513149 master-0 kubenswrapper[31830]: W0319 12:14:21.511586 31830 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 19 12:14:21.513149 master-0 kubenswrapper[31830]: W0319 12:14:21.511592 31830 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 19 12:14:21.513149 master-0 kubenswrapper[31830]: W0319 12:14:21.511598 31830 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 19 12:14:21.513149 master-0 kubenswrapper[31830]: W0319 12:14:21.511604 31830 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 19 12:14:21.513877 master-0 kubenswrapper[31830]: W0319 12:14:21.511610 31830 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 19 12:14:21.513877 master-0 kubenswrapper[31830]: W0319 12:14:21.511616 31830 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 19 12:14:21.513877 master-0 kubenswrapper[31830]: W0319 12:14:21.511623 31830 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 19 12:14:21.513877 master-0 kubenswrapper[31830]: W0319 12:14:21.511630 31830 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 19 12:14:21.513877 master-0 kubenswrapper[31830]: W0319 12:14:21.511636 31830 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 19 12:14:21.513877 master-0 kubenswrapper[31830]: W0319 12:14:21.511643 31830 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 19 12:14:21.513877 master-0 kubenswrapper[31830]: W0319 12:14:21.511652 31830 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 19 12:14:21.513877 master-0 kubenswrapper[31830]: W0319 12:14:21.511660 31830 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 19 12:14:21.513877 master-0 kubenswrapper[31830]: W0319 12:14:21.511667 31830 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 19 12:14:21.513877 master-0 kubenswrapper[31830]: W0319 12:14:21.511672 31830 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 19 12:14:21.513877 master-0 kubenswrapper[31830]: W0319 12:14:21.511678 31830 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 19 12:14:21.513877 master-0 kubenswrapper[31830]: W0319 12:14:21.511683 31830 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 19 12:14:21.513877 master-0 kubenswrapper[31830]: W0319 12:14:21.511688 31830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 19 12:14:21.513877 master-0 kubenswrapper[31830]: W0319 12:14:21.511692 31830 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 19 12:14:21.513877 master-0 kubenswrapper[31830]: I0319 12:14:21.511702 31830 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 19 12:14:21.514624 master-0 kubenswrapper[31830]: W0319 12:14:21.511910 31830 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 19 12:14:21.514624 master-0 kubenswrapper[31830]: W0319 12:14:21.511923 31830 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 19 12:14:21.514624 master-0 kubenswrapper[31830]: W0319 12:14:21.511929 31830 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 19 12:14:21.514624 master-0 kubenswrapper[31830]: W0319 12:14:21.511935 31830 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 19 12:14:21.514624 master-0 kubenswrapper[31830]: W0319 12:14:21.511940 31830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 19 12:14:21.514624 master-0 kubenswrapper[31830]: W0319 12:14:21.511946 31830 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 19 12:14:21.514624 master-0 kubenswrapper[31830]: W0319 12:14:21.511953 31830 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 19 12:14:21.514624 master-0 kubenswrapper[31830]: W0319 12:14:21.511961 31830 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 19 12:14:21.514624 master-0 kubenswrapper[31830]: W0319 12:14:21.511967 31830 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 19 12:14:21.514624 master-0 kubenswrapper[31830]: W0319 12:14:21.511974 31830 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 19 12:14:21.514624 master-0 kubenswrapper[31830]: W0319 12:14:21.511980 31830 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 19 12:14:21.514624 master-0 kubenswrapper[31830]: W0319 12:14:21.511986 31830 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 19 12:14:21.514624 master-0 kubenswrapper[31830]: W0319 12:14:21.511991 31830 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 19 12:14:21.514624 master-0 kubenswrapper[31830]: W0319 12:14:21.511996 31830 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 19 12:14:21.514624 master-0 kubenswrapper[31830]: W0319 12:14:21.512002 31830 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 19 12:14:21.514624 master-0 kubenswrapper[31830]: W0319 12:14:21.512007 31830 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 19 12:14:21.514624 master-0 kubenswrapper[31830]: W0319 12:14:21.512013 31830 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 19 12:14:21.514624 master-0 kubenswrapper[31830]: W0319 12:14:21.512019 31830 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 19 12:14:21.514624 master-0 kubenswrapper[31830]: W0319 12:14:21.512024 31830 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 19 12:14:21.514624 master-0 kubenswrapper[31830]: W0319 12:14:21.512030 31830 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 19 12:14:21.515739 master-0 kubenswrapper[31830]: W0319 12:14:21.512035 31830 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 19 12:14:21.515739 master-0 kubenswrapper[31830]: W0319 12:14:21.512041 31830 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 19 12:14:21.515739 master-0 kubenswrapper[31830]: W0319 12:14:21.512046 31830 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 19 12:14:21.515739 master-0 kubenswrapper[31830]: W0319 12:14:21.512051 31830 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 19 12:14:21.515739 master-0 kubenswrapper[31830]: W0319 12:14:21.512058 31830 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 19 12:14:21.515739 master-0 kubenswrapper[31830]: W0319 12:14:21.512066 31830 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 19 12:14:21.515739 master-0 kubenswrapper[31830]: W0319 12:14:21.512071 31830 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 19 12:14:21.515739 master-0 kubenswrapper[31830]: W0319 12:14:21.512078 31830 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 19 12:14:21.515739 master-0 kubenswrapper[31830]: W0319 12:14:21.512085 31830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 19 12:14:21.515739 master-0 kubenswrapper[31830]: W0319 12:14:21.512091 31830 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 19 12:14:21.515739 master-0 kubenswrapper[31830]: W0319 12:14:21.512097 31830 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 19 12:14:21.515739 master-0 kubenswrapper[31830]: W0319 12:14:21.512102 31830 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 19 12:14:21.515739 master-0 kubenswrapper[31830]: W0319 12:14:21.512107 31830 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 19 12:14:21.515739 master-0 kubenswrapper[31830]: W0319 12:14:21.512112 31830 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 19 12:14:21.515739 master-0 kubenswrapper[31830]: W0319 12:14:21.512118 31830 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 19 12:14:21.515739 master-0 kubenswrapper[31830]: W0319 12:14:21.512123 31830 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 19 12:14:21.515739 master-0 kubenswrapper[31830]: W0319 12:14:21.512128 31830 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 19 12:14:21.515739 master-0 kubenswrapper[31830]: W0319 12:14:21.512133 31830 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 19 12:14:21.515739 master-0 kubenswrapper[31830]: W0319 12:14:21.512138 31830 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 19 12:14:21.517097 master-0 kubenswrapper[31830]: W0319 12:14:21.512144 31830 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 19 12:14:21.517097 master-0 kubenswrapper[31830]: W0319 12:14:21.512149 31830 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 19 12:14:21.517097 master-0 kubenswrapper[31830]: W0319 12:14:21.512154 31830 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 19 12:14:21.517097 master-0 kubenswrapper[31830]: W0319 12:14:21.512159 31830 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 19 12:14:21.517097 master-0 kubenswrapper[31830]: W0319 12:14:21.512164 31830 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 19 12:14:21.517097 master-0 kubenswrapper[31830]: W0319 12:14:21.512169 31830 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 19 12:14:21.517097 master-0 kubenswrapper[31830]: W0319 12:14:21.512174 31830 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 19 12:14:21.517097 master-0 kubenswrapper[31830]: W0319 12:14:21.512180 31830 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 19 12:14:21.517097 master-0 kubenswrapper[31830]: W0319 12:14:21.512185 31830 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 19 12:14:21.517097 master-0 kubenswrapper[31830]: W0319 12:14:21.512190 31830 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 19 12:14:21.517097 master-0 kubenswrapper[31830]: W0319 12:14:21.512195 31830 feature_gate.go:330] unrecognized feature gate: Example Mar 19 12:14:21.517097 master-0 kubenswrapper[31830]: W0319 12:14:21.512201 31830 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 19 12:14:21.517097 master-0 kubenswrapper[31830]: W0319 12:14:21.512208 31830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 19 12:14:21.517097 master-0 kubenswrapper[31830]: W0319 12:14:21.512213 31830 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 19 12:14:21.517097 master-0 kubenswrapper[31830]: W0319 12:14:21.512218 31830 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 19 12:14:21.517097 master-0 kubenswrapper[31830]: W0319 12:14:21.512223 31830 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 19 12:14:21.517097 master-0 kubenswrapper[31830]: W0319 12:14:21.512228 31830 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 19 12:14:21.517097 master-0 kubenswrapper[31830]: W0319 12:14:21.512233 31830 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 19 12:14:21.517097 master-0 kubenswrapper[31830]: W0319 12:14:21.512238 31830 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 19 12:14:21.518060 master-0 kubenswrapper[31830]: W0319 12:14:21.512243 31830 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 19 12:14:21.518060 master-0 kubenswrapper[31830]: W0319 12:14:21.512248 31830 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 19 12:14:21.518060 master-0 kubenswrapper[31830]: W0319 12:14:21.512253 31830 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 19 12:14:21.518060 master-0 kubenswrapper[31830]: W0319 12:14:21.512258 31830 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 19 12:14:21.518060 master-0 kubenswrapper[31830]: W0319 12:14:21.512263 31830 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 19 12:14:21.518060 master-0 kubenswrapper[31830]: W0319 12:14:21.512268 31830 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 19 12:14:21.518060 master-0 kubenswrapper[31830]: W0319 12:14:21.512273 31830 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 19 12:14:21.518060 master-0 kubenswrapper[31830]: W0319 12:14:21.512278 31830 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 19 12:14:21.518060 master-0 kubenswrapper[31830]: W0319 12:14:21.512283 31830 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 19 12:14:21.518060 master-0 kubenswrapper[31830]: W0319 12:14:21.512288 31830 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 19 12:14:21.518060 master-0 kubenswrapper[31830]: W0319 12:14:21.512293 31830 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 19 12:14:21.518060 master-0 kubenswrapper[31830]: W0319 12:14:21.512298 31830 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 19 12:14:21.518060 master-0 kubenswrapper[31830]: W0319 12:14:21.512303 31830 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 19 12:14:21.518060 master-0 kubenswrapper[31830]: W0319 12:14:21.512309 31830 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 19 12:14:21.518060 master-0 kubenswrapper[31830]: I0319 12:14:21.512317 31830 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false 
ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 19 12:14:21.518878 master-0 kubenswrapper[31830]: I0319 12:14:21.512534 31830 server.go:940] "Client rotation is on, will bootstrap in background" Mar 19 12:14:21.518878 master-0 kubenswrapper[31830]: I0319 12:14:21.514739 31830 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Mar 19 12:14:21.518878 master-0 kubenswrapper[31830]: I0319 12:14:21.514855 31830 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 19 12:14:21.518878 master-0 kubenswrapper[31830]: I0319 12:14:21.515171 31830 server.go:997] "Starting client certificate rotation" Mar 19 12:14:21.518878 master-0 kubenswrapper[31830]: I0319 12:14:21.515187 31830 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Mar 19 12:14:21.518878 master-0 kubenswrapper[31830]: I0319 12:14:21.515478 31830 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-20 11:43:21 +0000 UTC, rotation deadline is 2026-03-20 05:48:42.368380562 +0000 UTC Mar 19 12:14:21.518878 master-0 kubenswrapper[31830]: I0319 12:14:21.515581 31830 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 17h34m20.85280589s for next certificate rotation Mar 19 12:14:21.518878 master-0 kubenswrapper[31830]: I0319 12:14:21.516441 31830 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 19 12:14:21.518878 master-0 kubenswrapper[31830]: I0319 12:14:21.518167 31830 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 19 12:14:21.521645 master-0 kubenswrapper[31830]: I0319 12:14:21.521599 31830 log.go:25] "Validated CRI v1 runtime API" Mar 19 12:14:21.527348 master-0 kubenswrapper[31830]: I0319 12:14:21.527313 31830 log.go:25] "Validated CRI v1 image API" Mar 19 12:14:21.528354 master-0 kubenswrapper[31830]: I0319 12:14:21.528327 31830 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 19 12:14:21.548196 master-0 kubenswrapper[31830]: I0319 12:14:21.548041 31830 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 f39678f0-0749-4469-b061-899c5a9052e6:/dev/vda3] Mar 19 12:14:21.550698 master-0 kubenswrapper[31830]: I0319 12:14:21.548181 31830 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/051890867de8ff413fdae42afc2ad5867d80bb4189ee315587bdfb2254762fa5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/051890867de8ff413fdae42afc2ad5867d80bb4189ee315587bdfb2254762fa5/userdata/shm major:0 minor:339 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/06387b0da219a3280f534fa3f57451c18534c297d33ad18d4503e53efc4f6f2f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/06387b0da219a3280f534fa3f57451c18534c297d33ad18d4503e53efc4f6f2f/userdata/shm major:0 minor:153 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/099f1cf5ddb64458132dd6fe55ba3878ce79ff183de73a0ef9c8fa9295853b5c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/099f1cf5ddb64458132dd6fe55ba3878ce79ff183de73a0ef9c8fa9295853b5c/userdata/shm major:0 minor:638 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0c17be488f74c65475492714ea2841534c84f72d155a2152b6dab678c10b46b6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0c17be488f74c65475492714ea2841534c84f72d155a2152b6dab678c10b46b6/userdata/shm major:0 minor:455 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0dabb76ec554d4e59d0494fc5bb751b125c5d1b8f29112c6e51c360eb8f3c374/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0dabb76ec554d4e59d0494fc5bb751b125c5d1b8f29112c6e51c360eb8f3c374/userdata/shm major:0 minor:621 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/16106b77f2e7c1585811668327c4be2d10fe7576f2ed79d2b198fca95be86d2c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/16106b77f2e7c1585811668327c4be2d10fe7576f2ed79d2b198fca95be86d2c/userdata/shm major:0 minor:252 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1bedd36b2e748d7ffe9c8b9ed3a8c9c7331d2765980332a3cebdddee8a321573/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1bedd36b2e748d7ffe9c8b9ed3a8c9c7331d2765980332a3cebdddee8a321573/userdata/shm major:0 minor:725 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1da3868b3838b62f3e5d20f215a32847d5bb12874480e83fc7036c9466a82c5e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1da3868b3838b62f3e5d20f215a32847d5bb12874480e83fc7036c9466a82c5e/userdata/shm major:0 minor:495 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1fc61313284938071ce89eb1211d46af435fab3cccf3e32e3b4afcbf9419655a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1fc61313284938071ce89eb1211d46af435fab3cccf3e32e3b4afcbf9419655a/userdata/shm major:0 minor:266 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/20538e6325cc6dc9adb3e30dce1ce797ed61d07679d7f2cd71ef1bf8c18874ea/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/20538e6325cc6dc9adb3e30dce1ce797ed61d07679d7f2cd71ef1bf8c18874ea/userdata/shm major:0 minor:723 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/24de2a964d2fa28c5bff828df5f742d99916541dc1152f4dcdf6f4231784eba1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/24de2a964d2fa28c5bff828df5f742d99916541dc1152f4dcdf6f4231784eba1/userdata/shm major:0 minor:270 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/27514f785ebf129e635b61742d2a50f4b4590a69d29ba2f3c58ee430e3465119/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/27514f785ebf129e635b61742d2a50f4b4590a69d29ba2f3c58ee430e3465119/userdata/shm major:0 minor:897 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/27ccdb8fe17b3c5cb9acf1759072b6837f5312b119b69e4b34ee0c362bd4382c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/27ccdb8fe17b3c5cb9acf1759072b6837f5312b119b69e4b34ee0c362bd4382c/userdata/shm major:0 minor:89 
fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/28d0f82641cafb71075882375625371208c9e0463ead97b0053c16e9ee43470f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/28d0f82641cafb71075882375625371208c9e0463ead97b0053c16e9ee43470f/userdata/shm major:0 minor:928 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2ca9e696adafe66b3ba3814f26ea9bb916ca5c1804785c0e742201ad82ee9c18/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2ca9e696adafe66b3ba3814f26ea9bb916ca5c1804785c0e742201ad82ee9c18/userdata/shm major:0 minor:274 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2ea49210674ab53911da00e8c007432ee001baf1726a3c4349603d4b14736471/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2ea49210674ab53911da00e8c007432ee001baf1726a3c4349603d4b14736471/userdata/shm major:0 minor:64 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/330def8aa1845ebd7a95a673279619d604275f079a7efa3f16b2060b0fd2594e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/330def8aa1845ebd7a95a673279619d604275f079a7efa3f16b2060b0fd2594e/userdata/shm major:0 minor:1064 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/33355c55e294585ceaa17697d7356477785bdaba3177d324b39df2dc095c31c6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/33355c55e294585ceaa17697d7356477785bdaba3177d324b39df2dc095c31c6/userdata/shm major:0 minor:635 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/37064f92bb167f0d220b06c690c09b197d0f10b42a8e406aad7f8d634bcea6be/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/37064f92bb167f0d220b06c690c09b197d0f10b42a8e406aad7f8d634bcea6be/userdata/shm major:0 minor:643 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3756314b5f9faad34dff96625b9ef78c27d73db523c30a3f82a5ea254d67fd72/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3756314b5f9faad34dff96625b9ef78c27d73db523c30a3f82a5ea254d67fd72/userdata/shm major:0 minor:895 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/37b898c3ae24210a5aa4f86ab00e075925f0f6e4fde94632405ba19b0f9e0d1d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/37b898c3ae24210a5aa4f86ab00e075925f0f6e4fde94632405ba19b0f9e0d1d/userdata/shm major:0 minor:254 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3e9cb8897ccc8cd32e99de4908536f646397f9314e55ffb6dadd385187e9f1b0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3e9cb8897ccc8cd32e99de4908536f646397f9314e55ffb6dadd385187e9f1b0/userdata/shm major:0 minor:841 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3f20a730c4d5f1f1345d78c2bd60c5b238848ecf855493b53e0f599fc51845ac/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3f20a730c4d5f1f1345d78c2bd60c5b238848ecf855493b53e0f599fc51845ac/userdata/shm major:0 minor:497 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/401877adce8e78dfdd3ac293a53a75da77fa4a3177086a087aa6915ac4d36604/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/401877adce8e78dfdd3ac293a53a75da77fa4a3177086a087aa6915ac4d36604/userdata/shm major:0 minor:113 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/43f216a933b60c080a956b5e1d05307037754c5207355d8b96b4c2f7227054f0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/43f216a933b60c080a956b5e1d05307037754c5207355d8b96b4c2f7227054f0/userdata/shm major:0 minor:541 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4cdef734b9abebf7ad3957d15cc0c1c6f03e77f6869e579c27076c986f6c0a2c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4cdef734b9abebf7ad3957d15cc0c1c6f03e77f6869e579c27076c986f6c0a2c/userdata/shm major:0 minor:958 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4d126177d3103b9726cb0abe507c291aeac9fb33c980d607daaa2352bbce8e96/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4d126177d3103b9726cb0abe507c291aeac9fb33c980d607daaa2352bbce8e96/userdata/shm major:0 minor:629 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5396ef64e03af5cd8fbb98838e00f4f08020d9b7b41c5ccef26950f1e41fec60/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5396ef64e03af5cd8fbb98838e00f4f08020d9b7b41c5ccef26950f1e41fec60/userdata/shm major:0 minor:142 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/58d1369a13582afcb1d55c539b0bf53f7dd57cd88bda12b4a87bc5a6b8e84cbc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/58d1369a13582afcb1d55c539b0bf53f7dd57cd88bda12b4a87bc5a6b8e84cbc/userdata/shm major:0 minor:260 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/593c680a830380526e444778c9d64ee368aed54b01a56b5393d8626c11e75704/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/593c680a830380526e444778c9d64ee368aed54b01a56b5393d8626c11e75704/userdata/shm major:0 minor:819 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5971350293b565068e613eaa81b7b38f49914ad973eb8343f33aa9abaed290e9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5971350293b565068e613eaa81b7b38f49914ad973eb8343f33aa9abaed290e9/userdata/shm major:0 minor:716 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5a539aaaf2dd4db935a04de17d4edc2ce062fa7a5a29f257bfd8c8188731698f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5a539aaaf2dd4db935a04de17d4edc2ce062fa7a5a29f257bfd8c8188731698f/userdata/shm major:0 minor:415 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/629e57f409989b86433406dbc0486de42ee1d2a4a26b2835682900a861605e8f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/629e57f409989b86433406dbc0486de42ee1d2a4a26b2835682900a861605e8f/userdata/shm major:0 minor:379 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/63407ab3b9286932d0ec228766542c2a928958a160c225ce1f7624b3e7a02447/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/63407ab3b9286932d0ec228766542c2a928958a160c225ce1f7624b3e7a02447/userdata/shm major:0 minor:272 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/657e67ca992e83dd97b428ec2664479ed04815d8dada9aa63b0bd9e585d0e3d7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/657e67ca992e83dd97b428ec2664479ed04815d8dada9aa63b0bd9e585d0e3d7/userdata/shm major:0 minor:258 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6b418b5a6ab7d2f0fbb7cd5733cda224a66315648fe46c18f09905494c67309d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6b418b5a6ab7d2f0fbb7cd5733cda224a66315648fe46c18f09905494c67309d/userdata/shm major:0 minor:812 
fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6d678386c9d8ee3ccaf97160a5d644fc4f5d17544c6fb3d29d199b1c5b6b5add/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6d678386c9d8ee3ccaf97160a5d644fc4f5d17544c6fb3d29d199b1c5b6b5add/userdata/shm major:0 minor:546 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/71366739cc36c89d457d62d7f1f48c8768fc7ba64a4206c9c873e79bda714a8a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/71366739cc36c89d457d62d7f1f48c8768fc7ba64a4206c9c873e79bda714a8a/userdata/shm major:0 minor:634 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/732ea3fa30562cbb548cbd63878cf98bf16844c3fd2ba6668a55873990319c2d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/732ea3fa30562cbb548cbd63878cf98bf16844c3fd2ba6668a55873990319c2d/userdata/shm major:0 minor:287 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7da5b8963c0c07bf615297cea6af913ce19795e600e076c4d580e948922fa865/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7da5b8963c0c07bf615297cea6af913ce19795e600e076c4d580e948922fa865/userdata/shm major:0 minor:256 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7f1b2390d179c87af7aa642ae5d602040372528fd159e31c142302ed10484ef5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7f1b2390d179c87af7aa642ae5d602040372528fd159e31c142302ed10484ef5/userdata/shm major:0 minor:632 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/81e5dd60f8e8f398fbc94edc5ee4b7a7c46081fef1fa9b130b775ed3aebea712/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/81e5dd60f8e8f398fbc94edc5ee4b7a7c46081fef1fa9b130b775ed3aebea712/userdata/shm major:0 minor:840 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/84ed2f0d88ece07075010bba0c167b7f10255b8043408ff95f1958cee576a4a0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/84ed2f0d88ece07075010bba0c167b7f10255b8043408ff95f1958cee576a4a0/userdata/shm major:0 minor:264 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/89df1c468dcab6a092003dfdb9054af97d313c0e8ba73ec68b30b7001eab90ba/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/89df1c468dcab6a092003dfdb9054af97d313c0e8ba73ec68b30b7001eab90ba/userdata/shm major:0 minor:276 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8ac7f6216c5921740646509c9d1e443feacb80b056e20b3a4f138b334049ff2c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8ac7f6216c5921740646509c9d1e443feacb80b056e20b3a4f138b334049ff2c/userdata/shm major:0 minor:822 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8b160a1a52470caaf8eb5167c80599083e3f1829f2580cc4817859648d8bb802/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8b160a1a52470caaf8eb5167c80599083e3f1829f2580cc4817859648d8bb802/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/954ede16a95baa0dd18c714681dfe7d875a3e3012701640009a8298afe790b4b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/954ede16a95baa0dd18c714681dfe7d875a3e3012701640009a8298afe790b4b/userdata/shm major:0 minor:925 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/9a72b8977a8a7f6da552724471a9890da5b8ee5f4a6fe88fb55492ca16eb4221/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9a72b8977a8a7f6da552724471a9890da5b8ee5f4a6fe88fb55492ca16eb4221/userdata/shm major:0 minor:441 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b31a84101a7e9f8571fe0abea4a9c0ac92d862991255d66df670219d8949bf71/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b31a84101a7e9f8571fe0abea4a9c0ac92d862991255d66df670219d8949bf71/userdata/shm major:0 minor:825 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b4bf04cf4a874cdad02ad51f153ce323d3cc5a93749aa40aeabd5ac11d70f65e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b4bf04cf4a874cdad02ad51f153ce323d3cc5a93749aa40aeabd5ac11d70f65e/userdata/shm major:0 minor:119 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b5d29a971edd0c0a90849227d71d2a1720436090bfc1809b33b6d52cfd6a7ffe/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b5d29a971edd0c0a90849227d71d2a1720436090bfc1809b33b6d52cfd6a7ffe/userdata/shm major:0 minor:377 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b7dd57861a640edcd653a07f56af27e128f51a36c5d7dfe7a1115c64bac8ba80/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b7dd57861a640edcd653a07f56af27e128f51a36c5d7dfe7a1115c64bac8ba80/userdata/shm major:0 minor:1075 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b80d357d31adb7df8c525b85923de87b5edd8dd7bfe7187f3b2e54a41c8d8b6f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b80d357d31adb7df8c525b85923de87b5edd8dd7bfe7187f3b2e54a41c8d8b6f/userdata/shm major:0 minor:590 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b8ab4adb571de7e6d61b60e1752c759892824492154b5310933386ea2f807133/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b8ab4adb571de7e6d61b60e1752c759892824492154b5310933386ea2f807133/userdata/shm major:0 minor:927 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b9477b33d342b45771f3690cbbe221e1438e0d225ffd950edeb419c6de979401/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b9477b33d342b45771f3690cbbe221e1438e0d225ffd950edeb419c6de979401/userdata/shm major:0 minor:106 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/be807ecce9aec0f7633eaae2ed5203cb82f342ed739dc26f098d55766e987b78/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/be807ecce9aec0f7633eaae2ed5203cb82f342ed739dc26f098d55766e987b78/userdata/shm major:0 minor:1123 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bf281270c03af27a5f2d97eebdf0d4e36fa1955f5f7ca7b9f757a4d7f448ea9a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bf281270c03af27a5f2d97eebdf0d4e36fa1955f5f7ca7b9f757a4d7f448ea9a/userdata/shm major:0 minor:1068 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c364dba2c743db6a6431b4c04a672e744dc16c7056590a2f4b28394bd78f6fc7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c364dba2c743db6a6431b4c04a672e744dc16c7056590a2f4b28394bd78f6fc7/userdata/shm major:0 minor:1164 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ca37f4d8890aea843e2dd74f0a3fbd57188dcf29ebff0755845d7039996af375/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ca37f4d8890aea843e2dd74f0a3fbd57188dcf29ebff0755845d7039996af375/userdata/shm major:0 minor:640 
fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cafdcda3b6318eaf62d6677cace1d3a0a0dbcf1889d817ce08bcc768e4b05288/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cafdcda3b6318eaf62d6677cace1d3a0a0dbcf1889d817ce08bcc768e4b05288/userdata/shm major:0 minor:268 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d29fd7441baad9596ad5ac5569da64fe277e18af3046f4e5da7f49044fe8fd7f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d29fd7441baad9596ad5ac5569da64fe277e18af3046f4e5da7f49044fe8fd7f/userdata/shm major:0 minor:485 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d4e38c98fa8bce43dfe4e7719d598500071054bc18ba5987f14232cdc265f588/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d4e38c98fa8bce43dfe4e7719d598500071054bc18ba5987f14232cdc265f588/userdata/shm major:0 minor:627 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d6af7e6099bbf70f75032e24a3ecb1fbb8bf546e42f6f7c74e6f9a42396249e8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d6af7e6099bbf70f75032e24a3ecb1fbb8bf546e42f6f7c74e6f9a42396249e8/userdata/shm major:0 minor:261 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dc31fac048987256095251eb1c41dfbd7ba8f1030acd608588347d150bf4c3c7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dc31fac048987256095251eb1c41dfbd7ba8f1030acd608588347d150bf4c3c7/userdata/shm major:0 minor:889 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/de69f46fe324ea455cfa701ce77df2ff17c8f9f38f189dbc84ded004836d5af0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/de69f46fe324ea455cfa701ce77df2ff17c8f9f38f189dbc84ded004836d5af0/userdata/shm major:0 minor:109 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/de6a10425187cbc938b44bf02e39e9ceb0c27562adc9c491a8cdb29f071cbb62/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/de6a10425187cbc938b44bf02e39e9ceb0c27562adc9c491a8cdb29f071cbb62/userdata/shm major:0 minor:458 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e1150baa290a3898ec8c1b3b3de0ed9b6af20668ee360ed4984852f84f153bb0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e1150baa290a3898ec8c1b3b3de0ed9b6af20668ee360ed4984852f84f153bb0/userdata/shm major:0 minor:518 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e6ef8104a726a85f4fa80186a64ea3c00a2cbb1be2c668fb9e94709c10d980c0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e6ef8104a726a85f4fa80186a64ea3c00a2cbb1be2c668fb9e94709c10d980c0/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e95d63ff24c648e1680e38f824e92cec0fcea7bc20cdade312b98b0468aad916/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e95d63ff24c648e1680e38f824e92cec0fcea7bc20cdade312b98b0468aad916/userdata/shm major:0 minor:46 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ea807ec97b5b85d57bfd1e0adda9e020d25ab20667140eb00ae9510d72b84498/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ea807ec97b5b85d57bfd1e0adda9e020d25ab20667140eb00ae9510d72b84498/userdata/shm major:0 minor:860 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/ef65cfa8e397b0d9fb626793071be85235d45f48e759141f7e306d3f038d0b06/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ef65cfa8e397b0d9fb626793071be85235d45f48e759141f7e306d3f038d0b06/userdata/shm major:0 minor:599 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/efd1c78ff9997efb11562e8d2fb6b9b151d43775e34fa6be423195823f01520e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/efd1c78ff9997efb11562e8d2fb6b9b151d43775e34fa6be423195823f01520e/userdata/shm major:0 minor:809 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f37f04bee18930433857e4757f6c0b0cea46719c10be7aeeafbea9a7d2df628f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f37f04bee18930433857e4757f6c0b0cea46719c10be7aeeafbea9a7d2df628f/userdata/shm major:0 minor:456 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f381b85f9130b76eda5dc167d27eb69ac9b6f2de032bdb231577387d3f19b35d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f381b85f9130b76eda5dc167d27eb69ac9b6f2de032bdb231577387d3f19b35d/userdata/shm major:0 minor:97 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f40dd28398740e1b8b665d870680e26bbfe5f4e3541ded3a1a95c827cd013960/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f40dd28398740e1b8b665d870680e26bbfe5f4e3541ded3a1a95c827cd013960/userdata/shm major:0 minor:944 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fa112877e7809f3added7e93999d2d52089456dfb6885e6498c6e53ce0c53ded/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fa112877e7809f3added7e93999d2d52089456dfb6885e6498c6e53ce0c53ded/userdata/shm major:0 minor:828 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fcd57352498da84e6fbc9969ab5176b5b32433301a69ada5c5c0571371a536da/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fcd57352498da84e6fbc9969ab5176b5b32433301a69ada5c5c0571371a536da/userdata/shm major:0 minor:368 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fe4ada978b72bf0ece9f4bc3e07bb79fded8b5a5f73d4c83d93ade89f41d9473/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fe4ada978b72bf0ece9f4bc3e07bb79fded8b5a5f73d4c83d93ade89f41d9473/userdata/shm major:0 minor:637 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0316c374-f812-4e0a-8645-727e8372f16e/volumes/kubernetes.io~projected/kube-api-access-tvvk8:{mountpoint:/var/lib/kubelet/pods/0316c374-f812-4e0a-8645-727e8372f16e/volumes/kubernetes.io~projected/kube-api-access-tvvk8 major:0 minor:890 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/06df1b1b-154e-46f9-aee0-79a137c6c928/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/06df1b1b-154e-46f9-aee0-79a137c6c928/volumes/kubernetes.io~projected/kube-api-access major:0 minor:233 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/06df1b1b-154e-46f9-aee0-79a137c6c928/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/06df1b1b-154e-46f9-aee0-79a137c6c928/volumes/kubernetes.io~secret/serving-cert major:0 minor:213 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/06f67c28-34fd-4356-92f0-edd0986ad34e/volumes/kubernetes.io~projected/kube-api-access-bdpj4:{mountpoint:/var/lib/kubelet/pods/06f67c28-34fd-4356-92f0-edd0986ad34e/volumes/kubernetes.io~projected/kube-api-access-bdpj4 major:0 minor:278 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/0e25d4ed-4ad0-4706-ad25-7822c9a1d07e/volumes/kubernetes.io~projected/kube-api-access-r9k5t:{mountpoint:/var/lib/kubelet/pods/0e25d4ed-4ad0-4706-ad25-7822c9a1d07e/volumes/kubernetes.io~projected/kube-api-access-r9k5t major:0 minor:914 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0e25d4ed-4ad0-4706-ad25-7822c9a1d07e/volumes/kubernetes.io~secret/certs:{mountpoint:/var/lib/kubelet/pods/0e25d4ed-4ad0-4706-ad25-7822c9a1d07e/volumes/kubernetes.io~secret/certs major:0 minor:913 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0e25d4ed-4ad0-4706-ad25-7822c9a1d07e/volumes/kubernetes.io~secret/node-bootstrap-token:{mountpoint:/var/lib/kubelet/pods/0e25d4ed-4ad0-4706-ad25-7822c9a1d07e/volumes/kubernetes.io~secret/node-bootstrap-token major:0 minor:908 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0ed7eded-1e67-49ad-9777-c2ed1e006ce3/volumes/kubernetes.io~projected/kube-api-access-jnp9l:{mountpoint:/var/lib/kubelet/pods/0ed7eded-1e67-49ad-9777-c2ed1e006ce3/volumes/kubernetes.io~projected/kube-api-access-jnp9l major:0 minor:115 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0f97d998-530c-4d9d-a030-ca1d9d2d4490/volumes/kubernetes.io~projected/kube-api-access-zntzt:{mountpoint:/var/lib/kubelet/pods/0f97d998-530c-4d9d-a030-ca1d9d2d4490/volumes/kubernetes.io~projected/kube-api-access-zntzt major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0f97d998-530c-4d9d-a030-ca1d9d2d4490/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/0f97d998-530c-4d9d-a030-ca1d9d2d4490/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1089ea24-add9-482e-9276-e6ded12052d7/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/1089ea24-add9-482e-9276-e6ded12052d7/volumes/kubernetes.io~projected/kube-api-access major:0 minor:236 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1089ea24-add9-482e-9276-e6ded12052d7/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1089ea24-add9-482e-9276-e6ded12052d7/volumes/kubernetes.io~secret/serving-cert major:0 minor:216 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/13503fef-09b2-4dbe-9537-a5b361e7b591/volumes/kubernetes.io~projected/kube-api-access-mgdlc:{mountpoint:/var/lib/kubelet/pods/13503fef-09b2-4dbe-9537-a5b361e7b591/volumes/kubernetes.io~projected/kube-api-access-mgdlc major:0 minor:584 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/13503fef-09b2-4dbe-9537-a5b361e7b591/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/13503fef-09b2-4dbe-9537-a5b361e7b591/volumes/kubernetes.io~secret/encryption-config major:0 minor:581 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/13503fef-09b2-4dbe-9537-a5b361e7b591/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/13503fef-09b2-4dbe-9537-a5b361e7b591/volumes/kubernetes.io~secret/etcd-client major:0 minor:583 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/13503fef-09b2-4dbe-9537-a5b361e7b591/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/13503fef-09b2-4dbe-9537-a5b361e7b591/volumes/kubernetes.io~secret/serving-cert major:0 minor:582 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/19de6601-10d4-4112-a21f-0398d2b160d1/volumes/kubernetes.io~projected/kube-api-access-6xpc2:{mountpoint:/var/lib/kubelet/pods/19de6601-10d4-4112-a21f-0398d2b160d1/volumes/kubernetes.io~projected/kube-api-access-6xpc2 major:0 minor:249 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/19de6601-10d4-4112-a21f-0398d2b160d1/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/19de6601-10d4-4112-a21f-0398d2b160d1/volumes/kubernetes.io~secret/cert major:0 minor:618 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/19de6601-10d4-4112-a21f-0398d2b160d1/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls:{mountpoint:/var/lib/kubelet/pods/19de6601-10d4-4112-a21f-0398d2b160d1/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls major:0 minor:620 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1c2a33ba-76d0-4b81-a41d-9da16fd46209/volumes/kubernetes.io~projected/kube-api-access-k8n22:{mountpoint:/var/lib/kubelet/pods/1c2a33ba-76d0-4b81-a41d-9da16fd46209/volumes/kubernetes.io~projected/kube-api-access-k8n22 major:0 minor:1163 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1c2a33ba-76d0-4b81-a41d-9da16fd46209/volumes/kubernetes.io~secret/webhook-certs:{mountpoint:/var/lib/kubelet/pods/1c2a33ba-76d0-4b81-a41d-9da16fd46209/volumes/kubernetes.io~secret/webhook-certs major:0 minor:1156 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2151eb84-177e-459c-be71-f48465323ac2/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/2151eb84-177e-459c-be71-f48465323ac2/volumes/kubernetes.io~projected/kube-api-access major:0 minor:234 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2151eb84-177e-459c-be71-f48465323ac2/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/2151eb84-177e-459c-be71-f48465323ac2/volumes/kubernetes.io~secret/serving-cert major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/284768b8-9d70-4cf7-bace-8adc6b587186/volumes/kubernetes.io~projected/kube-api-access-8p6vn:{mountpoint:/var/lib/kubelet/pods/284768b8-9d70-4cf7-bace-8adc6b587186/volumes/kubernetes.io~projected/kube-api-access-8p6vn major:0 minor:104 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/284768b8-9d70-4cf7-bace-8adc6b587186/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/284768b8-9d70-4cf7-bace-8adc6b587186/volumes/kubernetes.io~secret/metrics-tls major:0 minor:98 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2b87f8c3-1898-46dd-bcac-e8f22f31e812/volumes/kubernetes.io~projected/kube-api-access-kbddm:{mountpoint:/var/lib/kubelet/pods/2b87f8c3-1898-46dd-bcac-e8f22f31e812/volumes/kubernetes.io~projected/kube-api-access-kbddm major:0 minor:689 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2b87f8c3-1898-46dd-bcac-e8f22f31e812/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/2b87f8c3-1898-46dd-bcac-e8f22f31e812/volumes/kubernetes.io~secret/proxy-tls major:0 minor:688 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/311b8bab-6cee-406d-8e0e-5b18a743d5fa/volumes/kubernetes.io~projected/kube-api-access-hjfpq:{mountpoint:/var/lib/kubelet/pods/311b8bab-6cee-406d-8e0e-5b18a743d5fa/volumes/kubernetes.io~projected/kube-api-access-hjfpq major:0 minor:868 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/311b8bab-6cee-406d-8e0e-5b18a743d5fa/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/311b8bab-6cee-406d-8e0e-5b18a743d5fa/volumes/kubernetes.io~secret/proxy-tls major:0 minor:867 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3661faaa-2c9d-4fcd-a41f-71aa71a2e464/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/3661faaa-2c9d-4fcd-a41f-71aa71a2e464/volumes/kubernetes.io~projected/kube-api-access major:0 minor:695 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/3661faaa-2c9d-4fcd-a41f-71aa71a2e464/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/3661faaa-2c9d-4fcd-a41f-71aa71a2e464/volumes/kubernetes.io~secret/serving-cert major:0 minor:694 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/36e5fec9-7fb5-4460-8bb4-4b9e36fae978/volumes/kubernetes.io~projected/kube-api-access-z9hck:{mountpoint:/var/lib/kubelet/pods/36e5fec9-7fb5-4460-8bb4-4b9e36fae978/volumes/kubernetes.io~projected/kube-api-access-z9hck major:0 minor:347 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/36e5fec9-7fb5-4460-8bb4-4b9e36fae978/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/36e5fec9-7fb5-4460-8bb4-4b9e36fae978/volumes/kubernetes.io~secret/cert major:0 minor:342 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/398bcaca-1bea-4633-a78f-717e3d015ddd/volumes/kubernetes.io~projected/kube-api-access-fhqhb:{mountpoint:/var/lib/kubelet/pods/398bcaca-1bea-4633-a78f-717e3d015ddd/volumes/kubernetes.io~projected/kube-api-access-fhqhb major:0 minor:123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/398bcaca-1bea-4633-a78f-717e3d015ddd/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/398bcaca-1bea-4633-a78f-717e3d015ddd/volumes/kubernetes.io~secret/metrics-certs major:0 minor:619 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3a6b082a-649b-43f6-8e24-cf222873fe39/volumes/kubernetes.io~projected/kube-api-access-srbt4:{mountpoint:/var/lib/kubelet/pods/3a6b082a-649b-43f6-8e24-cf222873fe39/volumes/kubernetes.io~projected/kube-api-access-srbt4 major:0 minor:833 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3a6b082a-649b-43f6-8e24-cf222873fe39/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/3a6b082a-649b-43f6-8e24-cf222873fe39/volumes/kubernetes.io~secret/serving-cert major:0 minor:827 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4264e82c-387f-4aa6-9ef6-b7beb61e098c/volumes/kubernetes.io~projected/kube-api-access-8wfsr:{mountpoint:/var/lib/kubelet/pods/4264e82c-387f-4aa6-9ef6-b7beb61e098c/volumes/kubernetes.io~projected/kube-api-access-8wfsr major:0 minor:787 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4264e82c-387f-4aa6-9ef6-b7beb61e098c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/4264e82c-387f-4aa6-9ef6-b7beb61e098c/volumes/kubernetes.io~secret/serving-cert major:0 minor:731 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/44469a78-9300-4260-89e9-ea939de1357b/volumes/kubernetes.io~projected/kube-api-access-t7zpw:{mountpoint:/var/lib/kubelet/pods/44469a78-9300-4260-89e9-ea939de1357b/volumes/kubernetes.io~projected/kube-api-access-t7zpw major:0 minor:811 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/44469a78-9300-4260-89e9-ea939de1357b/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls:{mountpoint:/var/lib/kubelet/pods/44469a78-9300-4260-89e9-ea939de1357b/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls major:0 minor:806 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4800b72f-7e54-4069-b771-87fb459eeb78/volumes/kubernetes.io~projected/kube-api-access-4lkzv:{mountpoint:/var/lib/kubelet/pods/4800b72f-7e54-4069-b771-87fb459eeb78/volumes/kubernetes.io~projected/kube-api-access-4lkzv major:0 minor:605 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5238840f-3bef-43ad-ae68-ac187f073019/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/5238840f-3bef-43ad-ae68-ac187f073019/volumes/kubernetes.io~projected/ca-certs major:0 minor:494 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/5238840f-3bef-43ad-ae68-ac187f073019/volumes/kubernetes.io~projected/kube-api-access-vxdts:{mountpoint:/var/lib/kubelet/pods/5238840f-3bef-43ad-ae68-ac187f073019/volumes/kubernetes.io~projected/kube-api-access-vxdts major:0 minor:493 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/616dbb32-6b65-4e44-a217-6b1be2844cc9/volumes/kubernetes.io~projected/kube-api-access-7g6zz:{mountpoint:/var/lib/kubelet/pods/616dbb32-6b65-4e44-a217-6b1be2844cc9/volumes/kubernetes.io~projected/kube-api-access-7g6zz major:0 minor:380 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/63c12a89-1b49-4eba-8f5a-551b10d2246b/volumes/kubernetes.io~projected/kube-api-access-bst2w:{mountpoint:/var/lib/kubelet/pods/63c12a89-1b49-4eba-8f5a-551b10d2246b/volumes/kubernetes.io~projected/kube-api-access-bst2w major:0 minor:238 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/63c12a89-1b49-4eba-8f5a-551b10d2246b/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/63c12a89-1b49-4eba-8f5a-551b10d2246b/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:443 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/63c12a89-1b49-4eba-8f5a-551b10d2246b/volumes/kubernetes.io~secret/node-tuning-operator-tls:{mountpoint:/var/lib/kubelet/pods/63c12a89-1b49-4eba-8f5a-551b10d2246b/volumes/kubernetes.io~secret/node-tuning-operator-tls major:0 minor:453 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/661b8957-a890-4032-9e57-45e2e0b35249/volumes/kubernetes.io~projected/kube-api-access-8hq8f:{mountpoint:/var/lib/kubelet/pods/661b8957-a890-4032-9e57-45e2e0b35249/volumes/kubernetes.io~projected/kube-api-access-8hq8f major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/661b8957-a890-4032-9e57-45e2e0b35249/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/661b8957-a890-4032-9e57-45e2e0b35249/volumes/kubernetes.io~secret/serving-cert major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/667757ee-2670-4019-ad93-156521d3c2e7/volumes/kubernetes.io~projected/kube-api-access-rc94p:{mountpoint:/var/lib/kubelet/pods/667757ee-2670-4019-ad93-156521d3c2e7/volumes/kubernetes.io~projected/kube-api-access-rc94p major:0 minor:790 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/667757ee-2670-4019-ad93-156521d3c2e7/volumes/kubernetes.io~secret/samples-operator-tls:{mountpoint:/var/lib/kubelet/pods/667757ee-2670-4019-ad93-156521d3c2e7/volumes/kubernetes.io~secret/samples-operator-tls major:0 minor:763 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6863b35c-44ac-4333-97b5-e8e38b440a20/volumes/kubernetes.io~projected/kube-api-access-ddl8k:{mountpoint:/var/lib/kubelet/pods/6863b35c-44ac-4333-97b5-e8e38b440a20/volumes/kubernetes.io~projected/kube-api-access-ddl8k major:0 minor:402 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6863b35c-44ac-4333-97b5-e8e38b440a20/volumes/kubernetes.io~secret/signing-key:{mountpoint:/var/lib/kubelet/pods/6863b35c-44ac-4333-97b5-e8e38b440a20/volumes/kubernetes.io~secret/signing-key major:0 minor:401 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6db3fcbe-0dbf-464f-944b-62427173c8d3/volumes/kubernetes.io~projected/kube-api-access-lllml:{mountpoint:/var/lib/kubelet/pods/6db3fcbe-0dbf-464f-944b-62427173c8d3/volumes/kubernetes.io~projected/kube-api-access-lllml major:0 minor:1117 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6db3fcbe-0dbf-464f-944b-62427173c8d3/volumes/kubernetes.io~secret/client-ca-bundle:{mountpoint:/var/lib/kubelet/pods/6db3fcbe-0dbf-464f-944b-62427173c8d3/volumes/kubernetes.io~secret/client-ca-bundle major:0 minor:1114 
fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6db3fcbe-0dbf-464f-944b-62427173c8d3/volumes/kubernetes.io~secret/secret-metrics-client-certs:{mountpoint:/var/lib/kubelet/pods/6db3fcbe-0dbf-464f-944b-62427173c8d3/volumes/kubernetes.io~secret/secret-metrics-client-certs major:0 minor:1116 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6db3fcbe-0dbf-464f-944b-62427173c8d3/volumes/kubernetes.io~secret/secret-metrics-server-tls:{mountpoint:/var/lib/kubelet/pods/6db3fcbe-0dbf-464f-944b-62427173c8d3/volumes/kubernetes.io~secret/secret-metrics-server-tls major:0 minor:1115 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7044a7b3-4fac-40af-a31c-054a1a1db26b/volumes/kubernetes.io~projected/kube-api-access-shfs6:{mountpoint:/var/lib/kubelet/pods/7044a7b3-4fac-40af-a31c-054a1a1db26b/volumes/kubernetes.io~projected/kube-api-access-shfs6 major:0 minor:105 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7241bf11-192e-47db-9d80-2324938ed34c/volumes/kubernetes.io~projected/kube-api-access-s5mkm:{mountpoint:/var/lib/kubelet/pods/7241bf11-192e-47db-9d80-2324938ed34c/volumes/kubernetes.io~projected/kube-api-access-s5mkm major:0 minor:231 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7241bf11-192e-47db-9d80-2324938ed34c/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls:{mountpoint:/var/lib/kubelet/pods/7241bf11-192e-47db-9d80-2324938ed34c/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls major:0 minor:614 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7383e647-63b0-452d-a39b-02ad27a9b053/volumes/kubernetes.io~projected/kube-api-access-2xz8h:{mountpoint:/var/lib/kubelet/pods/7383e647-63b0-452d-a39b-02ad27a9b053/volumes/kubernetes.io~projected/kube-api-access-2xz8h major:0 minor:588 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7b2ecb08-a0f9-4127-967c-7087dea4c0f6/volumes/kubernetes.io~projected/kube-api-access-dxw6t:{mountpoint:/var/lib/kubelet/pods/7b2ecb08-a0f9-4127-967c-7087dea4c0f6/volumes/kubernetes.io~projected/kube-api-access-dxw6t major:0 minor:863 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7b2ecb08-a0f9-4127-967c-7087dea4c0f6/volumes/kubernetes.io~secret/machine-api-operator-tls:{mountpoint:/var/lib/kubelet/pods/7b2ecb08-a0f9-4127-967c-7087dea4c0f6/volumes/kubernetes.io~secret/machine-api-operator-tls major:0 minor:848 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58/volumes/kubernetes.io~projected/kube-api-access-vm9zf:{mountpoint:/var/lib/kubelet/pods/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58/volumes/kubernetes.io~projected/kube-api-access-vm9zf major:0 minor:923 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58/volumes/kubernetes.io~secret/federate-client-tls:{mountpoint:/var/lib/kubelet/pods/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58/volumes/kubernetes.io~secret/federate-client-tls major:0 minor:749 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58/volumes/kubernetes.io~secret/secret-telemeter-client:{mountpoint:/var/lib/kubelet/pods/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58/volumes/kubernetes.io~secret/secret-telemeter-client major:0 minor:362 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config major:0 minor:916 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58/volumes/kubernetes.io~secret/telemeter-client-tls:{mountpoint:/var/lib/kubelet/pods/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58/volumes/kubernetes.io~secret/telemeter-client-tls major:0 minor:373 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/82b98dca-59f9-42be-94ca-4a2a2b6fea0f/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/82b98dca-59f9-42be-94ca-4a2a2b6fea0f/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/82b98dca-59f9-42be-94ca-4a2a2b6fea0f/volumes/kubernetes.io~projected/kube-api-access-c5bmd:{mountpoint:/var/lib/kubelet/pods/82b98dca-59f9-42be-94ca-4a2a2b6fea0f/volumes/kubernetes.io~projected/kube-api-access-c5bmd major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/82b98dca-59f9-42be-94ca-4a2a2b6fea0f/volumes/kubernetes.io~secret/image-registry-operator-tls:{mountpoint:/var/lib/kubelet/pods/82b98dca-59f9-42be-94ca-4a2a2b6fea0f/volumes/kubernetes.io~secret/image-registry-operator-tls major:0 minor:454 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8414b6b0-ee16-47a5-982b-ee58b136cfcf/volumes/kubernetes.io~projected/kube-api-access-864rg:{mountpoint:/var/lib/kubelet/pods/8414b6b0-ee16-47a5-982b-ee58b136cfcf/volumes/kubernetes.io~projected/kube-api-access-864rg major:0 minor:139 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8414b6b0-ee16-47a5-982b-ee58b136cfcf/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/8414b6b0-ee16-47a5-982b-ee58b136cfcf/volumes/kubernetes.io~secret/webhook-cert major:0 minor:138 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/86884445-e29b-492b-8810-b63b938b9170/volumes/kubernetes.io~projected/kube-api-access-5kcbw:{mountpoint:/var/lib/kubelet/pods/86884445-e29b-492b-8810-b63b938b9170/volumes/kubernetes.io~projected/kube-api-access-5kcbw major:0 minor:953 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/86884445-e29b-492b-8810-b63b938b9170/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/86884445-e29b-492b-8810-b63b938b9170/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config major:0 minor:952 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/86884445-e29b-492b-8810-b63b938b9170/volumes/kubernetes.io~secret/prometheus-operator-tls:{mountpoint:/var/lib/kubelet/pods/86884445-e29b-492b-8810-b63b938b9170/volumes/kubernetes.io~secret/prometheus-operator-tls major:0 minor:951 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04/volumes/kubernetes.io~projected/kube-api-access-hwfg5:{mountpoint:/var/lib/kubelet/pods/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04/volumes/kubernetes.io~projected/kube-api-access-hwfg5 major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04/volumes/kubernetes.io~secret/srv-cert major:0 minor:613 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/882fd952-1914-47be-96bf-cac6341ca877/volumes/kubernetes.io~secret/tls-certificates:{mountpoint:/var/lib/kubelet/pods/882fd952-1914-47be-96bf-cac6341ca877/volumes/kubernetes.io~secret/tls-certificates major:0 minor:885 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/91112ce6-4f9d-44c1-a4e7-fea126554bcf/volumes/kubernetes.io~projected/kube-api-access-8hrkb:{mountpoint:/var/lib/kubelet/pods/91112ce6-4f9d-44c1-a4e7-fea126554bcf/volumes/kubernetes.io~projected/kube-api-access-8hrkb major:0 minor:888 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/91112ce6-4f9d-44c1-a4e7-fea126554bcf/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/91112ce6-4f9d-44c1-a4e7-fea126554bcf/volumes/kubernetes.io~secret/default-certificate major:0 minor:886 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/91112ce6-4f9d-44c1-a4e7-fea126554bcf/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/91112ce6-4f9d-44c1-a4e7-fea126554bcf/volumes/kubernetes.io~secret/metrics-certs major:0 minor:879 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/91112ce6-4f9d-44c1-a4e7-fea126554bcf/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/91112ce6-4f9d-44c1-a4e7-fea126554bcf/volumes/kubernetes.io~secret/stats-auth major:0 minor:887 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/919daf8d-763a-44bc-8916-86b425a27cbd/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/919daf8d-763a-44bc-8916-86b425a27cbd/volumes/kubernetes.io~projected/ca-certs major:0 minor:490 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/919daf8d-763a-44bc-8916-86b425a27cbd/volumes/kubernetes.io~projected/kube-api-access-8brwr:{mountpoint:/var/lib/kubelet/pods/919daf8d-763a-44bc-8916-86b425a27cbd/volumes/kubernetes.io~projected/kube-api-access-8brwr major:0 minor:492 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/919daf8d-763a-44bc-8916-86b425a27cbd/volumes/kubernetes.io~secret/catalogserver-certs:{mountpoint:/var/lib/kubelet/pods/919daf8d-763a-44bc-8916-86b425a27cbd/volumes/kubernetes.io~secret/catalogserver-certs major:0 minor:384 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/944eac68-e72b-4aed-b5dc-d7d9703178a3/volumes/kubernetes.io~projected/kube-api-access-m2mdn:{mountpoint:/var/lib/kubelet/pods/944eac68-e72b-4aed-b5dc-d7d9703178a3/volumes/kubernetes.io~projected/kube-api-access-m2mdn major:0 minor:318 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9702fc8c-4fe0-413b-b2d4-db23021d42b8/volumes/kubernetes.io~projected/kube-api-access-tpdts:{mountpoint:/var/lib/kubelet/pods/9702fc8c-4fe0-413b-b2d4-db23021d42b8/volumes/kubernetes.io~projected/kube-api-access-tpdts major:0 minor:251 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9702fc8c-4fe0-413b-b2d4-db23021d42b8/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/9702fc8c-4fe0-413b-b2d4-db23021d42b8/volumes/kubernetes.io~secret/etcd-client major:0 minor:223 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9702fc8c-4fe0-413b-b2d4-db23021d42b8/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/9702fc8c-4fe0-413b-b2d4-db23021d42b8/volumes/kubernetes.io~secret/serving-cert major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/979ba8cc-5a7b-4188-bf9e-c22d810888e9/volumes/kubernetes.io~projected/kube-api-access-28ljd:{mountpoint:/var/lib/kubelet/pods/979ba8cc-5a7b-4188-bf9e-c22d810888e9/volumes/kubernetes.io~projected/kube-api-access-28ljd major:0 minor:491 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/979ba8cc-5a7b-4188-bf9e-c22d810888e9/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/979ba8cc-5a7b-4188-bf9e-c22d810888e9/volumes/kubernetes.io~secret/encryption-config major:0 minor:489 fsType:tmpfs blockSize:0} Mar 19 12:14:21.551679 master-0 kubenswrapper[31830]: 
/var/lib/kubelet/pods/979ba8cc-5a7b-4188-bf9e-c22d810888e9/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/979ba8cc-5a7b-4188-bf9e-c22d810888e9/volumes/kubernetes.io~secret/etcd-client major:0 minor:488 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/979ba8cc-5a7b-4188-bf9e-c22d810888e9/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/979ba8cc-5a7b-4188-bf9e-c22d810888e9/volumes/kubernetes.io~secret/serving-cert major:0 minor:487 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d2db220-4d5b-4819-a910-b186e1e9fb3e/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/9d2db220-4d5b-4819-a910-b186e1e9fb3e/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d2db220-4d5b-4819-a910-b186e1e9fb3e/volumes/kubernetes.io~projected/kube-api-access-wshb2:{mountpoint:/var/lib/kubelet/pods/9d2db220-4d5b-4819-a910-b186e1e9fb3e/volumes/kubernetes.io~projected/kube-api-access-wshb2 major:0 minor:152 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d2db220-4d5b-4819-a910-b186e1e9fb3e/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/9d2db220-4d5b-4819-a910-b186e1e9fb3e/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:129 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9ed2dbd1-aec4-4009-917a-933533912ab5/volumes/kubernetes.io~projected/kube-api-access-gsk9d:{mountpoint:/var/lib/kubelet/pods/9ed2dbd1-aec4-4009-917a-933533912ab5/volumes/kubernetes.io~projected/kube-api-access-gsk9d major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9ed2dbd1-aec4-4009-917a-933533912ab5/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/9ed2dbd1-aec4-4009-917a-933533912ab5/volumes/kubernetes.io~secret/serving-cert major:0 minor:224 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a7747954-a222-4809-8656-818203b55ee8/volumes/kubernetes.io~projected/kube-api-access-khv2z:{mountpoint:/var/lib/kubelet/pods/a7747954-a222-4809-8656-818203b55ee8/volumes/kubernetes.io~projected/kube-api-access-khv2z major:0 minor:225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a9d191d1-631d-4091-af8b-382283c18a5a/volumes/kubernetes.io~projected/kube-api-access-cq9p4:{mountpoint:/var/lib/kubelet/pods/a9d191d1-631d-4091-af8b-382283c18a5a/volumes/kubernetes.io~projected/kube-api-access-cq9p4 major:0 minor:1060 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a9d191d1-631d-4091-af8b-382283c18a5a/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/a9d191d1-631d-4091-af8b-382283c18a5a/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config major:0 minor:1053 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a9d191d1-631d-4091-af8b-382283c18a5a/volumes/kubernetes.io~secret/node-exporter-tls:{mountpoint:/var/lib/kubelet/pods/a9d191d1-631d-4091-af8b-382283c18a5a/volumes/kubernetes.io~secret/node-exporter-tls major:0 minor:1056 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab54833d-e57b-479d-b171-68155f6566f1/volumes/kubernetes.io~projected/kube-api-access-gl6d7:{mountpoint:/var/lib/kubelet/pods/ab54833d-e57b-479d-b171-68155f6566f1/volumes/kubernetes.io~projected/kube-api-access-gl6d7 major:0 minor:232 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab54833d-e57b-479d-b171-68155f6566f1/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/ab54833d-e57b-479d-b171-68155f6566f1/volumes/kubernetes.io~secret/metrics-tls major:0 minor:432
fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ad327a59-7879-4215-bb95-3f2be64cb97f/volumes/kubernetes.io~projected/kube-api-access-9fgj5:{mountpoint:/var/lib/kubelet/pods/ad327a59-7879-4215-bb95-3f2be64cb97f/volumes/kubernetes.io~projected/kube-api-access-9fgj5 major:0 minor:786 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ad327a59-7879-4215-bb95-3f2be64cb97f/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/ad327a59-7879-4215-bb95-3f2be64cb97f/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert major:0 minor:736 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/aef8e03f-0363-4e13-b7ca-4fa871d77c62/volumes/kubernetes.io~projected/kube-api-access-x252z:{mountpoint:/var/lib/kubelet/pods/aef8e03f-0363-4e13-b7ca-4fa871d77c62/volumes/kubernetes.io~projected/kube-api-access-x252z major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/aef8e03f-0363-4e13-b7ca-4fa871d77c62/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/aef8e03f-0363-4e13-b7ca-4fa871d77c62/volumes/kubernetes.io~secret/serving-cert major:0 minor:222 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b0f5939c-48b1-4d6c-9712-9128a78d603b/volumes/kubernetes.io~projected/kube-api-access-6tqdb:{mountpoint:/var/lib/kubelet/pods/b0f5939c-48b1-4d6c-9712-9128a78d603b/volumes/kubernetes.io~projected/kube-api-access-6tqdb major:0 minor:237 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b0f5939c-48b1-4d6c-9712-9128a78d603b/volumes/kubernetes.io~secret/marketplace-operator-metrics:{mountpoint:/var/lib/kubelet/pods/b0f5939c-48b1-4d6c-9712-9128a78d603b/volumes/kubernetes.io~secret/marketplace-operator-metrics major:0 minor:612 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b80027fd-7b39-477a-a337-ff9bb08e7eeb/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/b80027fd-7b39-477a-a337-ff9bb08e7eeb/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:228 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b80027fd-7b39-477a-a337-ff9bb08e7eeb/volumes/kubernetes.io~projected/kube-api-access-hs4jf:{mountpoint:/var/lib/kubelet/pods/b80027fd-7b39-477a-a337-ff9bb08e7eeb/volumes/kubernetes.io~projected/kube-api-access-hs4jf major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b80027fd-7b39-477a-a337-ff9bb08e7eeb/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/b80027fd-7b39-477a-a337-ff9bb08e7eeb/volumes/kubernetes.io~secret/metrics-tls major:0 minor:617 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bb1000ab-4419-43ce-b1b7-8f43413e017f/volumes/kubernetes.io~projected/kube-api-access-6hk8l:{mountpoint:/var/lib/kubelet/pods/bb1000ab-4419-43ce-b1b7-8f43413e017f/volumes/kubernetes.io~projected/kube-api-access-6hk8l major:0 minor:1059 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bb1000ab-4419-43ce-b1b7-8f43413e017f/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/bb1000ab-4419-43ce-b1b7-8f43413e017f/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config major:0 minor:1057 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bb1000ab-4419-43ce-b1b7-8f43413e017f/volumes/kubernetes.io~secret/kube-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/bb1000ab-4419-43ce-b1b7-8f43413e017f/volumes/kubernetes.io~secret/kube-state-metrics-tls major:0 minor:1058 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6/volumes/kubernetes.io~projected/kube-api-access-jnd9c:{mountpoint:/var/lib/kubelet/pods/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6/volumes/kubernetes.io~projected/kube-api-access-jnd9c major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6/volumes/kubernetes.io~secret/srv-cert major:0 minor:615 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/be4349fa-5c67-4135-80a7-b8a694553662/volumes/kubernetes.io~projected/kube-api-access-jbzj2:{mountpoint:/var/lib/kubelet/pods/be4349fa-5c67-4135-80a7-b8a694553662/volumes/kubernetes.io~projected/kube-api-access-jbzj2 major:0 minor:818 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/be4349fa-5c67-4135-80a7-b8a694553662/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/be4349fa-5c67-4135-80a7-b8a694553662/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:816 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/be4349fa-5c67-4135-80a7-b8a694553662/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/be4349fa-5c67-4135-80a7-b8a694553662/volumes/kubernetes.io~secret/webhook-cert major:0 minor:817 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/beb562de-402b-4d9f-b5ed-090b60847a95/volumes/kubernetes.io~projected/kube-api-access-9mr6d:{mountpoint:/var/lib/kubelet/pods/beb562de-402b-4d9f-b5ed-090b60847a95/volumes/kubernetes.io~projected/kube-api-access-9mr6d major:0 minor:229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/beb562de-402b-4d9f-b5ed-090b60847a95/volumes/kubernetes.io~secret/package-server-manager-serving-cert:{mountpoint:/var/lib/kubelet/pods/beb562de-402b-4d9f-b5ed-090b60847a95/volumes/kubernetes.io~secret/package-server-manager-serving-cert major:0 minor:616 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bf226d89-450d-4876-a113-345632b94ee9/volumes/kubernetes.io~projected/kube-api-access-wcxqj:{mountpoint:/var/lib/kubelet/pods/bf226d89-450d-4876-a113-345632b94ee9/volumes/kubernetes.io~projected/kube-api-access-wcxqj major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bf226d89-450d-4876-a113-345632b94ee9/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/bf226d89-450d-4876-a113-345632b94ee9/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c2dbd8b3-0e02-4747-a166-80aa6a94b060/volumes/kubernetes.io~projected/kube-api-access-npc2t:{mountpoint:/var/lib/kubelet/pods/c2dbd8b3-0e02-4747-a166-80aa6a94b060/volumes/kubernetes.io~projected/kube-api-access-npc2t major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c2dbd8b3-0e02-4747-a166-80aa6a94b060/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/c2dbd8b3-0e02-4747-a166-80aa6a94b060/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c52bbbe7-bc16-432f-a471-bc561083a853/volumes/kubernetes.io~projected/kube-api-access-4ztf7:{mountpoint:/var/lib/kubelet/pods/c52bbbe7-bc16-432f-a471-bc561083a853/volumes/kubernetes.io~projected/kube-api-access-4ztf7 major:0 minor:714 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/cf6b6560-1731-4fb1-b3c2-8257002842d6/volumes/kubernetes.io~projected/kube-api-access-64twc:{mountpoint:/var/lib/kubelet/pods/cf6b6560-1731-4fb1-b3c2-8257002842d6/volumes/kubernetes.io~projected/kube-api-access-64twc major:0 minor:864 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cf6b6560-1731-4fb1-b3c2-8257002842d6/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/cf6b6560-1731-4fb1-b3c2-8257002842d6/volumes/kubernetes.io~secret/cert major:0 minor:854 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d3017b5e-178e-49de-89d2-817a18398203/volumes/kubernetes.io~projected/kube-api-access-b6wm6:{mountpoint:/var/lib/kubelet/pods/d3017b5e-178e-49de-89d2-817a18398203/volumes/kubernetes.io~projected/kube-api-access-b6wm6 major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d3017b5e-178e-49de-89d2-817a18398203/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/d3017b5e-178e-49de-89d2-817a18398203/volumes/kubernetes.io~secret/serving-cert major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d3541cbe-3be0-40d3-89d2-b5937b6a8f47/volumes/kubernetes.io~projected/kube-api-access-pv6bc:{mountpoint:/var/lib/kubelet/pods/d3541cbe-3be0-40d3-89d2-b5937b6a8f47/volumes/kubernetes.io~projected/kube-api-access-pv6bc major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d3541cbe-3be0-40d3-89d2-b5937b6a8f47/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/d3541cbe-3be0-40d3-89d2-b5937b6a8f47/volumes/kubernetes.io~secret/proxy-tls major:0 minor:611 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d975e831-7348-41b9-9622-f4a503674c38/volumes/kubernetes.io~projected/kube-api-access-86r6z:{mountpoint:/var/lib/kubelet/pods/d975e831-7348-41b9-9622-f4a503674c38/volumes/kubernetes.io~projected/kube-api-access-86r6z major:0 minor:336 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f/volumes/kubernetes.io~projected/kube-api-access-h5n89:{mountpoint:/var/lib/kubelet/pods/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f/volumes/kubernetes.io~projected/kube-api-access-h5n89 major:0 minor:226 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f/volumes/kubernetes.io~secret/serving-cert major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/da9becfb-a504-4ef7-92ed-cd2db439d5db/volumes/kubernetes.io~projected/kube-api-access-lvzcn:{mountpoint:/var/lib/kubelet/pods/da9becfb-a504-4ef7-92ed-cd2db439d5db/volumes/kubernetes.io~projected/kube-api-access-lvzcn major:0 minor:832 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/da9becfb-a504-4ef7-92ed-cd2db439d5db/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/da9becfb-a504-4ef7-92ed-cd2db439d5db/volumes/kubernetes.io~secret/serving-cert major:0 minor:826 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/de39c80c-acfa-4bc1-a844-95b170169b44/volumes/kubernetes.io~projected/kube-api-access-6x2v6:{mountpoint:/var/lib/kubelet/pods/de39c80c-acfa-4bc1-a844-95b170169b44/volumes/kubernetes.io~projected/kube-api-access-6x2v6 major:0 minor:922 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/de39c80c-acfa-4bc1-a844-95b170169b44/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/de39c80c-acfa-4bc1-a844-95b170169b44/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config major:0 minor:921 fsType:tmpfs 
blockSize:0} /var/lib/kubelet/pods/de39c80c-acfa-4bc1-a844-95b170169b44/volumes/kubernetes.io~secret/openshift-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/de39c80c-acfa-4bc1-a844-95b170169b44/volumes/kubernetes.io~secret/openshift-state-metrics-tls major:0 minor:1074 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e559e487-18b0-4622-92fa-d06e7397b312/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/e559e487-18b0-4622-92fa-d06e7397b312/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:557 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e559e487-18b0-4622-92fa-d06e7397b312/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/e559e487-18b0-4622-92fa-d06e7397b312/volumes/kubernetes.io~empty-dir/tmp major:0 minor:559 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e559e487-18b0-4622-92fa-d06e7397b312/volumes/kubernetes.io~projected/kube-api-access-c4p7s:{mountpoint:/var/lib/kubelet/pods/e559e487-18b0-4622-92fa-d06e7397b312/volumes/kubernetes.io~projected/kube-api-access-c4p7s major:0 minor:560 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ee3529ac-6135-438b-9334-40c63c1fbd3d/volumes/kubernetes.io~projected/kube-api-access-c8hpg:{mountpoint:/var/lib/kubelet/pods/ee3529ac-6135-438b-9334-40c63c1fbd3d/volumes/kubernetes.io~projected/kube-api-access-c8hpg major:0 minor:862 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ee3529ac-6135-438b-9334-40c63c1fbd3d/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/ee3529ac-6135-438b-9334-40c63c1fbd3d/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:859 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f05dca6c-7626-4970-a869-4208ff5605a2/volumes/kubernetes.io~projected/kube-api-access-5fz85:{mountpoint:/var/lib/kubelet/pods/f05dca6c-7626-4970-a869-4208ff5605a2/volumes/kubernetes.io~projected/kube-api-access-5fz85 major:0 minor:709 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f08c5930-44f0-48e4-80dd-2563f2733b2f/volumes/kubernetes.io~projected/kube-api-access-h84l9:{mountpoint:/var/lib/kubelet/pods/f08c5930-44f0-48e4-80dd-2563f2733b2f/volumes/kubernetes.io~projected/kube-api-access-h84l9 major:0 minor:230 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f08c5930-44f0-48e4-80dd-2563f2733b2f/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f08c5930-44f0-48e4-80dd-2563f2733b2f/volumes/kubernetes.io~secret/serving-cert major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f236a5ab-b400-46fc-94ee-1fff476d6458/volumes/kubernetes.io~projected/kube-api-access-ps4k8:{mountpoint:/var/lib/kubelet/pods/f236a5ab-b400-46fc-94ee-1fff476d6458/volumes/kubernetes.io~projected/kube-api-access-ps4k8 major:0 minor:593 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f236a5ab-b400-46fc-94ee-1fff476d6458/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/f236a5ab-b400-46fc-94ee-1fff476d6458/volumes/kubernetes.io~secret/metrics-tls major:0 minor:592 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fd40498c-f50a-408c-9a50-5d85ae666124/volumes/kubernetes.io~projected/kube-api-access-2rmw5:{mountpoint:/var/lib/kubelet/pods/fd40498c-f50a-408c-9a50-5d85ae666124/volumes/kubernetes.io~projected/kube-api-access-2rmw5 major:0 minor:810 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fd40498c-f50a-408c-9a50-5d85ae666124/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/fd40498c-f50a-408c-9a50-5d85ae666124/volumes/kubernetes.io~secret/machine-approver-tls 
major:0 minor:799 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fe245927-c937-4ec7-ab83-4900bade72cf/volumes/kubernetes.io~projected/kube-api-access-s4hsp:{mountpoint:/var/lib/kubelet/pods/fe245927-c937-4ec7-ab83-4900bade72cf/volumes/kubernetes.io~projected/kube-api-access-s4hsp major:0 minor:103 fsType:tmpfs blockSize:0} overlay_0-1001:{mountpoint:/var/lib/containers/storage/overlay/95a059651f0bfd9a5cb5b1384424e2899f6f4f6bd0001eef56f01cfa148e4ab3/merged major:0 minor:1001 fsType:overlay blockSize:0} overlay_0-1003:{mountpoint:/var/lib/containers/storage/overlay/3eaf050a3da33bc06f10fce700c4eb86ee71e1befdfd317d3f46b3317dfb3b4c/merged major:0 minor:1003 fsType:overlay blockSize:0} overlay_0-1005:{mountpoint:/var/lib/containers/storage/overlay/2a04b702d23bfc0543f88cfacbe8ee882891c830bb47e409ed94cf3c1531f366/merged major:0 minor:1005 fsType:overlay blockSize:0} overlay_0-1007:{mountpoint:/var/lib/containers/storage/overlay/cffd67cacf6c0e58d8d2fb114a349fb8c1c7c78686f28d9716b3be3669b0a24f/merged major:0 minor:1007 fsType:overlay blockSize:0} overlay_0-1009:{mountpoint:/var/lib/containers/storage/overlay/5eb9b3bf534592700f062777da06a17a174566bfb90a51af8d8f3c6ee35d211a/merged major:0 minor:1009 fsType:overlay blockSize:0} overlay_0-1011:{mountpoint:/var/lib/containers/storage/overlay/bebf58af5526f0034eaea632c1aef92d7314ee6e3213fd2377ce78430c340006/merged major:0 minor:1011 fsType:overlay blockSize:0} overlay_0-1015:{mountpoint:/var/lib/containers/storage/overlay/4bd5ea66971d4d16d7186dac3f1dd69f59f357f9879f5f8d77f552859c388171/merged major:0 minor:1015 fsType:overlay blockSize:0} overlay_0-1018:{mountpoint:/var/lib/containers/storage/overlay/8524820c5d686bacd5850aa45f3eda6f18cd801d8e4d335abb4fdfd82e6eaaaf/merged major:0 minor:1018 fsType:overlay blockSize:0} overlay_0-102:{mountpoint:/var/lib/containers/storage/overlay/10d00d78f1ae0864fe4dc4496ac57e479d3c172d43c3d405ca73cc5bad25bc00/merged major:0 minor:102 fsType:overlay blockSize:0} overlay_0-1047:{mountpoint:/var/lib/containers/storage/overlay/ae998b654bd5b6ff59ee4c85196b127af0699b6ab79f8c44c9c09bdbe2155273/merged major:0 minor:1047 fsType:overlay blockSize:0} overlay_0-1051:{mountpoint:/var/lib/containers/storage/overlay/51e20f40e536316e3369f9b34164c66154c81f2def1e6427638eb7b2cadf2dca/merged major:0 minor:1051 fsType:overlay blockSize:0} overlay_0-1063:{mountpoint:/var/lib/containers/storage/overlay/56dc66b6dee93f37a9c414914bd549762bfc3740fa7a3c63607a66b4def30e92/merged major:0 minor:1063 fsType:overlay blockSize:0} overlay_0-1072:{mountpoint:/var/lib/containers/storage/overlay/2c8d0ced1593c8e3c08b4d5d63a0e306cb4e7ed80ea40806d8de6e6401c92557/merged major:0 minor:1072 fsType:overlay blockSize:0} overlay_0-1077:{mountpoint:/var/lib/containers/storage/overlay/95ae593254425533c1fe564b09ce7dfd132aec515467f55a87ba756d72def632/merged major:0 minor:1077 fsType:overlay blockSize:0} overlay_0-108:{mountpoint:/var/lib/containers/storage/overlay/ecf692be3b78290dcdf4c82e2eb5e2ed7c6e331ee23889990fe4ca7a85f983a0/merged major:0 minor:108 fsType:overlay blockSize:0} overlay_0-1082:{mountpoint:/var/lib/containers/storage/overlay/f82c6557a2ca13f9519bcdf6e55828694c2ceef93998db6a3f1c183ef46031dc/merged major:0 minor:1082 fsType:overlay blockSize:0} overlay_0-1084:{mountpoint:/var/lib/containers/storage/overlay/8e6fafe4921e8e9146abebb2109d4e10d76a696264df4b244d7e210ce2fcb1ef/merged major:0 minor:1084 fsType:overlay blockSize:0} overlay_0-1086:{mountpoint:/var/lib/containers/storage/overlay/0ca57fcf1153843086772c0d70a70c8e9d5ad21d289f5da54950c7ff0e0c4a5f/merged 
major:0 minor:1086 fsType:overlay blockSize:0} overlay_0-1092:{mountpoint:/var/lib/containers/storage/overlay/128789b605dfda8eba9a0cc767dac178bdc1ada799bd4bacacdfa134b21a934b/merged major:0 minor:1092 fsType:overlay blockSize:0} overlay_0-1097:{mountpoint:/var/lib/containers/storage/overlay/98aec57c86445cbe9f32c3a460a09542c5fe44c0dccb7e07f9fd677d37fa1eae/merged major:0 minor:1097 fsType:overlay blockSize:0} overlay_0-1099:{mountpoint:/var/lib/containers/storage/overlay/841158e3e347c8dd5cb5813c6b17e67bb4c15fd521e5cd12e4bd7dbe22416e29/merged major:0 minor:1099 fsType:overlay blockSize:0} overlay_0-1107:{mountpoint:/var/lib/containers/storage/overlay/8bab902ee39fce440eb21e48b533b9c18cb90baada393162f5630ab842b4e395/merged major:0 minor:1107 fsType:overlay blockSize:0} overlay_0-111:{mountpoint:/var/lib/containers/storage/overlay/e220523a2d53fd495bcbb7a62de408ad62cb4e62a31cd38c77272b0b8a1a140d/merged major:0 minor:111 fsType:overlay blockSize:0} overlay_0-1113:{mountpoint:/var/lib/containers/storage/overlay/b22e666232a70b00635b849fcd316d55354d8525ace02eda9f764dab93c40ff0/merged major:0 minor:1113 fsType:overlay blockSize:0} overlay_0-1122:{mountpoint:/var/lib/containers/storage/overlay/660fad5fd2eab9fb4782503e4e0e5da547692a343ffacf3615bc5b848ca50359/merged major:0 minor:1122 fsType:overlay blockSize:0} overlay_0-1129:{mountpoint:/var/lib/containers/storage/overlay/20e4a626d23a6e2a5f36cbce25e4ecac65c6ed7bbf99b0c31bd2f0bcacb356df/merged major:0 minor:1129 fsType:overlay blockSize:0} overlay_0-1131:{mountpoint:/var/lib/containers/storage/overlay/e42963c0736dd986006f228cf47cd43316fcb6ef70f332d634b2701527607ab0/merged major:0 minor:1131 fsType:overlay blockSize:0} overlay_0-1141:{mountpoint:/var/lib/containers/storage/overlay/1149496ecf58428a65d9ad38031ed686b8ae2d704410652001e0a899ce11b538/merged major:0 minor:1141 fsType:overlay blockSize:0} overlay_0-1153:{mountpoint:/var/lib/containers/storage/overlay/34951dcc7c7ce8868e76814823c9328da717d0baea3097aa5fd9c055f14c9998/merged major:0 minor:1153 fsType:overlay blockSize:0} overlay_0-1155:{mountpoint:/var/lib/containers/storage/overlay/b9e8a97809e9526825509160d54bbccdf8188e35827a3baabd8e2e2337676169/merged major:0 minor:1155 fsType:overlay blockSize:0} overlay_0-1162:{mountpoint:/var/lib/containers/storage/overlay/83f829e0a42704bb61a15cb5c9e41a61273bf9818927643d3bd473d7f994bb1d/merged major:0 minor:1162 fsType:overlay blockSize:0} overlay_0-1166:{mountpoint:/var/lib/containers/storage/overlay/337df36257edad68b0df16557745bfa69a1a5ebff38414124aa2dfcbbe323e4c/merged major:0 minor:1166 fsType:overlay blockSize:0} overlay_0-1168:{mountpoint:/var/lib/containers/storage/overlay/d166c59fbd3cae563ddd9b8d49d722c732214b6080026f98cf4d57c254a8c8ec/merged major:0 minor:1168 fsType:overlay blockSize:0} overlay_0-1170:{mountpoint:/var/lib/containers/storage/overlay/b1ba4d6b3f3cf5c01cdddbac4739ceedcfab1f89ff009d2f86a433b080d0d640/merged major:0 minor:1170 fsType:overlay blockSize:0} overlay_0-1176:{mountpoint:/var/lib/containers/storage/overlay/43b7e6c3df8d2ac8bb88e213a405f454642dd0e9577812adccfddd0d8ff6eeeb/merged major:0 minor:1176 fsType:overlay blockSize:0} overlay_0-118:{mountpoint:/var/lib/containers/storage/overlay/07acfa2a39d5bc122fb37dcb332cfeea303862f384d9ee3419daea97fe78bfbb/merged major:0 minor:118 fsType:overlay blockSize:0} overlay_0-1193:{mountpoint:/var/lib/containers/storage/overlay/ff64654a1ed0cd82affd082d1e6deb3b93938d66c22b77cf2c32a715274860e5/merged major:0 minor:1193 fsType:overlay blockSize:0} 
overlay_0-1195:{mountpoint:/var/lib/containers/storage/overlay/62eb76ccbd3393b211624fe3c361e27111647517f8b82481fad0bda6259441c9/merged major:0 minor:1195 fsType:overlay blockSize:0} overlay_0-1197:{mountpoint:/var/lib/containers/storage/overlay/69bc2fe28a9ef3faac475c875fa7500e0ce624040f8c7bb4a01a6f8feb742f53/merged major:0 minor:1197 fsType:overlay blockSize:0} overlay_0-1199:{mountpoint:/var/lib/containers/storage/overlay/e1e26ff25481458266a9cf3c483c87ad80269e87e16d574251ab6132d7bbb5ad/merged major:0 minor:1199 fsType:overlay blockSize:0} overlay_0-1204:{mountpoint:/var/lib/containers/storage/overlay/9d9f4d02b6745914921d1d8d6f1fc6d67dd7b6b37da1d16fe7268f3df4c7eaf9/merged major:0 minor:1204 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/c9d27d37f7250150bb839a44efdef83fd3fe90ecf8b77edae7070b1c4c09b61c/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-1212:{mountpoint:/var/lib/containers/storage/overlay/e47c02d1e8a329a4dbeba4c765645bf5865d65542b00b35fd072e48516515dd2/merged major:0 minor:1212 fsType:overlay blockSize:0} overlay_0-126:{mountpoint:/var/lib/containers/storage/overlay/892bc13c78e401901d8ef365c496647bded5b98498dcbbd68b084ac315b52874/merged major:0 minor:126 fsType:overlay blockSize:0} overlay_0-132:{mountpoint:/var/lib/containers/storage/overlay/1a0443ac3276617f024da16855b77b2d50a065b295368280197bc91740653702/merged major:0 minor:132 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/30111bbfd50c9149bc58099f2617fed8c2cbebd6170279ec78105f34c281f5da/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-140:{mountpoint:/var/lib/containers/storage/overlay/18402c8b507950c24090f28fae386548feb6559d374343c512e10be10a6a1fc3/merged major:0 minor:140 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/dd6946b2ec2fbda7561c7fba2f9e0ce23c4fa24b048da3b3bb0615eec482a321/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/3189aa9e324372384c4a981110ed6d876bd75bd5f8e5879b4aee5ba2adcfc60a/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/16e470c26a576e79333622a0a04cff0fe4e3237592b1b84794db47ef3b34c213/merged major:0 minor:156 fsType:overlay blockSize:0} overlay_0-159:{mountpoint:/var/lib/containers/storage/overlay/c11a01ba879dea4153e06a1a909867325e2c1a78cef9e3cf49ff2a5aaa94d6ed/merged major:0 minor:159 fsType:overlay blockSize:0} overlay_0-161:{mountpoint:/var/lib/containers/storage/overlay/c126e75cc76d7741fa1c10d4164db1fb89575b0051e443d0ba6a21994cc61ea1/merged major:0 minor:161 fsType:overlay blockSize:0} overlay_0-167:{mountpoint:/var/lib/containers/storage/overlay/edc082dfa2d788394fabf6bf6b43dd6e4c61ea81bf2c7f104b917488d65de141/merged major:0 minor:167 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/07590a31a39b68d54b622df0025917b371f9f8791471e669656122714f90bf01/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/cabc7d191441052a51f0a03367865ea04dd8d019563e22eb22ad697b824549b1/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/39c3ff17de2c5500f0e3d32a115f2f13c97fc8e7d4be67721009d8ecf2df78ea/merged major:0 minor:184 fsType:overlay blockSize:0} 
overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/4115b27a698a7323a28c415de21d18f4d81d86eb323bcc4225550c78d95319d4/merged major:0 minor:189 fsType:overlay blockSize:0} overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/d027220cf8dd405aa9efa9fd32991ae77175a41244240dc10b49fc6790f3e225/merged major:0 minor:194 fsType:overlay blockSize:0} overlay_0-199:{mountpoint:/var/lib/containers/storage/overlay/6e9e87da54ea6e2b54f28a020cb3bf87ea2dbfc11c7c966d6483631db751f52e/merged major:0 minor:199 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/792ab5476b2326d07eed54e40f3482cddbac14ff0e5c8f8aa418707be175a286/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-279:{mountpoint:/var/lib/containers/storage/overlay/ba2af495bce4842a7131a6af8de4564f2f191e25cadab2ba84b3d494ea83d702/merged major:0 minor:279 fsType:overlay blockSize:0} overlay_0-281:{mountpoint:/var/lib/containers/storage/overlay/507de9fa5e57dd4db66c193bca13709ce214418fe56eba86f35217b6174489b1/merged major:0 minor:281 fsType:overlay blockSize:0} overlay_0-283:{mountpoint:/var/lib/containers/storage/overlay/76c433a023f5caf6b6825a4198b8a972b04260c82c10e943e3ad850d540a6768/merged major:0 minor:283 fsType:overlay blockSize:0} overlay_0-285:{mountpoint:/var/lib/containers/storage/overlay/8af32abfcee24b35421c2da474a6ce0f227b98d05347260f5506ec966f4252d1/merged major:0 minor:285 fsType:overlay blockSize:0} overlay_0-288:{mountpoint:/var/lib/containers/storage/overlay/6e3bfe7bc51c866bc6402550199d82db681e19b7bc85d552da6e35d3ab3060a5/merged major:0 minor:288 fsType:overlay blockSize:0} overlay_0-291:{mountpoint:/var/lib/containers/storage/overlay/a5feaaca37e3604d80fdef2bc2eb10e33c23809313321f5da391124020e520c5/merged major:0 minor:291 fsType:overlay blockSize:0} overlay_0-293:{mountpoint:/var/lib/containers/storage/overlay/d579692b80bea7cabc02437fdcb0f5b4990eb5f4892adf3c2562b6882f8815c2/merged major:0 minor:293 fsType:overlay blockSize:0} overlay_0-295:{mountpoint:/var/lib/containers/storage/overlay/244f0ddd816c23163b6832046b4023b0b351c0a88a0a6a77d7efe809b4abc347/merged major:0 minor:295 fsType:overlay blockSize:0} overlay_0-297:{mountpoint:/var/lib/containers/storage/overlay/2cd4b4c692480765f89f325e56df5214c4f04e433dd202c8c9eb0b9f63623ad3/merged major:0 minor:297 fsType:overlay blockSize:0} overlay_0-299:{mountpoint:/var/lib/containers/storage/overlay/f162bee02582aac1385f70865b0cc62e5c454e55953c11cb6255342f5601bc77/merged major:0 minor:299 fsType:overlay blockSize:0} overlay_0-301:{mountpoint:/var/lib/containers/storage/overlay/feb4d03c8d2600bf10ded9cefe7585cdb8dab377aa8d31248b8d46d8081f66c5/merged major:0 minor:301 fsType:overlay blockSize:0} overlay_0-303:{mountpoint:/var/lib/containers/storage/overlay/1b0a243e6fd60ad62daaba3bdbc5798534f082c959bad5db7d2d12a93f03ea57/merged major:0 minor:303 fsType:overlay blockSize:0} overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/9a5450203615ac3984920733127f8fe769ef97b9de1d708f94db96e802c27e46/merged major:0 minor:305 fsType:overlay blockSize:0} overlay_0-307:{mountpoint:/var/lib/containers/storage/overlay/00d2c3dbb87af8b1a65d795cfa4cc2b317f2fa2f3b3ce65ebb35364e2df8fb83/merged major:0 minor:307 fsType:overlay blockSize:0} overlay_0-309:{mountpoint:/var/lib/containers/storage/overlay/edf11c44c47fef2e84ea2bdee603223b55f3653ab26fa0e9593f233c29cc1d68/merged major:0 minor:309 fsType:overlay blockSize:0} overlay_0-311:{mountpoint:/var/lib/containers/storage/overlay/ad4a479deab264fc9426aae1fe6fadd2c7e69d0600713bdec7a6594012a91d89/merged 
major:0 minor:311 fsType:overlay blockSize:0} overlay_0-313:{mountpoint:/var/lib/containers/storage/overlay/ac99af6d4f3761e3a650909c7a092c9e1579a952cfd72a3a75ac094c75277f76/merged major:0 minor:313 fsType:overlay blockSize:0} overlay_0-317:{mountpoint:/var/lib/containers/storage/overlay/a98e359fa10bc9e6ad40930b976a13b6c126fec6d912004428056bb377a835e4/merged major:0 minor:317 fsType:overlay blockSize:0} overlay_0-321:{mountpoint:/var/lib/containers/storage/overlay/32d1dacb38e411498c23e5160ba1078c46b3e02f45ab63256b5632895e217a61/merged major:0 minor:321 fsType:overlay blockSize:0} overlay_0-327:{mountpoint:/var/lib/containers/storage/overlay/5f66b1d84d33bfc3bd2d85f75fd60925148d51a3f1956e813ed1e27399fd27c7/merged major:0 minor:327 fsType:overlay blockSize:0} overlay_0-340:{mountpoint:/var/lib/containers/storage/overlay/f2fd7b5b6d39d1d644a6038b7ee5542748dd198c94e866014699336efd324758/merged major:0 minor:340 fsType:overlay blockSize:0} overlay_0-353:{mountpoint:/var/lib/containers/storage/overlay/1c330fb4dd859467828960001b689f46ac2b2bcecef1fcf425d8a8ef078df753/merged major:0 minor:353 fsType:overlay blockSize:0} overlay_0-355:{mountpoint:/var/lib/containers/storage/overlay/934794ea7cb38f035c184d9894ed9bd1628c255b7e35e5fa34a52c2bce017091/merged major:0 minor:355 fsType:overlay blockSize:0} overlay_0-356:{mountpoint:/var/lib/containers/storage/overlay/9e48c8da8132d5720a0656b8e9b07987c878d1214c075fc9838ea8540ac8c1f1/merged major:0 minor:356 fsType:overlay blockSize:0} overlay_0-359:{mountpoint:/var/lib/containers/storage/overlay/a7f84e061e5ba93fe2a39674ffe2d534bbc7afeefba8d960276a63180eec37d9/merged major:0 minor:359 fsType:overlay blockSize:0} overlay_0-370:{mountpoint:/var/lib/containers/storage/overlay/905bcea6feff73e9c66bff60349ec853187e5502812e31945fadd38d62966847/merged major:0 minor:370 fsType:overlay blockSize:0} overlay_0-375:{mountpoint:/var/lib/containers/storage/overlay/f738ac383a549b74c8aab391f7588bde264d8530a2bf1a31e1ccb8ea934e16ea/merged major:0 minor:375 fsType:overlay blockSize:0} overlay_0-381:{mountpoint:/var/lib/containers/storage/overlay/86e3668304c584789825b20870ad8c4c1de41be47eda009ac7459d2b6d221bdb/merged major:0 minor:381 fsType:overlay blockSize:0} overlay_0-385:{mountpoint:/var/lib/containers/storage/overlay/2781432990f596fa6ae395e2fcbae3025b9954226262d36b0c33fa2c06c81705/merged major:0 minor:385 fsType:overlay blockSize:0} overlay_0-387:{mountpoint:/var/lib/containers/storage/overlay/d6738cea276530dbfb4d56ad29bf853917810d70ad869e0c27325eed34028cba/merged major:0 minor:387 fsType:overlay blockSize:0} overlay_0-389:{mountpoint:/var/lib/containers/storage/overlay/0466e1e70924283a1575face446682b9c4fcd20c765d12c43d0cb5e354817b58/merged major:0 minor:389 fsType:overlay blockSize:0} overlay_0-403:{mountpoint:/var/lib/containers/storage/overlay/40a213fe5f6487410955b7ddc8bbda48e009e0cbd43e5746f20f05855d0e3ac8/merged major:0 minor:403 fsType:overlay blockSize:0} overlay_0-404:{mountpoint:/var/lib/containers/storage/overlay/52a3ed764da2100139cd7eb2da68502a727dd00de3ba7881610224bfae9a117a/merged major:0 minor:404 fsType:overlay blockSize:0} overlay_0-406:{mountpoint:/var/lib/containers/storage/overlay/d7d255aeffd995acb0faf1fe1bbd562f5fe07356542f8a400e979b8d5d77e731/merged major:0 minor:406 fsType:overlay blockSize:0} overlay_0-409:{mountpoint:/var/lib/containers/storage/overlay/2c11987718af5159b786543713468d29a4e10fe24411ea9219adde63423980a7/merged major:0 minor:409 fsType:overlay blockSize:0} 
overlay_0-417:{mountpoint:/var/lib/containers/storage/overlay/d016bfd4b7f9622de45f0088a9d294f3f56c69724aa8f4468db58d5054812b03/merged major:0 minor:417 fsType:overlay blockSize:0} overlay_0-424:{mountpoint:/var/lib/containers/storage/overlay/6a342cc307b3762597a791ba575211fc06be437a4814944c9f647bca1ca9667a/merged major:0 minor:424 fsType:overlay blockSize:0} overlay_0-425:{mountpoint:/var/lib/containers/storage/overlay/f497ff0aeb96956e5cb764b73685532a90c3261256a78870d17ef5f7c0bab644/merged major:0 minor:425 fsType:overlay blockSize:0} overlay_0-428:{mountpoint:/var/lib/containers/storage/overlay/73d65639a3c820dd79939653cc3b86c61046a88d27c6e25f49336c65e34fe4fc/merged major:0 minor:428 fsType:overlay blockSize:0} overlay_0-43:{mountpoint:/var/lib/containers/storage/overlay/9a1402447cca1c8f29baf7bc9c228342345f18b2670037d4671e68f154462f35/merged major:0 minor:43 fsType:overlay blockSize:0} overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/d1c4b6154d30e92be89d73b5e56a0c2bbb0c62781267b5666720784916185e95/merged major:0 minor:44 fsType:overlay blockSize:0} overlay_0-446:{mountpoint:/var/lib/containers/storage/overlay/ad17cb5eb140afbae8e68c8a247ba51669406e19a09d53d0efa794886f663b2c/merged major:0 minor:446 fsType:overlay blockSize:0} overlay_0-463:{mountpoint:/var/lib/containers/storage/overlay/bd6aef9bcab5e5338d7c78966a00d6a04c05b7a7c1ed66f5a19db8d3093e0a56/merged major:0 minor:463 fsType:overlay blockSize:0} overlay_0-466:{mountpoint:/var/lib/containers/storage/overlay/b8df045ff0a793fa3c7581aa04ee517ae2a233316fd7192b2ba3a36f04f8bf96/merged major:0 minor:466 fsType:overlay blockSize:0} overlay_0-468:{mountpoint:/var/lib/containers/storage/overlay/d9f575bdef85397aaaa37881943deb61ba37bffdf570457233c68db8c01a2d10/merged major:0 minor:468 fsType:overlay blockSize:0} overlay_0-470:{mountpoint:/var/lib/containers/storage/overlay/71925034d1978e2fa699805ac5d87b96708ac54caabbd0b2da99cb8c71e94d2e/merged major:0 minor:470 fsType:overlay blockSize:0} overlay_0-477:{mountpoint:/var/lib/containers/storage/overlay/6d8ee5bbb09482a4ff057cc8d4a25794dc8a53319b89d0d22efd949e07a6ef7a/merged major:0 minor:477 fsType:overlay blockSize:0} overlay_0-479:{mountpoint:/var/lib/containers/storage/overlay/2ce818cab39643edbe10a10e171410fbbbd1dd2ea899b12c6ed859e8bcc83577/merged major:0 minor:479 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/c733271b9faa9e0fd3c2854f565a9c1f864dd03d42758d5c25995acb22670553/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-50:{mountpoint:/var/lib/containers/storage/overlay/0056e5fc826501a5cc574e8144dfe98d5f907688d2c7c00df1a1e18f2bf34dd3/merged major:0 minor:50 fsType:overlay blockSize:0} overlay_0-500:{mountpoint:/var/lib/containers/storage/overlay/3840752b637b517061466ba78a33ef2f0bfb342cd43df8dae1e0097a3b5b8735/merged major:0 minor:500 fsType:overlay blockSize:0} overlay_0-502:{mountpoint:/var/lib/containers/storage/overlay/457ede0db6951c2827bb902c4771f5cc2b2cc1df577655572efa035290e12d11/merged major:0 minor:502 fsType:overlay blockSize:0} overlay_0-504:{mountpoint:/var/lib/containers/storage/overlay/dfbb4e4e17b0e6ff2bf548e05101d76fe722efe62d5cbdd750b7158b1b398016/merged major:0 minor:504 fsType:overlay blockSize:0} overlay_0-506:{mountpoint:/var/lib/containers/storage/overlay/c0836d609b0da575223b4ecc410792dbfe7672d9db4847cf7f2c6a942b69af16/merged major:0 minor:506 fsType:overlay blockSize:0} overlay_0-510:{mountpoint:/var/lib/containers/storage/overlay/ac4435a72fb8916ba52b1e9ea9d112714587cce8a50be1a5a53ad4fb5c73c940/merged major:0 
minor:510 fsType:overlay blockSize:0} overlay_0-512:{mountpoint:/var/lib/containers/storage/overlay/b15b99ed2aac4e1f8611832aa037b664636ea83435d2b84e6953b21af2b788b6/merged major:0 minor:512 fsType:overlay blockSize:0} overlay_0-519:{mountpoint:/var/lib/containers/storage/overlay/5c9db92c8ac730b7bd81711145a16f0e7aa47c74d4bc10b740c4c2aa325ab2dc/merged major:0 minor:519 fsType:overlay blockSize:0} overlay_0-530:{mountpoint:/var/lib/containers/storage/overlay/a05a89dc0760369889fec29826ef8895c9a70f1c2fb3e06d8a4b16dce70d69a5/merged major:0 minor:530 fsType:overlay blockSize:0} overlay_0-531:{mountpoint:/var/lib/containers/storage/overlay/3a4651568af63218b036c3b74aa1015476aa7cb496c798f9764b8251f394698c/merged major:0 minor:531 fsType:overlay blockSize:0} overlay_0-533:{mountpoint:/var/lib/containers/storage/overlay/cc48ba4919d6e5d12815e19e6d67ce33ffe867410bdd3de14e7ddec598698898/merged major:0 minor:533 fsType:overlay blockSize:0} overlay_0-537:{mountpoint:/var/lib/containers/storage/overlay/d6aa7f8ee628b53a42986ea424e23d67a15ef8fd6c2682ea6a1f8c39c0bcdc22/merged major:0 minor:537 fsType:overlay blockSize:0} overlay_0-539:{mountpoint:/var/lib/containers/storage/overlay/84e66f753ac9b026227e4b8de098b63805b38d9aaa87d29730cf361ede20c14f/merged major:0 minor:539 fsType:overlay blockSize:0} overlay_0-54:{mountpoint:/var/lib/containers/storage/overlay/da9ef2ad35740a4dd51177d804d88635fe6e3a1b4a68dd1ef74696229e4ad15b/merged major:0 minor:54 fsType:overlay blockSize:0} overlay_0-558:{mountpoint:/var/lib/containers/storage/overlay/02e7daf2f863eafd90c666064451e46cd06e5a0c50708bacd59758e94fd6dfd3/merged major:0 minor:558 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/2f934ae096a8a46a93144cacea0583c3fa7b62058f3eee62c4a2a340fee31f40/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-563:{mountpoint:/var/lib/containers/storage/overlay/beb2944dd1e7b102dbeb05787da3736d4e6e4145a7b0eb234f126aece95210e8/merged major:0 minor:563 fsType:overlay blockSize:0} overlay_0-565:{mountpoint:/var/lib/containers/storage/overlay/ac675dfae9546cd538a3779612040832e47360180cc11c92b2a0e2359f134644/merged major:0 minor:565 fsType:overlay blockSize:0} overlay_0-568:{mountpoint:/var/lib/containers/storage/overlay/b8085caa7f9afb77ead63caef6dbe1a9b8e8ce48d702f5596ce6d0eff3efb9a7/merged major:0 minor:568 fsType:overlay blockSize:0} overlay_0-573:{mountpoint:/var/lib/containers/storage/overlay/ecfd6c848ef08ea1afe32067267b9a894dced12c6c2f3c53d9a31a25a1566073/merged major:0 minor:573 fsType:overlay blockSize:0} overlay_0-575:{mountpoint:/var/lib/containers/storage/overlay/b97903300b1a7c62d874cb59f4d91b97993a3bbd7e96bc18787b8adc1b6c5898/merged major:0 minor:575 fsType:overlay blockSize:0} overlay_0-576:{mountpoint:/var/lib/containers/storage/overlay/5e9f034912578607980729c3390f37be145c6ffdc15419a18e519ae47d75bbda/merged major:0 minor:576 fsType:overlay blockSize:0} overlay_0-589:{mountpoint:/var/lib/containers/storage/overlay/970476c5c08a25a8522dd5174d3014d6641e6f3e5a35fa2165d222e0be0b7c60/merged major:0 minor:589 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/839ad18f8688f749da4183d2bd10be445080e4eda4cd7343cba440599a8e7564/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-602:{mountpoint:/var/lib/containers/storage/overlay/b21448f3e7a6b38d1cb90931c93afd48c960c62695f26d9ac0cd6bba79215c63/merged major:0 minor:602 fsType:overlay blockSize:0} 
overlay_0-606:{mountpoint:/var/lib/containers/storage/overlay/42d4782add4ed6ca75fc319b9cacd5a7eb875982c734cd1184c365ccf033be82/merged major:0 minor:606 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/e878506489a867e4df9a8543d4c5cac4d865bbfa18f5479ed296fbc28ecea735/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-623:{mountpoint:/var/lib/containers/storage/overlay/f421ab7170aded81679e9585852093539ba66d24372c8c76ee40803f19b04b59/merged major:0 minor:623 fsType:overlay blockSize:0} overlay_0-625:{mountpoint:/var/lib/containers/storage/overlay/0a278eaa1db2b5fb88dc2d647dc2e871c22359d45f46bd6de28234e0ae94d4f3/merged major:0 minor:625 fsType:overlay blockSize:0} overlay_0-630:{mountpoint:/var/lib/containers/storage/overlay/49b463f10691524a78c0b8aa88937343a75bbad5daffa616e66dcad6112bacdd/merged major:0 minor:630 fsType:overlay blockSize:0} overlay_0-636:{mountpoint:/var/lib/containers/storage/overlay/b003cd5c30df819908302e86ce85b4425f9e8a0c8531580cb028f0e1ff55421e/merged major:0 minor:636 fsType:overlay blockSize:0} overlay_0-652:{mountpoint:/var/lib/containers/storage/overlay/18b7f0db2fd789226273d6435f7cd97fca6215d46c2652ea2e3b156d365ef1c0/merged major:0 minor:652 fsType:overlay blockSize:0} overlay_0-654:{mountpoint:/var/lib/containers/storage/overlay/36b02b08fa51cfec14bc02698ec816624ea85e4790268879f6902f0e04d9134b/merged major:0 minor:654 fsType:overlay blockSize:0} overlay_0-658:{mountpoint:/var/lib/containers/storage/overlay/4eac385f4540879fc16a4af9f611632cbf093d29ea69784e66ed552416fe0d24/merged major:0 minor:658 fsType:overlay blockSize:0} overlay_0-660:{mountpoint:/var/lib/containers/storage/overlay/36332e6b56d1bb8264a75b06564a9f03834013614f2fd7ec6162cf38be321c01/merged major:0 minor:660 fsType:overlay blockSize:0} overlay_0-662:{mountpoint:/var/lib/containers/storage/overlay/9dfcc0cd52237166f74e85078deeebc942535270a595780b6cee9cc7a6e2782f/merged major:0 minor:662 fsType:overlay blockSize:0} overlay_0-664:{mountpoint:/var/lib/containers/storage/overlay/1cb067205ddf2f1b13e42bf04235a90370fcd6f87a22dc2f22ff4a104d456bbb/merged major:0 minor:664 fsType:overlay blockSize:0} overlay_0-666:{mountpoint:/var/lib/containers/storage/overlay/75c501ad020ca7efc86a5a3f9338214dc69f0ace70eef6a4cf5d299251dcf397/merged major:0 minor:666 fsType:overlay blockSize:0} overlay_0-668:{mountpoint:/var/lib/containers/storage/overlay/7d71a16747aa90877593027f8b8188d49ecfafda2e8b6ca60753fbd56c033e06/merged major:0 minor:668 fsType:overlay blockSize:0} overlay_0-670:{mountpoint:/var/lib/containers/storage/overlay/6675ff23ad9c6dfcf894a6a598fc6168f089d1dcbc9258fd54845c5f0acbb781/merged major:0 minor:670 fsType:overlay blockSize:0} overlay_0-672:{mountpoint:/var/lib/containers/storage/overlay/7b083d419cef7ca84177af0eb19ddf59b19d090b37b7299ee6d8f3d5fb1c1f6f/merged major:0 minor:672 fsType:overlay blockSize:0} overlay_0-674:{mountpoint:/var/lib/containers/storage/overlay/9580dbdc644bd99e25d3607dd83f2f7b46f709626095cb397ab3e4463d21ba1c/merged major:0 minor:674 fsType:overlay blockSize:0} overlay_0-69:{mountpoint:/var/lib/containers/storage/overlay/2504d3086472a5a4ce03d98e37d14880f4d863cbcdb1bc433ac149985ed35c45/merged major:0 minor:69 fsType:overlay blockSize:0} overlay_0-692:{mountpoint:/var/lib/containers/storage/overlay/e181899ef9d76a3582af74730b51c559cb87318ed1cd3b9d4ddfcc7d9cecbb74/merged major:0 minor:692 fsType:overlay blockSize:0} overlay_0-697:{mountpoint:/var/lib/containers/storage/overlay/080fd798119f03474ef930209e78c9ba3bd86772b2142ad2de29ebd4a3b67253/merged 
major:0 minor:697 fsType:overlay blockSize:0} overlay_0-70:{mountpoint:/var/lib/containers/storage/overlay/dbf1bf34594834cfe0cadef93db60cd663c5d64a38d6e5aa50ca02d659397596/merged major:0 minor:70 fsType:overlay blockSize:0} overlay_0-704:{mountpoint:/var/lib/containers/storage/overlay/3d0815f5af84337262d31c92e00c9be6d225d332716371f987ffedf5f442be62/merged major:0 minor:704 fsType:overlay blockSize:0} overlay_0-71:{mountpoint:/var/lib/containers/storage/overlay/08f3d81ce0cf1e09d18a485afd6c137422e1e542477faf927d7cbfe149225772/merged major:0 minor:71 fsType:overlay blockSize:0} overlay_0-711:{mountpoint:/var/lib/containers/storage/overlay/e7031d3fa95ebe6bab9e7723ebd695229fe65575a2c7417437a2e6da058c3b9f/merged major:0 minor:711 fsType:overlay blockSize:0} overlay_0-713:{mountpoint:/var/lib/containers/storage/overlay/977d89195f0cee2cc49f919568d88aa54b86758c7463594f1601cd4f6211f51f/merged major:0 minor:713 fsType:overlay blockSize:0} overlay_0-718:{mountpoint:/var/lib/containers/storage/overlay/9b50afa114da14be938ff9dd50141db4ac85475a339bc67b7c351385b06db211/merged major:0 minor:718 fsType:overlay blockSize:0} overlay_0-727:{mountpoint:/var/lib/containers/storage/overlay/562b907b5a63dc4c93261186442cf92530a3347f30a6293b967c54adc018bf35/merged major:0 minor:727 fsType:overlay blockSize:0} overlay_0-729:{mountpoint:/var/lib/containers/storage/overlay/400c9ddadc22eae09ffe7d0ca432cee33b0f98f5a4c84dbe97b70b1739efa675/merged major:0 minor:729 fsType:overlay blockSize:0} overlay_0-733:{mountpoint:/var/lib/containers/storage/overlay/498f21d06fd1b95ca54045de7d8c4eebc9d6f5afabadf4a7e186b5dd39d7ee8e/merged major:0 minor:733 fsType:overlay blockSize:0} overlay_0-738:{mountpoint:/var/lib/containers/storage/overlay/60755fa14836fb37f4798761f60e56659e19a11eddda66192cbfc8c17a050ab6/merged major:0 minor:738 fsType:overlay blockSize:0} overlay_0-742:{mountpoint:/var/lib/containers/storage/overlay/7b3cf2ce6afaf496997610e5322b86c6ca03df90c0da87d7d4b9d378f9deacc1/merged major:0 minor:742 fsType:overlay blockSize:0} overlay_0-744:{mountpoint:/var/lib/containers/storage/overlay/82358331ab9e8a01d9aa00ac77ea3836bed9d3e51ab57c906d4105ef993041af/merged major:0 minor:744 fsType:overlay blockSize:0} overlay_0-76:{mountpoint:/var/lib/containers/storage/overlay/f4572da2accecd032a123745f75ee701bb6892beda3afa4b5c9935cc296105c2/merged major:0 minor:76 fsType:overlay blockSize:0} overlay_0-765:{mountpoint:/var/lib/containers/storage/overlay/8845d8e5794d1f872aa12dc15c400fff4c630103b9966bd555d5ead61dd297b4/merged major:0 minor:765 fsType:overlay blockSize:0} overlay_0-77:{mountpoint:/var/lib/containers/storage/overlay/1a4f00753f65ce54c63756117ad243eea4981f2dbf34f5b4365fb525899c149a/merged major:0 minor:77 fsType:overlay blockSize:0} overlay_0-78:{mountpoint:/var/lib/containers/storage/overlay/b5f3d87f7b381cc1d6cb2fe4bb5d23527521deb0cdc84bec3d58be5dae50384a/merged major:0 minor:78 fsType:overlay blockSize:0} overlay_0-783:{mountpoint:/var/lib/containers/storage/overlay/15cf27c8bd9635a792979cfc2349336e12a786c1763c1232c7f84ba0f45adf53/merged major:0 minor:783 fsType:overlay blockSize:0} overlay_0-788:{mountpoint:/var/lib/containers/storage/overlay/c32cf5c079bd5fccf95f3980e9218a0686492e96f4a456db76cc91592075a2ee/merged major:0 minor:788 fsType:overlay blockSize:0} overlay_0-791:{mountpoint:/var/lib/containers/storage/overlay/98169f0067ab4d5b6560c8fed090c296c91c14caabe5709c2963df06e17ba1fe/merged major:0 minor:791 fsType:overlay blockSize:0} 
overlay_0-800:{mountpoint:/var/lib/containers/storage/overlay/ea20438f78efdabcd41ac4be3d1d091747bb372a537b832f1db44c698f1745db/merged major:0 minor:800 fsType:overlay blockSize:0} overlay_0-807:{mountpoint:/var/lib/containers/storage/overlay/f3e2b9c9ea841978378160679d6279b321878b73ad37a85379eac1f9d3228448/merged major:0 minor:807 fsType:overlay blockSize:0} overlay_0-823:{mountpoint:/var/lib/containers/storage/overlay/19c200fe56594f0c0cdd34b8c4446fa5fb697abd6db48be9e2b49167a2115a8a/merged major:0 minor:823 fsType:overlay blockSize:0} overlay_0-831:{mountpoint:/var/lib/containers/storage/overlay/7fd664c6927eb4db06d389afcd178962bf34054c75d578806bb20c6a8ff05b34/merged major:0 minor:831 fsType:overlay blockSize:0} overlay_0-842:{mountpoint:/var/lib/containers/storage/overlay/ac5faa8349629f0e34f47465b39b9964687c42f55fa6abc0cc780f8bdf016718/merged major:0 minor:842 fsType:overlay blockSize:0} overlay_0-844:{mountpoint:/var/lib/containers/storage/overlay/f0c3ddcf98d8d941df278b950957a595eef0220f96fa659ae93944ebd5bcb1b4/merged major:0 minor:844 fsType:overlay blockSize:0} overlay_0-846:{mountpoint:/var/lib/containers/storage/overlay/221d01dea5ad2c123b19f4322036b7452d572929ef4a7a7ba5b073858bb96e10/merged major:0 minor:846 fsType:overlay blockSize:0} overlay_0-865:{mountpoint:/var/lib/containers/storage/overlay/053c10253ea9a3627a167e3bf23a1a508bc76cdb8b311db6ac14fa901b3ff1ca/merged major:0 minor:865 fsType:overlay blockSize:0} overlay_0-869:{mountpoint:/var/lib/containers/storage/overlay/8673191ef677d888a16f66cc310643bdcf4ca90eff4e43323f064aeb3bf744a0/merged major:0 minor:869 fsType:overlay blockSize:0} overlay_0-871:{mountpoint:/var/lib/containers/storage/overlay/363527e29ac5e62cda5f48fd8af88a8843065ccd2f4a5b58eea26f585563e476/merged major:0 minor:871 fsType:overlay blockSize:0} overlay_0-877:{mountpoint:/var/lib/containers/storage/overlay/22feb6260f8a2b41f03e6158d3513d34c9fb6b7c3395f5c77083685f1614ba7f/merged major:0 minor:877 fsType:overlay blockSize:0} overlay_0-883:{mountpoint:/var/lib/containers/storage/overlay/d73ad1d86ed7042d2e6b8012a6fee401545e6a29ec50bad5c32932f97ebdfaf8/merged major:0 minor:883 fsType:overlay blockSize:0} overlay_0-892:{mountpoint:/var/lib/containers/storage/overlay/4177d95afa4248e8c2578eed1198a9efe6ed164abd6d68cbf4ae3d12bd7d8eae/merged major:0 minor:892 fsType:overlay blockSize:0} overlay_0-899:{mountpoint:/var/lib/containers/storage/overlay/81557f06ac704b40196efbcffa75bd9e62fbd83b9748b3f2e0f20fa51c0a2ac5/merged major:0 minor:899 fsType:overlay blockSize:0} overlay_0-902:{mountpoint:/var/lib/containers/storage/overlay/3f2ba709a275be0774b3ada02428d56f5a8f8bf799eea72063a48d8e54fe99d8/merged major:0 minor:902 fsType:overlay blockSize:0} overlay_0-904:{mountpoint:/var/lib/containers/storage/overlay/f3ef7c7f238f1218aebb03809ed11fe7dff5679bb4e493dc9fa89a7f35ce47f2/merged major:0 minor:904 fsType:overlay blockSize:0} overlay_0-906:{mountpoint:/var/lib/containers/storage/overlay/bace4b757c692902ddf4c72258c967e90977abea5a5c55e93d10f417bfb72c20/merged major:0 minor:906 fsType:overlay blockSize:0} overlay_0-917:{mountpoint:/var/lib/containers/storage/overlay/ffd0f3c19dfaaa23dbdde324cd534e835db5a27c51864a5a4cb01817532f04af/merged major:0 minor:917 fsType:overlay blockSize:0} overlay_0-919:{mountpoint:/var/lib/containers/storage/overlay/07a6dfeb2b8c430023dba8e5b5552e5d175638c2069f01f432cbd5dbae670336/merged major:0 minor:919 fsType:overlay blockSize:0} 
overlay_0-924:{mountpoint:/var/lib/containers/storage/overlay/795739f661ff0eb3e46cab33ee4daf7a7f4fd789c98254e6fccfa9dd3a16ea64/merged major:0 minor:924 fsType:overlay blockSize:0} overlay_0-931:{mountpoint:/var/lib/containers/storage/overlay/f38a62a91490aa1139954d92ae93d58acc31809bb02070792ac9ac7b1344eaec/merged major:0 minor:931 fsType:overlay blockSize:0} overlay_0-940:{mountpoint:/var/lib/containers/storage/overlay/11e472a38d41790e7ec8a9169282da21b530ed1f326f1ed6437f92908167906a/merged major:0 minor:940 fsType:overlay blockSize:0} overlay_0-949:{mountpoint:/var/lib/containers/storage/overlay/7a018bda374180ac826d894ca3e0653876aaa0a7d1af8695ba48e8d040523a06/merged major:0 minor:949 fsType:overlay blockSize:0} overlay_0-95:{mountpoint:/var/lib/containers/storage/overlay/204cd888d411c7ae71cd14c0e1a1a9923381b2c1f8a7efbdad1b7fa26fbb032f/merged major:0 minor:95 fsType:overlay blockSize:0} overlay_0-960:{mountpoint:/var/lib/containers/storage/overlay/cbc99970a10d106e7008fc8a54f19b2fbd961afc38adedf7f5fc9370d5a9a0b7/merged major:0 minor:960 fsType:overlay blockSize:0} overlay_0-964:{mountpoint:/var/lib/containers/storage/overlay/9b44ae0961ba4b1dae8c5479cbb82fa49c8333001d82a86cd16edf2b6de4941f/merged major:0 minor:964 fsType:overlay blockSize:0} overlay_0-967:{mountpoint:/var/lib/containers/storage/overlay/cf140993b8c49ae7ecd4020e5fe5ead7392da9220ab6c5121054ee849c31b80f/merged major:0 minor:967 fsType:overlay blockSize:0} overlay_0-969:{mountpoint:/var/lib/containers/storage/overlay/2d04299d7e9cb266d912f9957534b5fa5facbad7f2ecb3e26cd6abe27f564e86/merged major:0 minor:969 fsType:overlay blockSize:0} overlay_0-973:{mountpoint:/var/lib/containers/storage/overlay/89d99f4f37138fb753d98259251c1ca51d9da2467796e5b85bf446c9f56ca0c8/merged major:0 minor:973 fsType:overlay blockSize:0} overlay_0-975:{mountpoint:/var/lib/containers/storage/overlay/8908cdd32e353e8b42ee0df8dd7d23dfbe4cc8178fef994ef86c9cbf480294fe/merged major:0 minor:975 fsType:overlay blockSize:0} overlay_0-982:{mountpoint:/var/lib/containers/storage/overlay/5f727656d8172ed974a0a705000fb2037887abf0a59898e4b62b7aab7764bf1c/merged major:0 minor:982 fsType:overlay blockSize:0} overlay_0-984:{mountpoint:/var/lib/containers/storage/overlay/c512d1c23766c96b3b8b4c1a2b1a3009eba62d8de433f8d4fca5ecf5f1af59c9/merged major:0 minor:984 fsType:overlay blockSize:0} overlay_0-998:{mountpoint:/var/lib/containers/storage/overlay/55f6351fdd3cfd54a7b902cb9fbd3c64649e9f4f2193c3d8f01e017d8e22c3d1/merged major:0 minor:998 fsType:overlay blockSize:0} overlay_0-999:{mountpoint:/var/lib/containers/storage/overlay/7094722f2bbbe2d1f4fe2c01ae1adaa8395e45f704abba53b08a89db1ecaff58/merged major:0 minor:999 fsType:overlay blockSize:0}] Mar 19 12:14:21.603294 master-0 kubenswrapper[31830]: I0319 12:14:21.600220 31830 manager.go:217] Machine: {Timestamp:2026-03-19 12:14:21.598658107 +0000 UTC m=+0.147618831 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:42c922df40e540ac85bfc55dec643ba0 SystemUUID:42c922df-40e5-40ac-85bf-c55dec643ba0 BootID:56867831-7a09-49d8-8c88-5a315bbf793a Filesystems:[{Device:/run/containers/storage/overlay-containers/5971350293b565068e613eaa81b7b38f49914ad973eb8343f33aa9abaed290e9/userdata/shm DeviceMajor:0 DeviceMinor:716 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/var/lib/kubelet/pods/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58/volumes/kubernetes.io~secret/telemeter-client-tls DeviceMajor:0 DeviceMinor:373 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/63c12a89-1b49-4eba-8f5a-551b10d2246b/volumes/kubernetes.io~projected/kube-api-access-bst2w DeviceMajor:0 DeviceMinor:238 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-297 DeviceMajor:0 DeviceMinor:297 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-664 DeviceMajor:0 DeviceMinor:664 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c52bbbe7-bc16-432f-a471-bc561083a853/volumes/kubernetes.io~projected/kube-api-access-4ztf7 DeviceMajor:0 DeviceMinor:714 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-512 DeviceMajor:0 DeviceMinor:512 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d3541cbe-3be0-40d3-89d2-b5937b6a8f47/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:611 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1204 DeviceMajor:0 DeviceMinor:1204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cafdcda3b6318eaf62d6677cace1d3a0a0dbcf1889d817ce08bcc768e4b05288/userdata/shm DeviceMajor:0 DeviceMinor:268 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d4e38c98fa8bce43dfe4e7719d598500071054bc18ba5987f14232cdc265f588/userdata/shm DeviceMajor:0 DeviceMinor:627 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/fd40498c-f50a-408c-9a50-5d85ae666124/volumes/kubernetes.io~projected/kube-api-access-2rmw5 DeviceMajor:0 DeviceMinor:810 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:613 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-973 DeviceMajor:0 DeviceMinor:973 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1047 DeviceMajor:0 DeviceMinor:1047 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5396ef64e03af5cd8fbb98838e00f4f08020d9b7b41c5ccef26950f1e41fec60/userdata/shm DeviceMajor:0 DeviceMinor:142 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-468 DeviceMajor:0 DeviceMinor:468 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4264e82c-387f-4aa6-9ef6-b7beb61e098c/volumes/kubernetes.io~projected/kube-api-access-8wfsr DeviceMajor:0 DeviceMinor:787 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-69 DeviceMajor:0 DeviceMinor:69 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-140 DeviceMajor:0 DeviceMinor:140 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/89df1c468dcab6a092003dfdb9054af97d313c0e8ba73ec68b30b7001eab90ba/userdata/shm DeviceMajor:0 DeviceMinor:276 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/71366739cc36c89d457d62d7f1f48c8768fc7ba64a4206c9c873e79bda714a8a/userdata/shm DeviceMajor:0 DeviceMinor:634 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-299 DeviceMajor:0 DeviceMinor:299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/63407ab3b9286932d0ec228766542c2a928958a160c225ce1f7624b3e7a02447/userdata/shm DeviceMajor:0 DeviceMinor:272 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/28d0f82641cafb71075882375625371208c9e0463ead97b0053c16e9ee43470f/userdata/shm DeviceMajor:0 DeviceMinor:928 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-50 DeviceMajor:0 DeviceMinor:50 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-403 DeviceMajor:0 DeviceMinor:403 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1077 DeviceMajor:0 DeviceMinor:1077 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c2dbd8b3-0e02-4747-a166-80aa6a94b060/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:214 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-159 DeviceMajor:0 DeviceMinor:159 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/667757ee-2670-4019-ad93-156521d3c2e7/volumes/kubernetes.io~secret/samples-operator-tls DeviceMajor:0 DeviceMinor:763 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-982 DeviceMajor:0 DeviceMinor:982 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1195 DeviceMajor:0 DeviceMinor:1195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1089ea24-add9-482e-9276-e6ded12052d7/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:216 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/b80027fd-7b39-477a-a337-ff9bb08e7eeb/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:228 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/cf6b6560-1731-4fb1-b3c2-8257002842d6/volumes/kubernetes.io~projected/kube-api-access-64twc DeviceMajor:0 DeviceMinor:864 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1097 DeviceMajor:0 DeviceMinor:1097 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-902 DeviceMajor:0 DeviceMinor:902 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1131 DeviceMajor:0 DeviceMinor:1131 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1199 DeviceMajor:0 DeviceMinor:1199 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-718 DeviceMajor:0 DeviceMinor:718 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b31a84101a7e9f8571fe0abea4a9c0ac92d862991255d66df670219d8949bf71/userdata/shm DeviceMajor:0 DeviceMinor:825 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/cf6b6560-1731-4fb1-b3c2-8257002842d6/volumes/kubernetes.io~secret/cert DeviceMajor:0 
DeviceMinor:854 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-111 DeviceMajor:0 DeviceMinor:111 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e6ef8104a726a85f4fa80186a64ea3c00a2cbb1be2c668fb9e94709c10d980c0/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-285 DeviceMajor:0 DeviceMinor:285 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0dabb76ec554d4e59d0494fc5bb751b125c5d1b8f29112c6e51c360eb8f3c374/userdata/shm DeviceMajor:0 DeviceMinor:621 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-742 DeviceMajor:0 DeviceMinor:742 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1007 DeviceMajor:0 DeviceMinor:1007 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b80d357d31adb7df8c525b85923de87b5edd8dd7bfe7187f3b2e54a41c8d8b6f/userdata/shm DeviceMajor:0 DeviceMinor:590 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-697 DeviceMajor:0 DeviceMinor:697 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-877 DeviceMajor:0 DeviceMinor:877 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/732ea3fa30562cbb548cbd63878cf98bf16844c3fd2ba6668a55873990319c2d/userdata/shm DeviceMajor:0 DeviceMinor:287 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-356 DeviceMajor:0 DeviceMinor:356 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f236a5ab-b400-46fc-94ee-1fff476d6458/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:592 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-711 DeviceMajor:0 DeviceMinor:711 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1168 DeviceMajor:0 DeviceMinor:1168 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/aef8e03f-0363-4e13-b7ca-4fa871d77c62/volumes/kubernetes.io~projected/kube-api-access-x252z DeviceMajor:0 DeviceMinor:247 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/44469a78-9300-4260-89e9-ea939de1357b/volumes/kubernetes.io~projected/kube-api-access-t7zpw DeviceMajor:0 DeviceMinor:811 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-301 DeviceMajor:0 DeviceMinor:301 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b5d29a971edd0c0a90849227d71d2a1720436090bfc1809b33b6d52cfd6a7ffe/userdata/shm DeviceMajor:0 DeviceMinor:377 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/2b87f8c3-1898-46dd-bcac-e8f22f31e812/volumes/kubernetes.io~projected/kube-api-access-kbddm DeviceMajor:0 DeviceMinor:689 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-975 DeviceMajor:0 DeviceMinor:975 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0c17be488f74c65475492714ea2841534c84f72d155a2152b6dab678c10b46b6/userdata/shm DeviceMajor:0 DeviceMinor:455 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1001 DeviceMajor:0 DeviceMinor:1001 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bb1000ab-4419-43ce-b1b7-8f43413e017f/volumes/kubernetes.io~projected/kube-api-access-6hk8l DeviceMajor:0 DeviceMinor:1059 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/aef8e03f-0363-4e13-b7ca-4fa871d77c62/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:222 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/4264e82c-387f-4aa6-9ef6-b7beb61e098c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:731 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3e9cb8897ccc8cd32e99de4908536f646397f9314e55ffb6dadd385187e9f1b0/userdata/shm DeviceMajor:0 DeviceMinor:841 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/06f67c28-34fd-4356-92f0-edd0986ad34e/volumes/kubernetes.io~projected/kube-api-access-bdpj4 DeviceMajor:0 DeviceMinor:278 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/efd1c78ff9997efb11562e8d2fb6b9b151d43775e34fa6be423195823f01520e/userdata/shm DeviceMajor:0 DeviceMinor:809 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/d975e831-7348-41b9-9622-f4a503674c38/volumes/kubernetes.io~projected/kube-api-access-86r6z DeviceMajor:0 DeviceMinor:336 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3756314b5f9faad34dff96625b9ef78c27d73db523c30a3f82a5ea254d67fd72/userdata/shm DeviceMajor:0 DeviceMinor:895 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/a9d191d1-631d-4091-af8b-382283c18a5a/volumes/kubernetes.io~secret/node-exporter-tls DeviceMajor:0 DeviceMinor:1056 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-355 DeviceMajor:0 DeviceMinor:355 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-446 DeviceMajor:0 DeviceMinor:446 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f37f04bee18930433857e4757f6c0b0cea46719c10be7aeeafbea9a7d2df628f/userdata/shm DeviceMajor:0 DeviceMinor:456 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-823 DeviceMajor:0 DeviceMinor:823 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/beb562de-402b-4d9f-b5ed-090b60847a95/volumes/kubernetes.io~secret/package-server-manager-serving-cert DeviceMajor:0 DeviceMinor:616 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/284768b8-9d70-4cf7-bace-8adc6b587186/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:98 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/var/lib/kubelet/pods/9702fc8c-4fe0-413b-b2d4-db23021d42b8/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:223 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6/volumes/kubernetes.io~projected/kube-api-access-jnd9c DeviceMajor:0 DeviceMinor:250 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9a72b8977a8a7f6da552724471a9890da5b8ee5f4a6fe88fb55492ca16eb4221/userdata/shm DeviceMajor:0 DeviceMinor:441 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/398bcaca-1bea-4633-a78f-717e3d015ddd/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:619 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-636 DeviceMajor:0 DeviceMinor:636 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3a6b082a-649b-43f6-8e24-cf222873fe39/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:827 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/284768b8-9d70-4cf7-bace-8adc6b587186/volumes/kubernetes.io~projected/kube-api-access-8p6vn DeviceMajor:0 DeviceMinor:104 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/9702fc8c-4fe0-413b-b2d4-db23021d42b8/volumes/kubernetes.io~projected/kube-api-access-tpdts DeviceMajor:0 DeviceMinor:251 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/919daf8d-763a-44bc-8916-86b425a27cbd/volumes/kubernetes.io~secret/catalogserver-certs DeviceMajor:0 DeviceMinor:384 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1da3868b3838b62f3e5d20f215a32847d5bb12874480e83fc7036c9466a82c5e/userdata/shm DeviceMajor:0 DeviceMinor:495 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:209 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/7b2ecb08-a0f9-4127-967c-7087dea4c0f6/volumes/kubernetes.io~projected/kube-api-access-dxw6t DeviceMajor:0 DeviceMinor:863 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/81e5dd60f8e8f398fbc94edc5ee4b7a7c46081fef1fa9b130b775ed3aebea712/userdata/shm DeviceMajor:0 DeviceMinor:840 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-78 DeviceMajor:0 DeviceMinor:78 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-132 DeviceMajor:0 DeviceMinor:132 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4cdef734b9abebf7ad3957d15cc0c1c6f03e77f6869e579c27076c986f6c0a2c/userdata/shm DeviceMajor:0 DeviceMinor:958 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/bf226d89-450d-4876-a113-345632b94ee9/volumes/kubernetes.io~projected/kube-api-access-wcxqj DeviceMajor:0 DeviceMinor:125 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/9d2db220-4d5b-4819-a910-b186e1e9fb3e/volumes/kubernetes.io~projected/kube-api-access-wshb2 DeviceMajor:0 DeviceMinor:152 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/3f20a730c4d5f1f1345d78c2bd60c5b238848ecf855493b53e0f599fc51845ac/userdata/shm DeviceMajor:0 DeviceMinor:497 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-54 DeviceMajor:0 DeviceMinor:54 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-960 DeviceMajor:0 DeviceMinor:960 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1129 DeviceMajor:0 DeviceMinor:1129 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f08c5930-44f0-48e4-80dd-2563f2733b2f/volumes/kubernetes.io~projected/kube-api-access-h84l9 DeviceMajor:0 DeviceMinor:230 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-949 DeviceMajor:0 DeviceMinor:949 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6db3fcbe-0dbf-464f-944b-62427173c8d3/volumes/kubernetes.io~projected/kube-api-access-lllml DeviceMajor:0 DeviceMinor:1117 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:916 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/b80027fd-7b39-477a-a337-ff9bb08e7eeb/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:617 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/44469a78-9300-4260-89e9-ea939de1357b/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls DeviceMajor:0 DeviceMinor:806 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-479 DeviceMajor:0 DeviceMinor:479 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1113 DeviceMajor:0 DeviceMinor:1113 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fe4ada978b72bf0ece9f4bc3e07bb79fded8b5a5f73d4c83d93ade89f41d9473/userdata/shm DeviceMajor:0 DeviceMinor:637 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-865 DeviceMajor:0 DeviceMinor:865 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-788 DeviceMajor:0 DeviceMinor:788 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-791 DeviceMajor:0 DeviceMinor:791 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/kubelet/pods/86884445-e29b-492b-8810-b63b938b9170/volumes/kubernetes.io~projected/kube-api-access-5kcbw DeviceMajor:0 DeviceMinor:953 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-309 DeviceMajor:0 DeviceMinor:309 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1072 DeviceMajor:0 DeviceMinor:1072 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-519 DeviceMajor:0 DeviceMinor:519 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1089ea24-add9-482e-9276-e6ded12052d7/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:236 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/var/lib/kubelet/pods/979ba8cc-5a7b-4188-bf9e-c22d810888e9/volumes/kubernetes.io~projected/kube-api-access-28ljd DeviceMajor:0 DeviceMinor:491 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1063 DeviceMajor:0 DeviceMinor:1063 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1212 DeviceMajor:0 DeviceMinor:1212 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e559e487-18b0-4622-92fa-d06e7397b312/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:557 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7f1b2390d179c87af7aa642ae5d602040372528fd159e31c142302ed10484ef5/userdata/shm DeviceMajor:0 DeviceMinor:632 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6d678386c9d8ee3ccaf97160a5d644fc4f5d17544c6fb3d29d199b1c5b6b5add/userdata/shm DeviceMajor:0 DeviceMinor:546 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-999 DeviceMajor:0 DeviceMinor:999 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-311 DeviceMajor:0 DeviceMinor:311 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-95 DeviceMajor:0 DeviceMinor:95 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9d2db220-4d5b-4819-a910-b186e1e9fb3e/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-573 DeviceMajor:0 DeviceMinor:573 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/882fd952-1914-47be-96bf-cac6341ca877/volumes/kubernetes.io~secret/tls-certificates DeviceMajor:0 DeviceMinor:885 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/0ed7eded-1e67-49ad-9777-c2ed1e006ce3/volumes/kubernetes.io~projected/kube-api-access-jnp9l DeviceMajor:0 DeviceMinor:115 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/979ba8cc-5a7b-4188-bf9e-c22d810888e9/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:487 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/13503fef-09b2-4dbe-9537-a5b361e7b591/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:583 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/f05dca6c-7626-4970-a869-4208ff5605a2/volumes/kubernetes.io~projected/kube-api-access-5fz85 DeviceMajor:0 DeviceMinor:709 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/d3017b5e-178e-49de-89d2-817a18398203/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:220 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/7241bf11-192e-47db-9d80-2324938ed34c/volumes/kubernetes.io~projected/kube-api-access-s5mkm DeviceMajor:0 DeviceMinor:231 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-293 DeviceMajor:0 DeviceMinor:293 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-466 DeviceMajor:0 DeviceMinor:466 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/de6a10425187cbc938b44bf02e39e9ceb0c27562adc9c491a8cdb29f071cbb62/userdata/shm 
DeviceMajor:0 DeviceMinor:458 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/37064f92bb167f0d220b06c690c09b197d0f10b42a8e406aad7f8d634bcea6be/userdata/shm DeviceMajor:0 DeviceMinor:643 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-660 DeviceMajor:0 DeviceMinor:660 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/593c680a830380526e444778c9d64ee368aed54b01a56b5393d8626c11e75704/userdata/shm DeviceMajor:0 DeviceMinor:819 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1193 DeviceMajor:0 DeviceMinor:1193 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-713 DeviceMajor:0 DeviceMinor:713 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8b160a1a52470caaf8eb5167c80599083e3f1829f2580cc4817859648d8bb802/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-662 DeviceMajor:0 DeviceMinor:662 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2b87f8c3-1898-46dd-bcac-e8f22f31e812/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:688 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/ad327a59-7879-4215-bb95-3f2be64cb97f/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert DeviceMajor:0 DeviceMinor:736 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-630 DeviceMajor:0 DeviceMinor:630 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-425 DeviceMajor:0 DeviceMinor:425 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/13503fef-09b2-4dbe-9537-a5b361e7b591/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:582 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-606 DeviceMajor:0 DeviceMinor:606 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-477 DeviceMajor:0 DeviceMinor:477 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6db3fcbe-0dbf-464f-944b-62427173c8d3/volumes/kubernetes.io~secret/client-ca-bundle DeviceMajor:0 DeviceMinor:1114 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-654 DeviceMajor:0 DeviceMinor:654 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-998 DeviceMajor:0 DeviceMinor:998 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1197 DeviceMajor:0 DeviceMinor:1197 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ab54833d-e57b-479d-b171-68155f6566f1/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:432 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-674 DeviceMajor:0 DeviceMinor:674 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0e25d4ed-4ad0-4706-ad25-7822c9a1d07e/volumes/kubernetes.io~secret/certs DeviceMajor:0 DeviceMinor:913 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/6db3fcbe-0dbf-464f-944b-62427173c8d3/volumes/kubernetes.io~secret/secret-metrics-client-certs DeviceMajor:0 DeviceMinor:1116 Capacity:32475525120 Type:vfs Inodes:4108169 
HasInodes:true} {Device:/var/lib/kubelet/pods/fe245927-c937-4ec7-ab83-4900bade72cf/volumes/kubernetes.io~projected/kube-api-access-s4hsp DeviceMajor:0 DeviceMinor:103 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2ca9e696adafe66b3ba3814f26ea9bb916ca5c1804785c0e742201ad82ee9c18/userdata/shm DeviceMajor:0 DeviceMinor:274 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/e559e487-18b0-4622-92fa-d06e7397b312/volumes/kubernetes.io~projected/kube-api-access-c4p7s DeviceMajor:0 DeviceMinor:560 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/401877adce8e78dfdd3ac293a53a75da77fa4a3177086a087aa6915ac4d36604/userdata/shm DeviceMajor:0 DeviceMinor:113 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/82b98dca-59f9-42be-94ca-4a2a2b6fea0f/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:244 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-842 DeviceMajor:0 DeviceMinor:842 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6863b35c-44ac-4333-97b5-e8e38b440a20/volumes/kubernetes.io~secret/signing-key DeviceMajor:0 DeviceMinor:401 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-470 DeviceMajor:0 DeviceMinor:470 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/86884445-e29b-492b-8810-b63b938b9170/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:952 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d29fd7441baad9596ad5ac5569da64fe277e18af3046f4e5da7f49044fe8fd7f/userdata/shm DeviceMajor:0 DeviceMinor:485 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/91112ce6-4f9d-44c1-a4e7-fea126554bcf/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:886 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/330def8aa1845ebd7a95a673279619d604275f079a7efa3f16b2060b0fd2594e/userdata/shm DeviceMajor:0 DeviceMinor:1064 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1092 DeviceMajor:0 DeviceMinor:1092 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-576 DeviceMajor:0 DeviceMinor:576 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-370 DeviceMajor:0 DeviceMinor:370 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/be4349fa-5c67-4135-80a7-b8a694553662/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:816 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-321 DeviceMajor:0 DeviceMinor:321 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-409 DeviceMajor:0 DeviceMinor:409 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1051 DeviceMajor:0 DeviceMinor:1051 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/de39c80c-acfa-4bc1-a844-95b170169b44/volumes/kubernetes.io~secret/openshift-state-metrics-tls DeviceMajor:0 
DeviceMinor:1074 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/d3541cbe-3be0-40d3-89d2-b5937b6a8f47/volumes/kubernetes.io~projected/kube-api-access-pv6bc DeviceMajor:0 DeviceMinor:239 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-417 DeviceMajor:0 DeviceMinor:417 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-727 DeviceMajor:0 DeviceMinor:727 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fd40498c-f50a-408c-9a50-5d85ae666124/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:799 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/be4349fa-5c67-4135-80a7-b8a694553662/volumes/kubernetes.io~projected/kube-api-access-jbzj2 DeviceMajor:0 DeviceMinor:818 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/0e25d4ed-4ad0-4706-ad25-7822c9a1d07e/volumes/kubernetes.io~secret/node-bootstrap-token DeviceMajor:0 DeviceMinor:908 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1122 DeviceMajor:0 DeviceMinor:1122 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8414b6b0-ee16-47a5-982b-ee58b136cfcf/volumes/kubernetes.io~projected/kube-api-access-864rg DeviceMajor:0 DeviceMinor:139 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/f08c5930-44f0-48e4-80dd-2563f2733b2f/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:218 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/2151eb84-177e-459c-be71-f48465323ac2/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:219 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/37b898c3ae24210a5aa4f86ab00e075925f0f6e4fde94632405ba19b0f9e0d1d/userdata/shm DeviceMajor:0 DeviceMinor:254 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/ad327a59-7879-4215-bb95-3f2be64cb97f/volumes/kubernetes.io~projected/kube-api-access-9fgj5 DeviceMajor:0 DeviceMinor:786 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-353 DeviceMajor:0 DeviceMinor:353 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-563 DeviceMajor:0 DeviceMinor:563 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3a6b082a-649b-43f6-8e24-cf222873fe39/volumes/kubernetes.io~projected/kube-api-access-srbt4 DeviceMajor:0 DeviceMinor:833 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-917 DeviceMajor:0 DeviceMinor:917 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/20538e6325cc6dc9adb3e30dce1ce797ed61d07679d7f2cd71ef1bf8c18874ea/userdata/shm DeviceMajor:0 DeviceMinor:723 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/a9d191d1-631d-4091-af8b-382283c18a5a/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1053 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1166 DeviceMajor:0 DeviceMinor:1166 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b0f5939c-48b1-4d6c-9712-9128a78d603b/volumes/kubernetes.io~projected/kube-api-access-6tqdb DeviceMajor:0 DeviceMinor:237 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-281 DeviceMajor:0 DeviceMinor:281 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/661b8957-a890-4032-9e57-45e2e0b35249/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:215 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/9ed2dbd1-aec4-4009-917a-933533912ab5/volumes/kubernetes.io~projected/kube-api-access-gsk9d DeviceMajor:0 DeviceMinor:240 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-279 DeviceMajor:0 DeviceMinor:279 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-295 DeviceMajor:0 DeviceMinor:295 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d6af7e6099bbf70f75032e24a3ecb1fbb8bf546e42f6f7c74e6f9a42396249e8/userdata/shm DeviceMajor:0 DeviceMinor:261 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-463 DeviceMajor:0 DeviceMinor:463 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1c2a33ba-76d0-4b81-a41d-9da16fd46209/volumes/kubernetes.io~projected/kube-api-access-k8n22 DeviceMajor:0 DeviceMinor:1163 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7da5b8963c0c07bf615297cea6af913ce19795e600e076c4d580e948922fa865/userdata/shm DeviceMajor:0 DeviceMinor:256 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-428 DeviceMajor:0 DeviceMinor:428 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-765 DeviceMajor:0 DeviceMinor:765 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e95d63ff24c648e1680e38f824e92cec0fcea7bc20cdade312b98b0468aad916/userdata/shm DeviceMajor:0 DeviceMinor:46 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-500 DeviceMajor:0 DeviceMinor:500 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/13503fef-09b2-4dbe-9537-a5b361e7b591/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:581 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-502 DeviceMajor:0 DeviceMinor:502 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b7dd57861a640edcd653a07f56af27e128f51a36c5d7dfe7a1115c64bac8ba80/userdata/shm DeviceMajor:0 DeviceMinor:1075 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f/volumes/kubernetes.io~projected/kube-api-access-h5n89 DeviceMajor:0 DeviceMinor:226 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/13503fef-09b2-4dbe-9537-a5b361e7b591/volumes/kubernetes.io~projected/kube-api-access-mgdlc DeviceMajor:0 DeviceMinor:584 
Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-906 DeviceMajor:0 DeviceMinor:906 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bf281270c03af27a5f2d97eebdf0d4e36fa1955f5f7ca7b9f757a4d7f448ea9a/userdata/shm DeviceMajor:0 DeviceMinor:1068 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-77 DeviceMajor:0 DeviceMinor:77 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fcd57352498da84e6fbc9969ab5176b5b32433301a69ada5c5c0571371a536da/userdata/shm DeviceMajor:0 DeviceMinor:368 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-385 DeviceMajor:0 DeviceMinor:385 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-800 DeviceMajor:0 DeviceMinor:800 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-892 DeviceMajor:0 DeviceMinor:892 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/86884445-e29b-492b-8810-b63b938b9170/volumes/kubernetes.io~secret/prometheus-operator-tls DeviceMajor:0 DeviceMinor:951 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-108 DeviceMajor:0 DeviceMinor:108 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-199 DeviceMajor:0 DeviceMinor:199 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/19de6601-10d4-4112-a21f-0398d2b160d1/volumes/kubernetes.io~projected/kube-api-access-6xpc2 DeviceMajor:0 DeviceMinor:249 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-658 DeviceMajor:0 DeviceMinor:658 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/de69f46fe324ea455cfa701ce77df2ff17c8f9f38f189dbc84ded004836d5af0/userdata/shm DeviceMajor:0 DeviceMinor:109 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/667757ee-2670-4019-ad93-156521d3c2e7/volumes/kubernetes.io~projected/kube-api-access-rc94p DeviceMajor:0 DeviceMinor:790 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/9702fc8c-4fe0-413b-b2d4-db23021d42b8/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:221 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/e559e487-18b0-4622-92fa-d06e7397b312/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:559 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/7b2ecb08-a0f9-4127-967c-7087dea4c0f6/volumes/kubernetes.io~secret/machine-api-operator-tls DeviceMajor:0 DeviceMinor:848 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-692 DeviceMajor:0 DeviceMinor:692 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1162 DeviceMajor:0 DeviceMinor:1162 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1bedd36b2e748d7ffe9c8b9ed3a8c9c7331d2765980332a3cebdddee8a321573/userdata/shm DeviceMajor:0 DeviceMinor:725 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/9ed2dbd1-aec4-4009-917a-933533912ab5/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 
DeviceMinor:224 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/2151eb84-177e-459c-be71-f48465323ac2/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:234 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-506 DeviceMajor:0 DeviceMinor:506 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/311b8bab-6cee-406d-8e0e-5b18a743d5fa/volumes/kubernetes.io~projected/kube-api-access-hjfpq DeviceMajor:0 DeviceMinor:868 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/84ed2f0d88ece07075010bba0c167b7f10255b8043408ff95f1958cee576a4a0/userdata/shm DeviceMajor:0 DeviceMinor:264 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/4800b72f-7e54-4069-b771-87fb459eeb78/volumes/kubernetes.io~projected/kube-api-access-4lkzv DeviceMajor:0 DeviceMinor:605 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/3661faaa-2c9d-4fcd-a41f-71aa71a2e464/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:695 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58/volumes/kubernetes.io~secret/secret-telemeter-client DeviceMajor:0 DeviceMinor:362 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b9477b33d342b45771f3690cbbe221e1438e0d225ffd950edeb419c6de979401/userdata/shm DeviceMajor:0 DeviceMinor:106 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-71 DeviceMajor:0 DeviceMinor:71 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-404 DeviceMajor:0 DeviceMinor:404 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/36e5fec9-7fb5-4460-8bb4-4b9e36fae978/volumes/kubernetes.io~projected/kube-api-access-z9hck DeviceMajor:0 DeviceMinor:347 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/16106b77f2e7c1585811668327c4be2d10fe7576f2ed79d2b198fca95be86d2c/userdata/shm DeviceMajor:0 DeviceMinor:252 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-899 DeviceMajor:0 DeviceMinor:899 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1003 DeviceMajor:0 DeviceMinor:1003 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bb1000ab-4419-43ce-b1b7-8f43413e017f/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1057 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/661b8957-a890-4032-9e57-45e2e0b35249/volumes/kubernetes.io~projected/kube-api-access-8hq8f DeviceMajor:0 DeviceMinor:227 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-537 DeviceMajor:0 DeviceMinor:537 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b0f5939c-48b1-4d6c-9712-9128a78d603b/volumes/kubernetes.io~secret/marketplace-operator-metrics DeviceMajor:0 DeviceMinor:612 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-969 DeviceMajor:0 DeviceMinor:969 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/1c2a33ba-76d0-4b81-a41d-9da16fd46209/volumes/kubernetes.io~secret/webhook-certs DeviceMajor:0 DeviceMinor:1156 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/5238840f-3bef-43ad-ae68-ac187f073019/volumes/kubernetes.io~projected/kube-api-access-vxdts DeviceMajor:0 DeviceMinor:493 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/3661faaa-2c9d-4fcd-a41f-71aa71a2e464/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:694 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-844 DeviceMajor:0 DeviceMinor:844 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6db3fcbe-0dbf-464f-944b-62427173c8d3/volumes/kubernetes.io~secret/secret-metrics-server-tls DeviceMajor:0 DeviceMinor:1115 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/311b8bab-6cee-406d-8e0e-5b18a743d5fa/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:867 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-883 DeviceMajor:0 DeviceMinor:883 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-967 DeviceMajor:0 DeviceMinor:967 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/da9becfb-a504-4ef7-92ed-cd2db439d5db/volumes/kubernetes.io~projected/kube-api-access-lvzcn DeviceMajor:0 DeviceMinor:832 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/0e25d4ed-4ad0-4706-ad25-7822c9a1d07e/volumes/kubernetes.io~projected/kube-api-access-r9k5t DeviceMajor:0 DeviceMinor:914 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-424 DeviceMajor:0 DeviceMinor:424 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9d2db220-4d5b-4819-a910-b186e1e9fb3e/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:129 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/629e57f409989b86433406dbc0486de42ee1d2a4a26b2835682900a861605e8f/userdata/shm DeviceMajor:0 DeviceMinor:379 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-167 DeviceMajor:0 DeviceMinor:167 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1170 DeviceMajor:0 DeviceMinor:1170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7044a7b3-4fac-40af-a31c-054a1a1db26b/volumes/kubernetes.io~projected/kube-api-access-shfs6 DeviceMajor:0 DeviceMinor:105 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/63c12a89-1b49-4eba-8f5a-551b10d2246b/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:443 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-70 DeviceMajor:0 DeviceMinor:70 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1155 DeviceMajor:0 DeviceMinor:1155 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-43 DeviceMajor:0 DeviceMinor:43 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}
{Device:/var/lib/kubelet/pods/91112ce6-4f9d-44c1-a4e7-fea126554bcf/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:879 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/27ccdb8fe17b3c5cb9acf1759072b6837f5312b119b69e4b34ee0c362bd4382c/userdata/shm DeviceMajor:0 DeviceMinor:89 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-126 DeviceMajor:0 DeviceMinor:126 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-964 DeviceMajor:0 DeviceMinor:964 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-931 DeviceMajor:0 DeviceMinor:931 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-327 DeviceMajor:0 DeviceMinor:327 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6b418b5a6ab7d2f0fbb7cd5733cda224a66315648fe46c18f09905494c67309d/userdata/shm DeviceMajor:0 DeviceMinor:812 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/5238840f-3bef-43ad-ae68-ac187f073019/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:494 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-924 DeviceMajor:0 DeviceMinor:924 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1141 DeviceMajor:0 DeviceMinor:1141 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c2dbd8b3-0e02-4747-a166-80aa6a94b060/volumes/kubernetes.io~projected/kube-api-access-npc2t DeviceMajor:0 DeviceMinor:243 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-783 DeviceMajor:0 DeviceMinor:783 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-666 DeviceMajor:0 DeviceMinor:666 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a7747954-a222-4809-8656-818203b55ee8/volumes/kubernetes.io~projected/kube-api-access-khv2z DeviceMajor:0 DeviceMinor:225 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/657e67ca992e83dd97b428ec2664479ed04815d8dada9aa63b0bd9e585d0e3d7/userdata/shm DeviceMajor:0 DeviceMinor:258 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-288 DeviceMajor:0 DeviceMinor:288 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-670 DeviceMajor:0 DeviceMinor:670 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1018 DeviceMajor:0 DeviceMinor:1018 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1086 DeviceMajor:0 DeviceMinor:1086 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c364dba2c743db6a6431b4c04a672e744dc16c7056590a2f4b28394bd78f6fc7/userdata/shm DeviceMajor:0 DeviceMinor:1164 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-76 DeviceMajor:0 DeviceMinor:76 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/91112ce6-4f9d-44c1-a4e7-fea126554bcf/volumes/kubernetes.io~projected/kube-api-access-8hrkb DeviceMajor:0 DeviceMinor:888 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/f40dd28398740e1b8b665d870680e26bbfe5f4e3541ded3a1a95c827cd013960/userdata/shm DeviceMajor:0 DeviceMinor:944 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/06df1b1b-154e-46f9-aee0-79a137c6c928/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:213 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/63c12a89-1b49-4eba-8f5a-551b10d2246b/volumes/kubernetes.io~secret/node-tuning-operator-tls DeviceMajor:0 DeviceMinor:453 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/24de2a964d2fa28c5bff828df5f742d99916541dc1152f4dcdf6f4231784eba1/userdata/shm DeviceMajor:0 DeviceMinor:270 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4d126177d3103b9726cb0abe507c291aeac9fb33c980d607daaa2352bbce8e96/userdata/shm DeviceMajor:0 DeviceMinor:629 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6863b35c-44ac-4333-97b5-e8e38b440a20/volumes/kubernetes.io~projected/kube-api-access-ddl8k DeviceMajor:0 DeviceMinor:402 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-359 DeviceMajor:0 DeviceMinor:359 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1176 DeviceMajor:0 DeviceMinor:1176 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-831 DeviceMajor:0 DeviceMinor:831 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/27514f785ebf129e635b61742d2a50f4b4590a69d29ba2f3c58ee430e3465119/userdata/shm DeviceMajor:0 DeviceMinor:897 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1107 DeviceMajor:0 DeviceMinor:1107 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-558 DeviceMajor:0 DeviceMinor:558 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1fc61313284938071ce89eb1211d46af435fab3cccf3e32e3b4afcbf9419655a/userdata/shm DeviceMajor:0 DeviceMinor:266 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-375 DeviceMajor:0 DeviceMinor:375 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-672 DeviceMajor:0 DeviceMinor:672 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f381b85f9130b76eda5dc167d27eb69ac9b6f2de032bdb231577387d3f19b35d/userdata/shm DeviceMajor:0 DeviceMinor:97 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/beb562de-402b-4d9f-b5ed-090b60847a95/volumes/kubernetes.io~projected/kube-api-access-9mr6d DeviceMajor:0 DeviceMinor:229 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-575 DeviceMajor:0 DeviceMinor:575 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-406 DeviceMajor:0 DeviceMinor:406 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1082 DeviceMajor:0 DeviceMinor:1082 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/7383e647-63b0-452d-a39b-02ad27a9b053/volumes/kubernetes.io~projected/kube-api-access-2xz8h DeviceMajor:0 DeviceMinor:588 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-984 DeviceMajor:0 DeviceMinor:984 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ca37f4d8890aea843e2dd74f0a3fbd57188dcf29ebff0755845d7039996af375/userdata/shm DeviceMajor:0 DeviceMinor:640 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-668 DeviceMajor:0 DeviceMinor:668 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-733 DeviceMajor:0 DeviceMinor:733 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-510 DeviceMajor:0 DeviceMinor:510 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-602 DeviceMajor:0 DeviceMinor:602 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b80027fd-7b39-477a-a337-ff9bb08e7eeb/volumes/kubernetes.io~projected/kube-api-access-hs4jf DeviceMajor:0 DeviceMinor:242 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/82b98dca-59f9-42be-94ca-4a2a2b6fea0f/volumes/kubernetes.io~secret/image-registry-operator-tls DeviceMajor:0 DeviceMinor:454 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:615 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-283 DeviceMajor:0 DeviceMinor:283 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1015 DeviceMajor:0 DeviceMinor:1015 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/be807ecce9aec0f7633eaae2ed5203cb82f342ed739dc26f098d55766e987b78/userdata/shm DeviceMajor:0 DeviceMinor:1123 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/f236a5ab-b400-46fc-94ee-1fff476d6458/volumes/kubernetes.io~projected/kube-api-access-ps4k8 DeviceMajor:0 DeviceMinor:593 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1005 DeviceMajor:0 DeviceMinor:1005 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-589 DeviceMajor:0 DeviceMinor:589 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-313 DeviceMajor:0 DeviceMinor:313 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0f97d998-530c-4d9d-a030-ca1d9d2d4490/volumes/kubernetes.io~projected/kube-api-access-zntzt DeviceMajor:0 DeviceMinor:241 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-387 DeviceMajor:0 DeviceMinor:387 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/979ba8cc-5a7b-4188-bf9e-c22d810888e9/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:488 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-623 DeviceMajor:0 DeviceMinor:623 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04/volumes/kubernetes.io~projected/kube-api-access-hwfg5 DeviceMajor:0 
DeviceMinor:235 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/58d1369a13582afcb1d55c539b0bf53f7dd57cd88bda12b4a87bc5a6b8e84cbc/userdata/shm DeviceMajor:0 DeviceMinor:260 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-704 DeviceMajor:0 DeviceMinor:704 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fa112877e7809f3added7e93999d2d52089456dfb6885e6498c6e53ce0c53ded/userdata/shm DeviceMajor:0 DeviceMinor:828 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-652 DeviceMajor:0 DeviceMinor:652 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58/volumes/kubernetes.io~secret/federate-client-tls DeviceMajor:0 DeviceMinor:749 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/da9becfb-a504-4ef7-92ed-cd2db439d5db/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:826 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/33355c55e294585ceaa17697d7356477785bdaba3177d324b39df2dc095c31c6/userdata/shm DeviceMajor:0 DeviceMinor:635 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1011 DeviceMajor:0 DeviceMinor:1011 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/398bcaca-1bea-4633-a78f-717e3d015ddd/volumes/kubernetes.io~projected/kube-api-access-fhqhb DeviceMajor:0 DeviceMinor:123 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/099f1cf5ddb64458132dd6fe55ba3878ce79ff183de73a0ef9c8fa9295853b5c/userdata/shm DeviceMajor:0 DeviceMinor:638 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e1150baa290a3898ec8c1b3b3de0ed9b6af20668ee360ed4984852f84f153bb0/userdata/shm DeviceMajor:0 DeviceMinor:518 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/0316c374-f812-4e0a-8645-727e8372f16e/volumes/kubernetes.io~projected/kube-api-access-tvvk8 DeviceMajor:0 DeviceMinor:890 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-738 DeviceMajor:0 DeviceMinor:738 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/de39c80c-acfa-4bc1-a844-95b170169b44/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:921 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2ea49210674ab53911da00e8c007432ee001baf1726a3c4349603d4b14736471/userdata/shm DeviceMajor:0 DeviceMinor:64 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-744 DeviceMajor:0 DeviceMinor:744 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/ab54833d-e57b-479d-b171-68155f6566f1/volumes/kubernetes.io~projected/kube-api-access-gl6d7 DeviceMajor:0 DeviceMinor:232 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-625 DeviceMajor:0 DeviceMinor:625 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/91112ce6-4f9d-44c1-a4e7-fea126554bcf/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:887 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/954ede16a95baa0dd18c714681dfe7d875a3e3012701640009a8298afe790b4b/userdata/shm DeviceMajor:0 DeviceMinor:925 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/0f97d998-530c-4d9d-a030-ca1d9d2d4490/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert DeviceMajor:0 DeviceMinor:217 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1084 DeviceMajor:0 DeviceMinor:1084 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-340 DeviceMajor:0 DeviceMinor:340 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ea807ec97b5b85d57bfd1e0adda9e020d25ab20667140eb00ae9510d72b84498/userdata/shm DeviceMajor:0 DeviceMinor:860 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/ee3529ac-6135-438b-9334-40c63c1fbd3d/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:859 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-307 DeviceMajor:0 DeviceMinor:307 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/919daf8d-763a-44bc-8916-86b425a27cbd/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:490 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/919daf8d-763a-44bc-8916-86b425a27cbd/volumes/kubernetes.io~projected/kube-api-access-8brwr DeviceMajor:0 DeviceMinor:492 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-533 DeviceMajor:0 DeviceMinor:533 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-729 DeviceMajor:0 DeviceMinor:729 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-869 DeviceMajor:0 DeviceMinor:869 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-530 DeviceMajor:0 DeviceMinor:530 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-291 DeviceMajor:0 DeviceMinor:291 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/616dbb32-6b65-4e44-a217-6b1be2844cc9/volumes/kubernetes.io~projected/kube-api-access-7g6zz DeviceMajor:0 DeviceMinor:380 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58/volumes/kubernetes.io~projected/kube-api-access-vm9zf DeviceMajor:0 DeviceMinor:923 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/8414b6b0-ee16-47a5-982b-ee58b136cfcf/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:138 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-568 DeviceMajor:0 DeviceMinor:568 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-904 DeviceMajor:0 DeviceMinor:904 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bb1000ab-4419-43ce-b1b7-8f43413e017f/volumes/kubernetes.io~secret/kube-state-metrics-tls DeviceMajor:0 DeviceMinor:1058 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/var/lib/kubelet/pods/06df1b1b-154e-46f9-aee0-79a137c6c928/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:233 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-807 DeviceMajor:0 DeviceMinor:807 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-161 DeviceMajor:0 DeviceMinor:161 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ef65cfa8e397b0d9fb626793071be85235d45f48e759141f7e306d3f038d0b06/userdata/shm DeviceMajor:0 DeviceMinor:599 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/051890867de8ff413fdae42afc2ad5867d80bb4189ee315587bdfb2254762fa5/userdata/shm DeviceMajor:0 DeviceMinor:339 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-118 DeviceMajor:0 DeviceMinor:118 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-317 DeviceMajor:0 DeviceMinor:317 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-381 DeviceMajor:0 DeviceMinor:381 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5a539aaaf2dd4db935a04de17d4edc2ce062fa7a5a29f257bfd8c8188731698f/userdata/shm DeviceMajor:0 DeviceMinor:415 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/19de6601-10d4-4112-a21f-0398d2b160d1/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls DeviceMajor:0 DeviceMinor:620 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/be4349fa-5c67-4135-80a7-b8a694553662/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:817 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/ee3529ac-6135-438b-9334-40c63c1fbd3d/volumes/kubernetes.io~projected/kube-api-access-c8hpg DeviceMajor:0 DeviceMinor:862 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/979ba8cc-5a7b-4188-bf9e-c22d810888e9/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:489 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-504 DeviceMajor:0 DeviceMinor:504 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7241bf11-192e-47db-9d80-2324938ed34c/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls DeviceMajor:0 DeviceMinor:614 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-871 DeviceMajor:0 DeviceMinor:871 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/06387b0da219a3280f534fa3f57451c18534c297d33ad18d4503e53efc4f6f2f/userdata/shm DeviceMajor:0 DeviceMinor:153 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/944eac68-e72b-4aed-b5dc-d7d9703178a3/volumes/kubernetes.io~projected/kube-api-access-m2mdn DeviceMajor:0 DeviceMinor:318 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-389 DeviceMajor:0 DeviceMinor:389 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-940 DeviceMajor:0 DeviceMinor:940 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1009 DeviceMajor:0 DeviceMinor:1009 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/b8ab4adb571de7e6d61b60e1752c759892824492154b5310933386ea2f807133/userdata/shm DeviceMajor:0 DeviceMinor:927 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/a9d191d1-631d-4091-af8b-382283c18a5a/volumes/kubernetes.io~projected/kube-api-access-cq9p4 DeviceMajor:0 DeviceMinor:1060 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/36e5fec9-7fb5-4460-8bb4-4b9e36fae978/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:342 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/82b98dca-59f9-42be-94ca-4a2a2b6fea0f/volumes/kubernetes.io~projected/kube-api-access-c5bmd DeviceMajor:0 DeviceMinor:248 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-539 DeviceMajor:0 DeviceMinor:539 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-102 DeviceMajor:0 DeviceMinor:102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-846 DeviceMajor:0 DeviceMinor:846 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1153 DeviceMajor:0 DeviceMinor:1153 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b4bf04cf4a874cdad02ad51f153ce323d3cc5a93749aa40aeabd5ac11d70f65e/userdata/shm DeviceMajor:0 DeviceMinor:119 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-303 DeviceMajor:0 DeviceMinor:303 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/19de6601-10d4-4112-a21f-0398d2b160d1/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:618 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-919 DeviceMajor:0 DeviceMinor:919 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/de39c80c-acfa-4bc1-a844-95b170169b44/volumes/kubernetes.io~projected/kube-api-access-6x2v6 DeviceMajor:0 DeviceMinor:922 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dc31fac048987256095251eb1c41dfbd7ba8f1030acd608588347d150bf4c3c7/userdata/shm DeviceMajor:0 DeviceMinor:889 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8ac7f6216c5921740646509c9d1e443feacb80b056e20b3a4f138b334049ff2c/userdata/shm DeviceMajor:0 DeviceMinor:822 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1099 DeviceMajor:0 DeviceMinor:1099 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-565 DeviceMajor:0 DeviceMinor:565 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bf226d89-450d-4876-a113-345632b94ee9/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/d3017b5e-178e-49de-89d2-817a18398203/volumes/kubernetes.io~projected/kube-api-access-b6wm6 DeviceMajor:0 DeviceMinor:245 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-531 DeviceMajor:0 DeviceMinor:531 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/43f216a933b60c080a956b5e1d05307037754c5207355d8b96b4c2f7227054f0/userdata/shm DeviceMajor:0 DeviceMinor:541 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:051890867de8ff4 MacAddress:22:14:26:00:96:37 Speed:10000 Mtu:8900} {Name:099f1cf5ddb6445 MacAddress:62:c1:ee:bf:14:4d Speed:10000 Mtu:8900} {Name:0c17be488f74c65 MacAddress:7a:f4:ad:06:99:c9 Speed:10000 Mtu:8900} {Name:16106b77f2e7c15 MacAddress:86:4d:6d:cd:dd:23 Speed:10000 Mtu:8900} {Name:1bedd36b2e748d7 MacAddress:7a:d6:8e:03:12:b5 Speed:10000 Mtu:8900} {Name:1da3868b3838b62 MacAddress:7a:7e:2e:79:f8:0c Speed:10000 Mtu:8900} {Name:1fc613132849380 MacAddress:fe:80:4c:2b:6d:b8 Speed:10000 Mtu:8900} {Name:20538e6325cc6dc MacAddress:8e:42:87:47:36:3f Speed:10000 Mtu:8900} {Name:24de2a964d2fa28 MacAddress:0e:ea:01:17:d7:a7 Speed:10000 Mtu:8900} {Name:28d0f82641cafb7 MacAddress:6e:a9:8f:c8:6b:b6 Speed:10000 Mtu:8900} {Name:2ca9e696adafe66 MacAddress:02:5f:4c:90:6f:0d Speed:10000 Mtu:8900} {Name:33355c55e294585 MacAddress:0e:65:2c:e6:6f:f6 Speed:10000 Mtu:8900} {Name:37064f92bb167f0 MacAddress:ae:9f:a6:dc:49:6f Speed:10000 Mtu:8900} {Name:3756314b5f9faad MacAddress:52:16:3b:55:ef:b4 Speed:10000 Mtu:8900} {Name:37b898c3ae24210 MacAddress:2e:71:06:d2:63:bd Speed:10000 Mtu:8900} {Name:3e9cb8897ccc8cd MacAddress:0a:52:6b:f4:26:f6 Speed:10000 Mtu:8900} {Name:3f20a730c4d5f1f MacAddress:da:e7:d4:20:e3:2b Speed:10000 Mtu:8900} {Name:4cdef734b9abebf MacAddress:f6:38:3a:6a:c3:76 Speed:10000 Mtu:8900} {Name:4d126177d3103b9 MacAddress:3e:b4:b2:0a:1b:02 Speed:10000 Mtu:8900} {Name:58d1369a13582af MacAddress:e2:ef:f6:c3:18:3f Speed:10000 Mtu:8900} {Name:593c680a8303805 MacAddress:06:05:b1:2d:53:1d Speed:10000 Mtu:8900} {Name:5971350293b5650 MacAddress:86:1b:06:d7:7e:c7 Speed:10000 Mtu:8900} {Name:5a539aaaf2dd4db MacAddress:8e:26:1f:0e:c7:48 Speed:10000 Mtu:8900} {Name:629e57f409989b8 MacAddress:6a:9d:61:74:c2:f4 Speed:10000 Mtu:8900} {Name:63407ab3b928693 MacAddress:66:e1:49:14:69:14 Speed:10000 Mtu:8900} {Name:657e67ca992e83d MacAddress:86:df:40:e2:f8:7d Speed:10000 Mtu:8900} {Name:6b418b5a6ab7d2f MacAddress:fa:82:3c:65:86:7e Speed:10000 Mtu:8900} {Name:71366739cc36c89 MacAddress:76:ea:4b:51:a7:d2 Speed:10000 Mtu:8900} {Name:7da5b8963c0c07b MacAddress:d2:60:6a:35:68:d8 Speed:10000 Mtu:8900} {Name:7f1b2390d179c87 MacAddress:a2:47:2e:d6:c5:ed Speed:10000 Mtu:8900} {Name:81e5dd60f8e8f39 MacAddress:e6:4c:2b:44:16:e6 Speed:10000 Mtu:8900} {Name:84ed2f0d88ece07 MacAddress:f6:69:ce:7d:f9:e8 Speed:10000 Mtu:8900} {Name:89df1c468dcab6a MacAddress:5e:62:53:79:20:c1 Speed:10000 Mtu:8900} {Name:8ac7f6216c59217 MacAddress:9e:46:c6:97:6b:2b Speed:10000 Mtu:8900} {Name:954ede16a95baa0 MacAddress:9e:37:e3:cc:99:6a Speed:10000 Mtu:8900} {Name:9a72b8977a8a7f6 MacAddress:be:d6:1a:b9:96:10 Speed:10000 Mtu:8900} {Name:b31a84101a7e9f8 MacAddress:2e:ae:c2:c7:e4:c1 Speed:10000 Mtu:8900} {Name:b5d29a971edd0c0 MacAddress:8a:ba:68:b3:95:23 Speed:10000 Mtu:8900} {Name:b7dd57861a640ed MacAddress:3a:e6:13:37:b4:07 Speed:10000 Mtu:8900} {Name:b80d357d31adb7d MacAddress:1a:9b:a2:4b:e1:ea Speed:10000 Mtu:8900} {Name:be807ecce9aec0f 
MacAddress:2e:5b:9a:d2:a9:c1 Speed:10000 Mtu:8900} {Name:bf281270c03af27 MacAddress:32:89:30:a2:9e:04 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:2a:8b:8d:f6:5c:b3 Speed:0 Mtu:8900} {Name:c364dba2c743db6 MacAddress:3a:ab:11:97:08:f5 Speed:10000 Mtu:8900} {Name:ca37f4d8890aea8 MacAddress:7a:b6:41:f4:37:9f Speed:10000 Mtu:8900} {Name:cafdcda3b6318ea MacAddress:5e:f4:f2:e3:2a:8e Speed:10000 Mtu:8900} {Name:d29fd7441baad95 MacAddress:aa:61:fe:9c:15:f2 Speed:10000 Mtu:8900} {Name:d4e38c98fa8bce4 MacAddress:2e:b5:df:de:39:e2 Speed:10000 Mtu:8900} {Name:d6af7e6099bbf70 MacAddress:6e:6d:c8:ae:c6:c8 Speed:10000 Mtu:8900} {Name:dc31fac04898725 MacAddress:fe:67:11:71:3b:1a Speed:10000 Mtu:8900} {Name:de6a10425187cbc MacAddress:52:2d:fc:da:13:3b Speed:10000 Mtu:8900} {Name:ea807ec97b5b85d MacAddress:4e:60:ca:ca:5d:4f Speed:10000 Mtu:8900} {Name:ef65cfa8e397b0d MacAddress:3e:67:fa:66:13:63 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:0b:8e:2e Speed:-1 Mtu:9000} {Name:f37f04bee189304 MacAddress:12:5a:a7:3f:44:dc Speed:10000 Mtu:8900} {Name:f40dd28398740e1 MacAddress:22:7c:c3:8c:e5:dd Speed:10000 Mtu:8900} {Name:fa112877e7809f3 MacAddress:1a:6f:b6:f8:0a:20 Speed:10000 Mtu:8900} {Name:fcd57352498da84 MacAddress:42:95:40:db:53:02 Speed:10000 Mtu:8900} {Name:fe4ada978b72bf0 MacAddress:0e:7c:fa:f2:90:b9 Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:be:b5:64:e8:21:b9 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 
BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 19 12:14:21.603787 master-0 kubenswrapper[31830]: I0319 12:14:21.603213 31830 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Mar 19 12:14:21.603787 master-0 kubenswrapper[31830]: I0319 12:14:21.603395 31830 manager.go:233] Version: {KernelVersion:5.14.0-427.113.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202603021444-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 19 12:14:21.603787 master-0 kubenswrapper[31830]: I0319 12:14:21.603769 31830 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 19 12:14:21.604257 master-0 kubenswrapper[31830]: I0319 12:14:21.604074 31830 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 19 12:14:21.604449 master-0 kubenswrapper[31830]: I0319 12:14:21.604133 31830 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 19 12:14:21.604537 master-0 kubenswrapper[31830]: I0319 12:14:21.604500 31830 topology_manager.go:138] "Creating topology manager with none policy" Mar 19 12:14:21.604574 master-0 kubenswrapper[31830]: I0319 12:14:21.604536 31830 container_manager_linux.go:303] "Creating device plugin manager" Mar 19 12:14:21.604574 master-0 kubenswrapper[31830]: I0319 12:14:21.604555 31830 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 19 12:14:21.604633 master-0 kubenswrapper[31830]: I0319 12:14:21.604601 31830 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 19 12:14:21.604680 master-0 kubenswrapper[31830]: I0319 12:14:21.604661 31830 state_mem.go:36] "Initialized new in-memory state store" Mar 19 12:14:21.604883 master-0 kubenswrapper[31830]: I0319 12:14:21.604854 31830 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 19 12:14:21.604979 master-0 kubenswrapper[31830]: I0319 12:14:21.604961 31830 kubelet.go:418] "Attempting to sync node with API server" Mar 19 12:14:21.605012 master-0 kubenswrapper[31830]: I0319 12:14:21.604989 31830 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 19 12:14:21.605048 master-0 kubenswrapper[31830]: I0319 12:14:21.605013 31830 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 19 12:14:21.605048 master-0 kubenswrapper[31830]: I0319 12:14:21.605034 31830 kubelet.go:324] "Adding apiserver pod source" Mar 19 12:14:21.605111 master-0 kubenswrapper[31830]: I0319 12:14:21.605061 31830 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 19 12:14:21.606826 master-0 kubenswrapper[31830]: I0319 12:14:21.606755 31830 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 19 
12:14:21.611586 master-0 kubenswrapper[31830]: I0319 12:14:21.611536 31830 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Mar 19 12:14:21.615162 master-0 kubenswrapper[31830]: I0319 12:14:21.615128 31830 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 19 12:14:21.615530 master-0 kubenswrapper[31830]: I0319 12:14:21.615488 31830 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 19 12:14:21.615954 master-0 kubenswrapper[31830]: I0319 12:14:21.615925 31830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 19 12:14:21.616021 master-0 kubenswrapper[31830]: I0319 12:14:21.615966 31830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 19 12:14:21.616021 master-0 kubenswrapper[31830]: I0319 12:14:21.615984 31830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 19 12:14:21.616021 master-0 kubenswrapper[31830]: I0319 12:14:21.615998 31830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 19 12:14:21.616021 master-0 kubenswrapper[31830]: I0319 12:14:21.616010 31830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 19 12:14:21.616132 master-0 kubenswrapper[31830]: I0319 12:14:21.616022 31830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 19 12:14:21.616132 master-0 kubenswrapper[31830]: I0319 12:14:21.616036 31830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 19 12:14:21.616132 master-0 kubenswrapper[31830]: I0319 12:14:21.616087 31830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 19 12:14:21.616132 master-0 kubenswrapper[31830]: I0319 12:14:21.616104 31830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 19 12:14:21.616243 master-0 kubenswrapper[31830]: I0319 12:14:21.616147 31830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 19 12:14:21.616243 master-0 kubenswrapper[31830]: I0319 12:14:21.616215 31830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 19 12:14:21.616243 master-0 kubenswrapper[31830]: I0319 12:14:21.616240 31830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 19 12:14:21.616339 master-0 kubenswrapper[31830]: I0319 12:14:21.616280 31830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 19 12:14:21.617400 master-0 kubenswrapper[31830]: I0319 12:14:21.617368 31830 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 19 12:14:21.620457 master-0 kubenswrapper[31830]: I0319 12:14:21.620413 31830 server.go:1280] "Started kubelet" Mar 19 12:14:21.621522 master-0 systemd[1]: Started Kubernetes Kubelet. 
Mar 19 12:14:21.622853 master-0 kubenswrapper[31830]: I0319 12:14:21.622751 31830 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 19 12:14:21.622917 master-0 kubenswrapper[31830]: I0319 12:14:21.622900 31830 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 19 12:14:21.623044 master-0 kubenswrapper[31830]: I0319 12:14:21.622962 31830 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 19 12:14:21.624275 master-0 kubenswrapper[31830]: I0319 12:14:21.624240 31830 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 19 12:14:21.625886 master-0 kubenswrapper[31830]: I0319 12:14:21.625713 31830 server.go:449] "Adding debug handlers to kubelet server" Mar 19 12:14:21.636883 master-0 kubenswrapper[31830]: I0319 12:14:21.636769 31830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 19 12:14:21.636883 master-0 kubenswrapper[31830]: I0319 12:14:21.636841 31830 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 19 12:14:21.637129 master-0 kubenswrapper[31830]: I0319 12:14:21.636890 31830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-20 11:43:21 +0000 UTC, rotation deadline is 2026-03-20 08:52:08.765649211 +0000 UTC Mar 19 12:14:21.637129 master-0 kubenswrapper[31830]: I0319 12:14:21.636974 31830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h37m47.128677778s for next certificate rotation Mar 19 12:14:21.637129 master-0 kubenswrapper[31830]: I0319 12:14:21.637005 31830 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 19 12:14:21.637129 master-0 kubenswrapper[31830]: I0319 12:14:21.637012 31830 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 19 12:14:21.637129 master-0 kubenswrapper[31830]: I0319 12:14:21.637039 31830 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 19 12:14:21.638730 master-0 kubenswrapper[31830]: I0319 12:14:21.638699 31830 factory.go:55] Registering systemd factory Mar 19 12:14:21.638730 master-0 kubenswrapper[31830]: I0319 12:14:21.638729 31830 factory.go:221] Registration of the systemd container factory successfully Mar 19 12:14:21.639320 master-0 kubenswrapper[31830]: I0319 12:14:21.639261 31830 factory.go:153] Registering CRI-O factory Mar 19 12:14:21.639363 master-0 kubenswrapper[31830]: I0319 12:14:21.639330 31830 factory.go:221] Registration of the crio container factory successfully Mar 19 12:14:21.639465 master-0 kubenswrapper[31830]: I0319 12:14:21.639439 31830 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 19 12:14:21.639505 master-0 kubenswrapper[31830]: I0319 12:14:21.639473 31830 factory.go:103] Registering Raw factory Mar 19 12:14:21.639505 master-0 kubenswrapper[31830]: I0319 12:14:21.639494 31830 manager.go:1196] Started watching for new ooms in manager Mar 19 12:14:21.639721 master-0 kubenswrapper[31830]: I0319 12:14:21.639689 31830 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 19 12:14:21.641076 master-0 kubenswrapper[31830]: I0319 12:14:21.641053 31830 manager.go:319] Starting recovery of all containers Mar 19 12:14:21.659202 master-0 kubenswrapper[31830]: 
I0319 12:14:21.659131 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91112ce6-4f9d-44c1-a4e7-fea126554bcf" volumeName="kubernetes.io/configmap/91112ce6-4f9d-44c1-a4e7-fea126554bcf-service-ca-bundle" seLinuxMountContext="" Mar 19 12:14:21.659202 master-0 kubenswrapper[31830]: I0319 12:14:21.659195 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d2db220-4d5b-4819-a910-b186e1e9fb3e" volumeName="kubernetes.io/configmap/9d2db220-4d5b-4819-a910-b186e1e9fb3e-ovnkube-script-lib" seLinuxMountContext="" Mar 19 12:14:21.659442 master-0 kubenswrapper[31830]: I0319 12:14:21.659213 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7747954-a222-4809-8656-818203b55ee8" volumeName="kubernetes.io/projected/a7747954-a222-4809-8656-818203b55ee8-kube-api-access-khv2z" seLinuxMountContext="" Mar 19 12:14:21.659442 master-0 kubenswrapper[31830]: I0319 12:14:21.659234 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b0f5939c-48b1-4d6c-9712-9128a78d603b" volumeName="kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics" seLinuxMountContext="" Mar 19 12:14:21.659442 master-0 kubenswrapper[31830]: I0319 12:14:21.659254 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bdcdb23d-ef1f-45e2-b9ac-7abf707637b6" volumeName="kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert" seLinuxMountContext="" Mar 19 12:14:21.659442 master-0 kubenswrapper[31830]: I0319 12:14:21.659269 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="63c12a89-1b49-4eba-8f5a-551b10d2246b" volumeName="kubernetes.io/configmap/63c12a89-1b49-4eba-8f5a-551b10d2246b-trusted-ca" seLinuxMountContext="" Mar 19 12:14:21.659442 master-0 kubenswrapper[31830]: I0319 12:14:21.659290 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="63c12a89-1b49-4eba-8f5a-551b10d2246b" volumeName="kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert" seLinuxMountContext="" Mar 19 12:14:21.659442 master-0 kubenswrapper[31830]: I0319 12:14:21.659304 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7044a7b3-4fac-40af-a31c-054a1a1db26b" volumeName="kubernetes.io/configmap/7044a7b3-4fac-40af-a31c-054a1a1db26b-cni-sysctl-allowlist" seLinuxMountContext="" Mar 19 12:14:21.659442 master-0 kubenswrapper[31830]: I0319 12:14:21.659361 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="919daf8d-763a-44bc-8916-86b425a27cbd" volumeName="kubernetes.io/secret/919daf8d-763a-44bc-8916-86b425a27cbd-catalogserver-certs" seLinuxMountContext="" Mar 19 12:14:21.659442 master-0 kubenswrapper[31830]: I0319 12:14:21.659381 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="398bcaca-1bea-4633-a78f-717e3d015ddd" volumeName="kubernetes.io/projected/398bcaca-1bea-4633-a78f-717e3d015ddd-kube-api-access-fhqhb" seLinuxMountContext="" Mar 19 12:14:21.659442 master-0 kubenswrapper[31830]: I0319 12:14:21.659397 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="667757ee-2670-4019-ad93-156521d3c2e7" 
volumeName="kubernetes.io/projected/667757ee-2670-4019-ad93-156521d3c2e7-kube-api-access-rc94p" seLinuxMountContext="" Mar 19 12:14:21.659693 master-0 kubenswrapper[31830]: I0319 12:14:21.659414 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6db3fcbe-0dbf-464f-944b-62427173c8d3" volumeName="kubernetes.io/configmap/6db3fcbe-0dbf-464f-944b-62427173c8d3-configmap-kubelet-serving-ca-bundle" seLinuxMountContext="" Mar 19 12:14:21.659693 master-0 kubenswrapper[31830]: I0319 12:14:21.659492 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d2db220-4d5b-4819-a910-b186e1e9fb3e" volumeName="kubernetes.io/projected/9d2db220-4d5b-4819-a910-b186e1e9fb3e-kube-api-access-wshb2" seLinuxMountContext="" Mar 19 12:14:21.659693 master-0 kubenswrapper[31830]: I0319 12:14:21.659516 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be4349fa-5c67-4135-80a7-b8a694553662" volumeName="kubernetes.io/empty-dir/be4349fa-5c67-4135-80a7-b8a694553662-tmpfs" seLinuxMountContext="" Mar 19 12:14:21.659693 master-0 kubenswrapper[31830]: I0319 12:14:21.659531 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fd40498c-f50a-408c-9a50-5d85ae666124" volumeName="kubernetes.io/projected/fd40498c-f50a-408c-9a50-5d85ae666124-kube-api-access-2rmw5" seLinuxMountContext="" Mar 19 12:14:21.659693 master-0 kubenswrapper[31830]: I0319 12:14:21.659546 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6863b35c-44ac-4333-97b5-e8e38b440a20" volumeName="kubernetes.io/projected/6863b35c-44ac-4333-97b5-e8e38b440a20-kube-api-access-ddl8k" seLinuxMountContext="" Mar 19 12:14:21.659693 master-0 kubenswrapper[31830]: I0319 12:14:21.659561 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7c80f8d0-ee9b-4a4d-ba92-e241b2552e58" volumeName="kubernetes.io/configmap/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-telemeter-trusted-ca-bundle" seLinuxMountContext="" Mar 19 12:14:21.659693 master-0 kubenswrapper[31830]: I0319 12:14:21.659576 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e559e487-18b0-4622-92fa-d06e7397b312" volumeName="kubernetes.io/projected/e559e487-18b0-4622-92fa-d06e7397b312-kube-api-access-c4p7s" seLinuxMountContext="" Mar 19 12:14:21.659693 master-0 kubenswrapper[31830]: I0319 12:14:21.659610 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0ed7eded-1e67-49ad-9777-c2ed1e006ce3" volumeName="kubernetes.io/empty-dir/0ed7eded-1e67-49ad-9777-c2ed1e006ce3-utilities" seLinuxMountContext="" Mar 19 12:14:21.659693 master-0 kubenswrapper[31830]: I0319 12:14:21.659625 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4264e82c-387f-4aa6-9ef6-b7beb61e098c" volumeName="kubernetes.io/configmap/4264e82c-387f-4aa6-9ef6-b7beb61e098c-service-ca-bundle" seLinuxMountContext="" Mar 19 12:14:21.659693 master-0 kubenswrapper[31830]: I0319 12:14:21.659638 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aef8e03f-0363-4e13-b7ca-4fa871d77c62" volumeName="kubernetes.io/secret/aef8e03f-0363-4e13-b7ca-4fa871d77c62-serving-cert" seLinuxMountContext="" Mar 19 12:14:21.659693 master-0 kubenswrapper[31830]: I0319 
12:14:21.659650 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f05dca6c-7626-4970-a869-4208ff5605a2" volumeName="kubernetes.io/projected/f05dca6c-7626-4970-a869-4208ff5605a2-kube-api-access-5fz85" seLinuxMountContext="" Mar 19 12:14:21.659693 master-0 kubenswrapper[31830]: I0319 12:14:21.659664 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fd40498c-f50a-408c-9a50-5d85ae666124" volumeName="kubernetes.io/secret/fd40498c-f50a-408c-9a50-5d85ae666124-machine-approver-tls" seLinuxMountContext="" Mar 19 12:14:21.659693 master-0 kubenswrapper[31830]: I0319 12:14:21.659683 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5238840f-3bef-43ad-ae68-ac187f073019" volumeName="kubernetes.io/empty-dir/5238840f-3bef-43ad-ae68-ac187f073019-cache" seLinuxMountContext="" Mar 19 12:14:21.660215 master-0 kubenswrapper[31830]: I0319 12:14:21.659695 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="882fd952-1914-47be-96bf-cac6341ca877" volumeName="kubernetes.io/secret/882fd952-1914-47be-96bf-cac6341ca877-tls-certificates" seLinuxMountContext="" Mar 19 12:14:21.660215 master-0 kubenswrapper[31830]: I0319 12:14:21.659744 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a9d191d1-631d-4091-af8b-382283c18a5a" volumeName="kubernetes.io/secret/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-tls" seLinuxMountContext="" Mar 19 12:14:21.660215 master-0 kubenswrapper[31830]: I0319 12:14:21.659767 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f236a5ab-b400-46fc-94ee-1fff476d6458" volumeName="kubernetes.io/configmap/f236a5ab-b400-46fc-94ee-1fff476d6458-config-volume" seLinuxMountContext="" Mar 19 12:14:21.660215 master-0 kubenswrapper[31830]: I0319 12:14:21.659790 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bb1000ab-4419-43ce-b1b7-8f43413e017f" volumeName="kubernetes.io/configmap/bb1000ab-4419-43ce-b1b7-8f43413e017f-metrics-client-ca" seLinuxMountContext="" Mar 19 12:14:21.660215 master-0 kubenswrapper[31830]: I0319 12:14:21.659919 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab54833d-e57b-479d-b171-68155f6566f1" volumeName="kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls" seLinuxMountContext="" Mar 19 12:14:21.660215 master-0 kubenswrapper[31830]: I0319 12:14:21.659934 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3017b5e-178e-49de-89d2-817a18398203" volumeName="kubernetes.io/configmap/d3017b5e-178e-49de-89d2-817a18398203-service-ca-bundle" seLinuxMountContext="" Mar 19 12:14:21.660215 master-0 kubenswrapper[31830]: I0319 12:14:21.659953 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee3529ac-6135-438b-9334-40c63c1fbd3d" volumeName="kubernetes.io/projected/ee3529ac-6135-438b-9334-40c63c1fbd3d-kube-api-access-c8hpg" seLinuxMountContext="" Mar 19 12:14:21.660215 master-0 kubenswrapper[31830]: I0319 12:14:21.659973 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee3529ac-6135-438b-9334-40c63c1fbd3d" 
volumeName="kubernetes.io/secret/ee3529ac-6135-438b-9334-40c63c1fbd3d-cloud-controller-manager-operator-tls" seLinuxMountContext="" Mar 19 12:14:21.660215 master-0 kubenswrapper[31830]: I0319 12:14:21.659986 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe245927-c937-4ec7-ab83-4900bade72cf" volumeName="kubernetes.io/projected/fe245927-c937-4ec7-ab83-4900bade72cf-kube-api-access-s4hsp" seLinuxMountContext="" Mar 19 12:14:21.660215 master-0 kubenswrapper[31830]: I0319 12:14:21.660039 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="13503fef-09b2-4dbe-9537-a5b361e7b591" volumeName="kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-trusted-ca-bundle" seLinuxMountContext="" Mar 19 12:14:21.660215 master-0 kubenswrapper[31830]: I0319 12:14:21.660053 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b87f8c3-1898-46dd-bcac-e8f22f31e812" volumeName="kubernetes.io/secret/2b87f8c3-1898-46dd-bcac-e8f22f31e812-proxy-tls" seLinuxMountContext="" Mar 19 12:14:21.660215 master-0 kubenswrapper[31830]: I0319 12:14:21.660066 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7c80f8d0-ee9b-4a4d-ba92-e241b2552e58" volumeName="kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-telemeter-client-tls" seLinuxMountContext="" Mar 19 12:14:21.660215 master-0 kubenswrapper[31830]: I0319 12:14:21.660079 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="86884445-e29b-492b-8810-b63b938b9170" volumeName="kubernetes.io/configmap/86884445-e29b-492b-8810-b63b938b9170-metrics-client-ca" seLinuxMountContext="" Mar 19 12:14:21.660215 master-0 kubenswrapper[31830]: I0319 12:14:21.660091 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5238840f-3bef-43ad-ae68-ac187f073019" volumeName="kubernetes.io/projected/5238840f-3bef-43ad-ae68-ac187f073019-kube-api-access-vxdts" seLinuxMountContext="" Mar 19 12:14:21.660215 master-0 kubenswrapper[31830]: I0319 12:14:21.660108 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6db3fcbe-0dbf-464f-944b-62427173c8d3" volumeName="kubernetes.io/empty-dir/6db3fcbe-0dbf-464f-944b-62427173c8d3-audit-log" seLinuxMountContext="" Mar 19 12:14:21.660215 master-0 kubenswrapper[31830]: I0319 12:14:21.660122 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="979ba8cc-5a7b-4188-bf9e-c22d810888e9" volumeName="kubernetes.io/configmap/979ba8cc-5a7b-4188-bf9e-c22d810888e9-etcd-serving-ca" seLinuxMountContext="" Mar 19 12:14:21.660215 master-0 kubenswrapper[31830]: I0319 12:14:21.660135 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bb1000ab-4419-43ce-b1b7-8f43413e017f" volumeName="kubernetes.io/configmap/bb1000ab-4419-43ce-b1b7-8f43413e017f-kube-state-metrics-custom-resource-state-configmap" seLinuxMountContext="" Mar 19 12:14:21.660215 master-0 kubenswrapper[31830]: E0319 12:14:21.660088 31830 kubelet.go:1495] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Mar 19 12:14:21.660215 master-0 kubenswrapper[31830]: I0319 12:14:21.660177 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bb1000ab-4419-43ce-b1b7-8f43413e017f" volumeName="kubernetes.io/projected/bb1000ab-4419-43ce-b1b7-8f43413e017f-kube-api-access-6hk8l" seLinuxMountContext="" Mar 19 12:14:21.660215 master-0 kubenswrapper[31830]: I0319 12:14:21.660222 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3541cbe-3be0-40d3-89d2-b5937b6a8f47" volumeName="kubernetes.io/configmap/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-auth-proxy-config" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660277 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="de39c80c-acfa-4bc1-a844-95b170169b44" volumeName="kubernetes.io/secret/de39c80c-acfa-4bc1-a844-95b170169b44-openshift-state-metrics-tls" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660292 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06df1b1b-154e-46f9-aee0-79a137c6c928" volumeName="kubernetes.io/secret/06df1b1b-154e-46f9-aee0-79a137c6c928-serving-cert" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660303 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="13503fef-09b2-4dbe-9537-a5b361e7b591" volumeName="kubernetes.io/secret/13503fef-09b2-4dbe-9537-a5b361e7b591-etcd-client" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660313 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="36e5fec9-7fb5-4460-8bb4-4b9e36fae978" volumeName="kubernetes.io/projected/36e5fec9-7fb5-4460-8bb4-4b9e36fae978-kube-api-access-z9hck" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660326 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9702fc8c-4fe0-413b-b2d4-db23021d42b8" volumeName="kubernetes.io/configmap/9702fc8c-4fe0-413b-b2d4-db23021d42b8-etcd-ca" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660336 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1089ea24-add9-482e-9276-e6ded12052d7" volumeName="kubernetes.io/projected/1089ea24-add9-482e-9276-e6ded12052d7-kube-api-access" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660345 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19de6601-10d4-4112-a21f-0398d2b160d1" volumeName="kubernetes.io/configmap/19de6601-10d4-4112-a21f-0398d2b160d1-config" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660354 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="284768b8-9d70-4cf7-bace-8adc6b587186" volumeName="kubernetes.io/secret/284768b8-9d70-4cf7-bace-8adc6b587186-metrics-tls" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660364 31830 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="7c80f8d0-ee9b-4a4d-ba92-e241b2552e58" volumeName="kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-secret-telemeter-client" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660379 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="979ba8cc-5a7b-4188-bf9e-c22d810888e9" volumeName="kubernetes.io/projected/979ba8cc-5a7b-4188-bf9e-c22d810888e9-kube-api-access-28ljd" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660392 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="979ba8cc-5a7b-4188-bf9e-c22d810888e9" volumeName="kubernetes.io/secret/979ba8cc-5a7b-4188-bf9e-c22d810888e9-serving-cert" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660403 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="beb562de-402b-4d9f-b5ed-090b60847a95" volumeName="kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660450 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cf6b6560-1731-4fb1-b3c2-8257002842d6" volumeName="kubernetes.io/secret/cf6b6560-1731-4fb1-b3c2-8257002842d6-cert" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660462 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe245927-c937-4ec7-ab83-4900bade72cf" volumeName="kubernetes.io/configmap/fe245927-c937-4ec7-ab83-4900bade72cf-multus-daemon-config" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660471 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06df1b1b-154e-46f9-aee0-79a137c6c928" volumeName="kubernetes.io/projected/06df1b1b-154e-46f9-aee0-79a137c6c928-kube-api-access" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660480 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7c80f8d0-ee9b-4a4d-ba92-e241b2552e58" volumeName="kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-secret-telemeter-client-kube-rbac-proxy-config" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660490 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ad327a59-7879-4215-bb95-3f2be64cb97f" volumeName="kubernetes.io/secret/ad327a59-7879-4215-bb95-3f2be64cb97f-cloud-credential-operator-serving-cert" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660500 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f05dca6c-7626-4970-a869-4208ff5605a2" volumeName="kubernetes.io/empty-dir/f05dca6c-7626-4970-a869-4208ff5605a2-catalog-content" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660509 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f236a5ab-b400-46fc-94ee-1fff476d6458" 
volumeName="kubernetes.io/secret/f236a5ab-b400-46fc-94ee-1fff476d6458-metrics-tls" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660520 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9702fc8c-4fe0-413b-b2d4-db23021d42b8" volumeName="kubernetes.io/configmap/9702fc8c-4fe0-413b-b2d4-db23021d42b8-etcd-service-ca" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660530 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be4349fa-5c67-4135-80a7-b8a694553662" volumeName="kubernetes.io/projected/be4349fa-5c67-4135-80a7-b8a694553662-kube-api-access-jbzj2" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660540 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf226d89-450d-4876-a113-345632b94ee9" volumeName="kubernetes.io/configmap/bf226d89-450d-4876-a113-345632b94ee9-ovnkube-config" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660548 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2dbd8b3-0e02-4747-a166-80aa6a94b060" volumeName="kubernetes.io/projected/c2dbd8b3-0e02-4747-a166-80aa6a94b060-kube-api-access-npc2t" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660558 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1089ea24-add9-482e-9276-e6ded12052d7" volumeName="kubernetes.io/secret/1089ea24-add9-482e-9276-e6ded12052d7-serving-cert" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660588 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="616dbb32-6b65-4e44-a217-6b1be2844cc9" volumeName="kubernetes.io/projected/616dbb32-6b65-4e44-a217-6b1be2844cc9-kube-api-access-7g6zz" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660597 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aef8e03f-0363-4e13-b7ca-4fa871d77c62" volumeName="kubernetes.io/projected/aef8e03f-0363-4e13-b7ca-4fa871d77c62-kube-api-access-x252z" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660625 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da9becfb-a504-4ef7-92ed-cd2db439d5db" volumeName="kubernetes.io/configmap/da9becfb-a504-4ef7-92ed-cd2db439d5db-client-ca" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660635 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f05dca6c-7626-4970-a869-4208ff5605a2" volumeName="kubernetes.io/empty-dir/f05dca6c-7626-4970-a869-4208ff5605a2-utilities" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660675 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0e25d4ed-4ad0-4706-ad25-7822c9a1d07e" volumeName="kubernetes.io/projected/0e25d4ed-4ad0-4706-ad25-7822c9a1d07e-kube-api-access-r9k5t" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660686 31830 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="4264e82c-387f-4aa6-9ef6-b7beb61e098c" volumeName="kubernetes.io/empty-dir/4264e82c-387f-4aa6-9ef6-b7beb61e098c-snapshots" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660727 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7044a7b3-4fac-40af-a31c-054a1a1db26b" volumeName="kubernetes.io/projected/7044a7b3-4fac-40af-a31c-054a1a1db26b-kube-api-access-shfs6" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660739 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2151eb84-177e-459c-be71-f48465323ac2" volumeName="kubernetes.io/projected/2151eb84-177e-459c-be71-f48465323ac2-kube-api-access" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660748 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="311b8bab-6cee-406d-8e0e-5b18a743d5fa" volumeName="kubernetes.io/configmap/311b8bab-6cee-406d-8e0e-5b18a743d5fa-mcc-auth-proxy-config" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660759 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6863b35c-44ac-4333-97b5-e8e38b440a20" volumeName="kubernetes.io/configmap/6863b35c-44ac-4333-97b5-e8e38b440a20-signing-cabundle" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660788 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7c80f8d0-ee9b-4a4d-ba92-e241b2552e58" volumeName="kubernetes.io/configmap/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-metrics-client-ca" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660856 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="13503fef-09b2-4dbe-9537-a5b361e7b591" volumeName="kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-image-import-ca" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660885 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b87f8c3-1898-46dd-bcac-e8f22f31e812" volumeName="kubernetes.io/projected/2b87f8c3-1898-46dd-bcac-e8f22f31e812-kube-api-access-kbddm" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660894 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="661b8957-a890-4032-9e57-45e2e0b35249" volumeName="kubernetes.io/projected/661b8957-a890-4032-9e57-45e2e0b35249-kube-api-access-8hq8f" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660903 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9702fc8c-4fe0-413b-b2d4-db23021d42b8" volumeName="kubernetes.io/projected/9702fc8c-4fe0-413b-b2d4-db23021d42b8-kube-api-access-tpdts" seLinuxMountContext="" Mar 19 12:14:21.660868 master-0 kubenswrapper[31830]: I0319 12:14:21.660911 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf226d89-450d-4876-a113-345632b94ee9" 
volumeName="kubernetes.io/projected/bf226d89-450d-4876-a113-345632b94ee9-kube-api-access-wcxqj" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.660954 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19de6601-10d4-4112-a21f-0398d2b160d1" volumeName="kubernetes.io/configmap/19de6601-10d4-4112-a21f-0398d2b160d1-images" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.660965 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="661b8957-a890-4032-9e57-45e2e0b35249" volumeName="kubernetes.io/secret/661b8957-a890-4032-9e57-45e2e0b35249-serving-cert" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661006 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b2ecb08-a0f9-4127-967c-7087dea4c0f6" volumeName="kubernetes.io/configmap/7b2ecb08-a0f9-4127-967c-7087dea4c0f6-images" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661015 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cf6b6560-1731-4fb1-b3c2-8257002842d6" volumeName="kubernetes.io/configmap/cf6b6560-1731-4fb1-b3c2-8257002842d6-auth-proxy-config" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661024 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9ab6ec4-eec9-4d27-8b43-2aaf954f098f" volumeName="kubernetes.io/projected/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f-kube-api-access-h5n89" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661033 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="13503fef-09b2-4dbe-9537-a5b361e7b591" volumeName="kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-etcd-serving-ca" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661043 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6db3fcbe-0dbf-464f-944b-62427173c8d3" volumeName="kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-client-ca-bundle" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661053 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d2db220-4d5b-4819-a910-b186e1e9fb3e" volumeName="kubernetes.io/secret/9d2db220-4d5b-4819-a910-b186e1e9fb3e-ovn-node-metrics-cert" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661063 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b0f5939c-48b1-4d6c-9712-9128a78d603b" volumeName="kubernetes.io/configmap/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-trusted-ca" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661072 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f236a5ab-b400-46fc-94ee-1fff476d6458" volumeName="kubernetes.io/projected/f236a5ab-b400-46fc-94ee-1fff476d6458-kube-api-access-ps4k8" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661099 31830 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="7241bf11-192e-47db-9d80-2324938ed34c" volumeName="kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661108 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9702fc8c-4fe0-413b-b2d4-db23021d42b8" volumeName="kubernetes.io/configmap/9702fc8c-4fe0-413b-b2d4-db23021d42b8-config" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661117 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a9d191d1-631d-4091-af8b-382283c18a5a" volumeName="kubernetes.io/empty-dir/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-textfile" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661131 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="beb562de-402b-4d9f-b5ed-090b60847a95" volumeName="kubernetes.io/projected/beb562de-402b-4d9f-b5ed-090b60847a95-kube-api-access-9mr6d" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661141 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="de39c80c-acfa-4bc1-a844-95b170169b44" volumeName="kubernetes.io/projected/de39c80c-acfa-4bc1-a844-95b170169b44-kube-api-access-6x2v6" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661151 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91112ce6-4f9d-44c1-a4e7-fea126554bcf" volumeName="kubernetes.io/projected/91112ce6-4f9d-44c1-a4e7-fea126554bcf-kube-api-access-8hrkb" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661211 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="979ba8cc-5a7b-4188-bf9e-c22d810888e9" volumeName="kubernetes.io/configmap/979ba8cc-5a7b-4188-bf9e-c22d810888e9-trusted-ca-bundle" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661221 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a9d191d1-631d-4091-af8b-382283c18a5a" volumeName="kubernetes.io/configmap/a9d191d1-631d-4091-af8b-382283c18a5a-metrics-client-ca" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661247 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab54833d-e57b-479d-b171-68155f6566f1" volumeName="kubernetes.io/projected/ab54833d-e57b-479d-b171-68155f6566f1-kube-api-access-gl6d7" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661257 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da9becfb-a504-4ef7-92ed-cd2db439d5db" volumeName="kubernetes.io/configmap/da9becfb-a504-4ef7-92ed-cd2db439d5db-config" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661268 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8414b6b0-ee16-47a5-982b-ee58b136cfcf" 
volumeName="kubernetes.io/projected/8414b6b0-ee16-47a5-982b-ee58b136cfcf-kube-api-access-864rg" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661345 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="13503fef-09b2-4dbe-9537-a5b361e7b591" volumeName="kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-audit" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661357 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="82b98dca-59f9-42be-94ca-4a2a2b6fea0f" volumeName="kubernetes.io/projected/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-kube-api-access-c5bmd" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661368 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="919daf8d-763a-44bc-8916-86b425a27cbd" volumeName="kubernetes.io/projected/919daf8d-763a-44bc-8916-86b425a27cbd-ca-certs" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661378 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ed2dbd1-aec4-4009-917a-933533912ab5" volumeName="kubernetes.io/configmap/9ed2dbd1-aec4-4009-917a-933533912ab5-config" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661388 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="de39c80c-acfa-4bc1-a844-95b170169b44" volumeName="kubernetes.io/secret/de39c80c-acfa-4bc1-a844-95b170169b44-openshift-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661414 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0316c374-f812-4e0a-8645-727e8372f16e" volumeName="kubernetes.io/projected/0316c374-f812-4e0a-8645-727e8372f16e-kube-api-access-tvvk8" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661424 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1089ea24-add9-482e-9276-e6ded12052d7" volumeName="kubernetes.io/configmap/1089ea24-add9-482e-9276-e6ded12052d7-config" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661473 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44469a78-9300-4260-89e9-ea939de1357b" volumeName="kubernetes.io/projected/44469a78-9300-4260-89e9-ea939de1357b-kube-api-access-t7zpw" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661483 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6db3fcbe-0dbf-464f-944b-62427173c8d3" volumeName="kubernetes.io/configmap/6db3fcbe-0dbf-464f-944b-62427173c8d3-metrics-server-audit-profiles" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661494 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87a3f546-e1c1-42a1-b80e-d45b6d5c0a04" volumeName="kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661530 31830 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="919daf8d-763a-44bc-8916-86b425a27cbd" volumeName="kubernetes.io/projected/919daf8d-763a-44bc-8916-86b425a27cbd-kube-api-access-8brwr" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661540 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7044a7b3-4fac-40af-a31c-054a1a1db26b" volumeName="kubernetes.io/configmap/7044a7b3-4fac-40af-a31c-054a1a1db26b-whereabouts-flatfile-configmap" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661552 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="82b98dca-59f9-42be-94ca-4a2a2b6fea0f" volumeName="kubernetes.io/projected/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-bound-sa-token" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661576 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d2db220-4d5b-4819-a910-b186e1e9fb3e" volumeName="kubernetes.io/configmap/9d2db220-4d5b-4819-a910-b186e1e9fb3e-env-overrides" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661584 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d2db220-4d5b-4819-a910-b186e1e9fb3e" volumeName="kubernetes.io/configmap/9d2db220-4d5b-4819-a910-b186e1e9fb3e-ovnkube-config" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661593 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b80027fd-7b39-477a-a337-ff9bb08e7eeb" volumeName="kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661602 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bdcdb23d-ef1f-45e2-b9ac-7abf707637b6" volumeName="kubernetes.io/projected/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-kube-api-access-jnd9c" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661611 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f08c5930-44f0-48e4-80dd-2563f2733b2f" volumeName="kubernetes.io/projected/f08c5930-44f0-48e4-80dd-2563f2733b2f-kube-api-access-h84l9" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661619 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2151eb84-177e-459c-be71-f48465323ac2" volumeName="kubernetes.io/secret/2151eb84-177e-459c-be71-f48465323ac2-serving-cert" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661628 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="284768b8-9d70-4cf7-bace-8adc6b587186" volumeName="kubernetes.io/projected/284768b8-9d70-4cf7-bace-8adc6b587186-kube-api-access-8p6vn" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661636 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="311b8bab-6cee-406d-8e0e-5b18a743d5fa" 
volumeName="kubernetes.io/projected/311b8bab-6cee-406d-8e0e-5b18a743d5fa-kube-api-access-hjfpq" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661660 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6db3fcbe-0dbf-464f-944b-62427173c8d3" volumeName="kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-secret-metrics-client-certs" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661669 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7241bf11-192e-47db-9d80-2324938ed34c" volumeName="kubernetes.io/configmap/7241bf11-192e-47db-9d80-2324938ed34c-telemetry-config" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661706 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="979ba8cc-5a7b-4188-bf9e-c22d810888e9" volumeName="kubernetes.io/secret/979ba8cc-5a7b-4188-bf9e-c22d810888e9-etcd-client" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661716 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="13503fef-09b2-4dbe-9537-a5b361e7b591" volumeName="kubernetes.io/projected/13503fef-09b2-4dbe-9537-a5b361e7b591-kube-api-access-mgdlc" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661724 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b2ecb08-a0f9-4127-967c-7087dea4c0f6" volumeName="kubernetes.io/configmap/7b2ecb08-a0f9-4127-967c-7087dea4c0f6-config" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661738 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da9becfb-a504-4ef7-92ed-cd2db439d5db" volumeName="kubernetes.io/secret/da9becfb-a504-4ef7-92ed-cd2db439d5db-serving-cert" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661762 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="13503fef-09b2-4dbe-9537-a5b361e7b591" volumeName="kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-config" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661771 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a6b082a-649b-43f6-8e24-cf222873fe39" volumeName="kubernetes.io/configmap/3a6b082a-649b-43f6-8e24-cf222873fe39-proxy-ca-bundles" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661815 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a6b082a-649b-43f6-8e24-cf222873fe39" volumeName="kubernetes.io/secret/3a6b082a-649b-43f6-8e24-cf222873fe39-serving-cert" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661830 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="661b8957-a890-4032-9e57-45e2e0b35249" volumeName="kubernetes.io/configmap/661b8957-a890-4032-9e57-45e2e0b35249-config" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661841 31830 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="0ed7eded-1e67-49ad-9777-c2ed1e006ce3" volumeName="kubernetes.io/projected/0ed7eded-1e67-49ad-9777-c2ed1e006ce3-kube-api-access-jnp9l" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661852 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3661faaa-2c9d-4fcd-a41f-71aa71a2e464" volumeName="kubernetes.io/projected/3661faaa-2c9d-4fcd-a41f-71aa71a2e464-kube-api-access" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661864 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91112ce6-4f9d-44c1-a4e7-fea126554bcf" volumeName="kubernetes.io/secret/91112ce6-4f9d-44c1-a4e7-fea126554bcf-metrics-certs" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661873 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06f67c28-34fd-4356-92f0-edd0986ad34e" volumeName="kubernetes.io/projected/06f67c28-34fd-4356-92f0-edd0986ad34e-kube-api-access-bdpj4" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661912 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="82b98dca-59f9-42be-94ca-4a2a2b6fea0f" volumeName="kubernetes.io/configmap/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-trusted-ca" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661921 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="919daf8d-763a-44bc-8916-86b425a27cbd" volumeName="kubernetes.io/empty-dir/919daf8d-763a-44bc-8916-86b425a27cbd-cache" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661947 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0ed7eded-1e67-49ad-9777-c2ed1e006ce3" volumeName="kubernetes.io/empty-dir/0ed7eded-1e67-49ad-9777-c2ed1e006ce3-catalog-content" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661956 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="13503fef-09b2-4dbe-9537-a5b361e7b591" volumeName="kubernetes.io/secret/13503fef-09b2-4dbe-9537-a5b361e7b591-encryption-config" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661980 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2151eb84-177e-459c-be71-f48465323ac2" volumeName="kubernetes.io/configmap/2151eb84-177e-459c-be71-f48465323ac2-config" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.661990 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a6b082a-649b-43f6-8e24-cf222873fe39" volumeName="kubernetes.io/configmap/3a6b082a-649b-43f6-8e24-cf222873fe39-client-ca" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 kubenswrapper[31830]: I0319 12:14:21.662000 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="86884445-e29b-492b-8810-b63b938b9170" volumeName="kubernetes.io/projected/86884445-e29b-492b-8810-b63b938b9170-kube-api-access-5kcbw" seLinuxMountContext="" Mar 19 12:14:21.661948 master-0 
kubenswrapper[31830]: I0319 12:14:21.662008 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be4349fa-5c67-4135-80a7-b8a694553662" volumeName="kubernetes.io/secret/be4349fa-5c67-4135-80a7-b8a694553662-webhook-cert" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662016 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0e25d4ed-4ad0-4706-ad25-7822c9a1d07e" volumeName="kubernetes.io/secret/0e25d4ed-4ad0-4706-ad25-7822c9a1d07e-certs" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662025 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="667757ee-2670-4019-ad93-156521d3c2e7" volumeName="kubernetes.io/secret/667757ee-2670-4019-ad93-156521d3c2e7-samples-operator-tls" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662048 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91112ce6-4f9d-44c1-a4e7-fea126554bcf" volumeName="kubernetes.io/secret/91112ce6-4f9d-44c1-a4e7-fea126554bcf-stats-auth" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662056 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ad327a59-7879-4215-bb95-3f2be64cb97f" volumeName="kubernetes.io/projected/ad327a59-7879-4215-bb95-3f2be64cb97f-kube-api-access-9fgj5" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662065 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b80027fd-7b39-477a-a337-ff9bb08e7eeb" volumeName="kubernetes.io/projected/b80027fd-7b39-477a-a337-ff9bb08e7eeb-kube-api-access-hs4jf" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662115 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2dbd8b3-0e02-4747-a166-80aa6a94b060" volumeName="kubernetes.io/empty-dir/c2dbd8b3-0e02-4747-a166-80aa6a94b060-operand-assets" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662124 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06f67c28-34fd-4356-92f0-edd0986ad34e" volumeName="kubernetes.io/configmap/06f67c28-34fd-4356-92f0-edd0986ad34e-iptables-alerter-script" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662132 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c2a33ba-76d0-4b81-a41d-9da16fd46209" volumeName="kubernetes.io/secret/1c2a33ba-76d0-4b81-a41d-9da16fd46209-webhook-certs" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662165 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ed2dbd1-aec4-4009-917a-933533912ab5" volumeName="kubernetes.io/projected/9ed2dbd1-aec4-4009-917a-933533912ab5-kube-api-access-gsk9d" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662174 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf226d89-450d-4876-a113-345632b94ee9" 
volumeName="kubernetes.io/secret/bf226d89-450d-4876-a113-345632b94ee9-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662219 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7383e647-63b0-452d-a39b-02ad27a9b053" volumeName="kubernetes.io/empty-dir/7383e647-63b0-452d-a39b-02ad27a9b053-catalog-content" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662229 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c52bbbe7-bc16-432f-a471-bc561083a853" volumeName="kubernetes.io/projected/c52bbbe7-bc16-432f-a471-bc561083a853-kube-api-access-4ztf7" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662238 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44469a78-9300-4260-89e9-ea939de1357b" volumeName="kubernetes.io/secret/44469a78-9300-4260-89e9-ea939de1357b-control-plane-machine-set-operator-tls" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662246 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6db3fcbe-0dbf-464f-944b-62427173c8d3" volumeName="kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-secret-metrics-server-tls" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662254 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ed2dbd1-aec4-4009-917a-933533912ab5" volumeName="kubernetes.io/secret/9ed2dbd1-aec4-4009-917a-933533912ab5-serving-cert" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662282 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4264e82c-387f-4aa6-9ef6-b7beb61e098c" volumeName="kubernetes.io/projected/4264e82c-387f-4aa6-9ef6-b7beb61e098c-kube-api-access-8wfsr" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662319 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cf6b6560-1731-4fb1-b3c2-8257002842d6" volumeName="kubernetes.io/projected/cf6b6560-1731-4fb1-b3c2-8257002842d6-kube-api-access-64twc" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662332 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8414b6b0-ee16-47a5-982b-ee58b136cfcf" volumeName="kubernetes.io/configmap/8414b6b0-ee16-47a5-982b-ee58b136cfcf-env-overrides" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662359 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d975e831-7348-41b9-9622-f4a503674c38" volumeName="kubernetes.io/projected/d975e831-7348-41b9-9622-f4a503674c38-kube-api-access-86r6z" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662368 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da9becfb-a504-4ef7-92ed-cd2db439d5db" volumeName="kubernetes.io/projected/da9becfb-a504-4ef7-92ed-cd2db439d5db-kube-api-access-lvzcn" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 
kubenswrapper[31830]: I0319 12:14:21.662404 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="63c12a89-1b49-4eba-8f5a-551b10d2246b" volumeName="kubernetes.io/projected/63c12a89-1b49-4eba-8f5a-551b10d2246b-kube-api-access-bst2w" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662414 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7383e647-63b0-452d-a39b-02ad27a9b053" volumeName="kubernetes.io/empty-dir/7383e647-63b0-452d-a39b-02ad27a9b053-utilities" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662423 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7c80f8d0-ee9b-4a4d-ba92-e241b2552e58" volumeName="kubernetes.io/configmap/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-serving-certs-ca-bundle" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662431 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ad327a59-7879-4215-bb95-3f2be64cb97f" volumeName="kubernetes.io/configmap/ad327a59-7879-4215-bb95-3f2be64cb97f-cco-trusted-ca" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662440 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b0f5939c-48b1-4d6c-9712-9128a78d603b" volumeName="kubernetes.io/projected/b0f5939c-48b1-4d6c-9712-9128a78d603b-kube-api-access-6tqdb" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662448 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9ab6ec4-eec9-4d27-8b43-2aaf954f098f" volumeName="kubernetes.io/configmap/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f-config" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662476 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="82b98dca-59f9-42be-94ca-4a2a2b6fea0f" volumeName="kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662484 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="979ba8cc-5a7b-4188-bf9e-c22d810888e9" volumeName="kubernetes.io/secret/979ba8cc-5a7b-4188-bf9e-c22d810888e9-encryption-config" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662504 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3541cbe-3be0-40d3-89d2-b5937b6a8f47" volumeName="kubernetes.io/configmap/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-images" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662513 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19de6601-10d4-4112-a21f-0398d2b160d1" volumeName="kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662521 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4264e82c-387f-4aa6-9ef6-b7beb61e098c" 
volumeName="kubernetes.io/secret/4264e82c-387f-4aa6-9ef6-b7beb61e098c-serving-cert" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662546 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7c80f8d0-ee9b-4a4d-ba92-e241b2552e58" volumeName="kubernetes.io/projected/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-kube-api-access-vm9zf" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662554 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2dbd8b3-0e02-4747-a166-80aa6a94b060" volumeName="kubernetes.io/secret/c2dbd8b3-0e02-4747-a166-80aa6a94b060-cluster-olm-operator-serving-cert" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662564 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f97d998-530c-4d9d-a030-ca1d9d2d4490" volumeName="kubernetes.io/secret/0f97d998-530c-4d9d-a030-ca1d9d2d4490-cluster-storage-operator-serving-cert" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662588 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7044a7b3-4fac-40af-a31c-054a1a1db26b" volumeName="kubernetes.io/configmap/7044a7b3-4fac-40af-a31c-054a1a1db26b-cni-binary-copy" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662597 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5238840f-3bef-43ad-ae68-ac187f073019" volumeName="kubernetes.io/projected/5238840f-3bef-43ad-ae68-ac187f073019-ca-certs" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662631 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="944eac68-e72b-4aed-b5dc-d7d9703178a3" volumeName="kubernetes.io/projected/944eac68-e72b-4aed-b5dc-d7d9703178a3-kube-api-access-m2mdn" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662646 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a9d191d1-631d-4091-af8b-382283c18a5a" volumeName="kubernetes.io/projected/a9d191d1-631d-4091-af8b-382283c18a5a-kube-api-access-cq9p4" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662655 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bb1000ab-4419-43ce-b1b7-8f43413e017f" volumeName="kubernetes.io/secret/bb1000ab-4419-43ce-b1b7-8f43413e017f-kube-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662664 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3017b5e-178e-49de-89d2-817a18398203" volumeName="kubernetes.io/secret/d3017b5e-178e-49de-89d2-817a18398203-serving-cert" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662673 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3541cbe-3be0-40d3-89d2-b5937b6a8f47" volumeName="kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 
12:14:21.662682 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06df1b1b-154e-46f9-aee0-79a137c6c928" volumeName="kubernetes.io/configmap/06df1b1b-154e-46f9-aee0-79a137c6c928-config" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662709 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4800b72f-7e54-4069-b771-87fb459eeb78" volumeName="kubernetes.io/projected/4800b72f-7e54-4069-b771-87fb459eeb78-kube-api-access-4lkzv" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662742 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b2ecb08-a0f9-4127-967c-7087dea4c0f6" volumeName="kubernetes.io/projected/7b2ecb08-a0f9-4127-967c-7087dea4c0f6-kube-api-access-dxw6t" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662751 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9702fc8c-4fe0-413b-b2d4-db23021d42b8" volumeName="kubernetes.io/secret/9702fc8c-4fe0-413b-b2d4-db23021d42b8-serving-cert" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662759 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3017b5e-178e-49de-89d2-817a18398203" volumeName="kubernetes.io/configmap/d3017b5e-178e-49de-89d2-817a18398203-trusted-ca-bundle" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662768 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="311b8bab-6cee-406d-8e0e-5b18a743d5fa" volumeName="kubernetes.io/secret/311b8bab-6cee-406d-8e0e-5b18a743d5fa-proxy-tls" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662820 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3661faaa-2c9d-4fcd-a41f-71aa71a2e464" volumeName="kubernetes.io/secret/3661faaa-2c9d-4fcd-a41f-71aa71a2e464-serving-cert" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662880 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="36e5fec9-7fb5-4460-8bb4-4b9e36fae978" volumeName="kubernetes.io/secret/36e5fec9-7fb5-4460-8bb4-4b9e36fae978-cert" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662903 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a6b082a-649b-43f6-8e24-cf222873fe39" volumeName="kubernetes.io/configmap/3a6b082a-649b-43f6-8e24-cf222873fe39-config" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662938 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9ab6ec4-eec9-4d27-8b43-2aaf954f098f" volumeName="kubernetes.io/secret/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f-serving-cert" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662952 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee3529ac-6135-438b-9334-40c63c1fbd3d" volumeName="kubernetes.io/configmap/ee3529ac-6135-438b-9334-40c63c1fbd3d-auth-proxy-config" 
seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662964 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="86884445-e29b-492b-8810-b63b938b9170" volumeName="kubernetes.io/secret/86884445-e29b-492b-8810-b63b938b9170-prometheus-operator-tls" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662974 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c52bbbe7-bc16-432f-a471-bc561083a853" volumeName="kubernetes.io/empty-dir/c52bbbe7-bc16-432f-a471-bc561083a853-catalog-content" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662983 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3661faaa-2c9d-4fcd-a41f-71aa71a2e464" volumeName="kubernetes.io/configmap/3661faaa-2c9d-4fcd-a41f-71aa71a2e464-service-ca" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.662991 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6863b35c-44ac-4333-97b5-e8e38b440a20" volumeName="kubernetes.io/secret/6863b35c-44ac-4333-97b5-e8e38b440a20-signing-key" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663000 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7c80f8d0-ee9b-4a4d-ba92-e241b2552e58" volumeName="kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-federate-client-tls" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663009 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87a3f546-e1c1-42a1-b80e-d45b6d5c0a04" volumeName="kubernetes.io/projected/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-kube-api-access-hwfg5" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663032 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3017b5e-178e-49de-89d2-817a18398203" volumeName="kubernetes.io/configmap/d3017b5e-178e-49de-89d2-817a18398203-config" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663040 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fd40498c-f50a-408c-9a50-5d85ae666124" volumeName="kubernetes.io/configmap/fd40498c-f50a-408c-9a50-5d85ae666124-config" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663056 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="979ba8cc-5a7b-4188-bf9e-c22d810888e9" volumeName="kubernetes.io/configmap/979ba8cc-5a7b-4188-bf9e-c22d810888e9-audit-policies" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663072 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a9d191d1-631d-4091-af8b-382283c18a5a" volumeName="kubernetes.io/secret/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-kube-rbac-proxy-config" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663082 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e559e487-18b0-4622-92fa-d06e7397b312" volumeName="kubernetes.io/empty-dir/e559e487-18b0-4622-92fa-d06e7397b312-etc-tuned" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663108 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b87f8c3-1898-46dd-bcac-e8f22f31e812" volumeName="kubernetes.io/configmap/2b87f8c3-1898-46dd-bcac-e8f22f31e812-mcd-auth-proxy-config" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663120 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="86884445-e29b-492b-8810-b63b938b9170" volumeName="kubernetes.io/secret/86884445-e29b-492b-8810-b63b938b9170-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663129 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be4349fa-5c67-4135-80a7-b8a694553662" volumeName="kubernetes.io/secret/be4349fa-5c67-4135-80a7-b8a694553662-apiservice-cert" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663140 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f08c5930-44f0-48e4-80dd-2563f2733b2f" volumeName="kubernetes.io/secret/f08c5930-44f0-48e4-80dd-2563f2733b2f-serving-cert" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663151 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe245927-c937-4ec7-ab83-4900bade72cf" volumeName="kubernetes.io/configmap/fe245927-c937-4ec7-ab83-4900bade72cf-cni-binary-copy" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663162 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="13503fef-09b2-4dbe-9537-a5b361e7b591" volumeName="kubernetes.io/secret/13503fef-09b2-4dbe-9537-a5b361e7b591-serving-cert" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663176 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6db3fcbe-0dbf-464f-944b-62427173c8d3" volumeName="kubernetes.io/projected/6db3fcbe-0dbf-464f-944b-62427173c8d3-kube-api-access-lllml" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663186 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7383e647-63b0-452d-a39b-02ad27a9b053" volumeName="kubernetes.io/projected/7383e647-63b0-452d-a39b-02ad27a9b053-kube-api-access-2xz8h" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663196 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bb1000ab-4419-43ce-b1b7-8f43413e017f" volumeName="kubernetes.io/empty-dir/bb1000ab-4419-43ce-b1b7-8f43413e017f-volume-directive-shadow" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663206 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e559e487-18b0-4622-92fa-d06e7397b312" volumeName="kubernetes.io/empty-dir/e559e487-18b0-4622-92fa-d06e7397b312-tmp" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 
kubenswrapper[31830]: I0319 12:14:21.663216 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b2ecb08-a0f9-4127-967c-7087dea4c0f6" volumeName="kubernetes.io/secret/7b2ecb08-a0f9-4127-967c-7087dea4c0f6-machine-api-operator-tls" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663226 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c52bbbe7-bc16-432f-a471-bc561083a853" volumeName="kubernetes.io/empty-dir/c52bbbe7-bc16-432f-a471-bc561083a853-utilities" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663235 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3541cbe-3be0-40d3-89d2-b5937b6a8f47" volumeName="kubernetes.io/projected/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-kube-api-access-pv6bc" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663247 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c2a33ba-76d0-4b81-a41d-9da16fd46209" volumeName="kubernetes.io/projected/1c2a33ba-76d0-4b81-a41d-9da16fd46209-kube-api-access-k8n22" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663258 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4264e82c-387f-4aa6-9ef6-b7beb61e098c" volumeName="kubernetes.io/configmap/4264e82c-387f-4aa6-9ef6-b7beb61e098c-trusted-ca-bundle" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663270 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8414b6b0-ee16-47a5-982b-ee58b136cfcf" volumeName="kubernetes.io/secret/8414b6b0-ee16-47a5-982b-ee58b136cfcf-webhook-cert" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663295 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9702fc8c-4fe0-413b-b2d4-db23021d42b8" volumeName="kubernetes.io/secret/9702fc8c-4fe0-413b-b2d4-db23021d42b8-etcd-client" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663372 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf226d89-450d-4876-a113-345632b94ee9" volumeName="kubernetes.io/configmap/bf226d89-450d-4876-a113-345632b94ee9-env-overrides" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663383 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19de6601-10d4-4112-a21f-0398d2b160d1" volumeName="kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663398 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="63c12a89-1b49-4eba-8f5a-551b10d2246b" volumeName="kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663409 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bb1000ab-4419-43ce-b1b7-8f43413e017f" 
volumeName="kubernetes.io/secret/bb1000ab-4419-43ce-b1b7-8f43413e017f-kube-state-metrics-tls" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663421 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee3529ac-6135-438b-9334-40c63c1fbd3d" volumeName="kubernetes.io/configmap/ee3529ac-6135-438b-9334-40c63c1fbd3d-images" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663431 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a6b082a-649b-43f6-8e24-cf222873fe39" volumeName="kubernetes.io/projected/3a6b082a-649b-43f6-8e24-cf222873fe39-kube-api-access-srbt4" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663441 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91112ce6-4f9d-44c1-a4e7-fea126554bcf" volumeName="kubernetes.io/secret/91112ce6-4f9d-44c1-a4e7-fea126554bcf-default-certificate" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663450 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="de39c80c-acfa-4bc1-a844-95b170169b44" volumeName="kubernetes.io/configmap/de39c80c-acfa-4bc1-a844-95b170169b44-metrics-client-ca" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663461 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f08c5930-44f0-48e4-80dd-2563f2733b2f" volumeName="kubernetes.io/configmap/f08c5930-44f0-48e4-80dd-2563f2733b2f-config" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663470 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0e25d4ed-4ad0-4706-ad25-7822c9a1d07e" volumeName="kubernetes.io/secret/0e25d4ed-4ad0-4706-ad25-7822c9a1d07e-node-bootstrap-token" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663480 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7241bf11-192e-47db-9d80-2324938ed34c" volumeName="kubernetes.io/projected/7241bf11-192e-47db-9d80-2324938ed34c-kube-api-access-s5mkm" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663489 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8414b6b0-ee16-47a5-982b-ee58b136cfcf" volumeName="kubernetes.io/configmap/8414b6b0-ee16-47a5-982b-ee58b136cfcf-ovnkube-identity-cm" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663502 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b80027fd-7b39-477a-a337-ff9bb08e7eeb" volumeName="kubernetes.io/projected/b80027fd-7b39-477a-a337-ff9bb08e7eeb-bound-sa-token" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663512 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="398bcaca-1bea-4633-a78f-717e3d015ddd" volumeName="kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663521 31830 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="aef8e03f-0363-4e13-b7ca-4fa871d77c62" volumeName="kubernetes.io/empty-dir/aef8e03f-0363-4e13-b7ca-4fa871d77c62-available-featuregates" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663531 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b80027fd-7b39-477a-a337-ff9bb08e7eeb" volumeName="kubernetes.io/configmap/b80027fd-7b39-477a-a337-ff9bb08e7eeb-trusted-ca" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663540 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fd40498c-f50a-408c-9a50-5d85ae666124" volumeName="kubernetes.io/configmap/fd40498c-f50a-408c-9a50-5d85ae666124-auth-proxy-config" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663548 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f97d998-530c-4d9d-a030-ca1d9d2d4490" volumeName="kubernetes.io/projected/0f97d998-530c-4d9d-a030-ca1d9d2d4490-kube-api-access-zntzt" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663558 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19de6601-10d4-4112-a21f-0398d2b160d1" volumeName="kubernetes.io/projected/19de6601-10d4-4112-a21f-0398d2b160d1-kube-api-access-6xpc2" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663566 31830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3017b5e-178e-49de-89d2-817a18398203" volumeName="kubernetes.io/projected/d3017b5e-178e-49de-89d2-817a18398203-kube-api-access-b6wm6" seLinuxMountContext="" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663575 31830 reconstruct.go:97] "Volume reconstruction finished" Mar 19 12:14:21.664143 master-0 kubenswrapper[31830]: I0319 12:14:21.663583 31830 reconciler.go:26] "Reconciler: start to sync state" Mar 19 12:14:21.668308 master-0 kubenswrapper[31830]: I0319 12:14:21.666623 31830 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 19 12:14:21.674502 master-0 kubenswrapper[31830]: I0319 12:14:21.674435 31830 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 19 12:14:21.676501 master-0 kubenswrapper[31830]: I0319 12:14:21.676479 31830 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 19 12:14:21.676596 master-0 kubenswrapper[31830]: I0319 12:14:21.676579 31830 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 19 12:14:21.676649 master-0 kubenswrapper[31830]: I0319 12:14:21.676608 31830 kubelet.go:2335] "Starting kubelet main sync loop" Mar 19 12:14:21.676688 master-0 kubenswrapper[31830]: E0319 12:14:21.676655 31830 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 19 12:14:21.678564 master-0 kubenswrapper[31830]: I0319 12:14:21.678528 31830 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 19 12:14:21.700272 master-0 kubenswrapper[31830]: I0319 12:14:21.700132 31830 generic.go:334] "Generic (PLEG): container finished" podID="a9819a56-abb1-485c-b424-5c62e30d5afc" containerID="61889dd9a935bc86ee38882d43925886388331ab38ba3004e85cc49cd1f39072" exitCode=0 Mar 19 12:14:21.702276 master-0 kubenswrapper[31830]: I0319 12:14:21.702237 31830 generic.go:334] "Generic (PLEG): container finished" podID="9702fc8c-4fe0-413b-b2d4-db23021d42b8" containerID="6c3d43a01987e52cadf8e3819b9c184c46b6535cb510d14c96117eed3c48a981" exitCode=0 Mar 19 12:14:21.704761 master-0 kubenswrapper[31830]: I0319 12:14:21.704730 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-nhvl4_aef8e03f-0363-4e13-b7ca-4fa871d77c62/openshift-config-operator/1.log" Mar 19 12:14:21.705216 master-0 kubenswrapper[31830]: I0319 12:14:21.705177 31830 generic.go:334] "Generic (PLEG): container finished" podID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerID="2e74e767e3ac9aff0d456d8d8b27b05725691d9b35635b73f0381a2cb7166772" exitCode=255 Mar 19 12:14:21.705256 master-0 kubenswrapper[31830]: I0319 12:14:21.705215 31830 generic.go:334] "Generic (PLEG): container finished" podID="aef8e03f-0363-4e13-b7ca-4fa871d77c62" containerID="583df0d35b75cdd42a8c5d73920d4fc8b3684739b4fbdc9aa3860b1cc1087eeb" exitCode=0 Mar 19 12:14:21.712052 master-0 kubenswrapper[31830]: I0319 12:14:21.712013 31830 generic.go:334] "Generic (PLEG): container finished" podID="11f83dfb-da04-483f-b281-ebdb39f3ab27" containerID="b09cf9e92d522e2b105a0b4a4e50ff7409083b9260caed07cdd2a78e778f9e16" exitCode=0 Mar 19 12:14:21.714789 master-0 kubenswrapper[31830]: I0319 12:14:21.714737 31830 generic.go:334] "Generic (PLEG): container finished" podID="2151eb84-177e-459c-be71-f48465323ac2" containerID="76df0534cc0fd6a5cc55f7565b57a91fd38d7e12169a76c5133f215b1479d2db" exitCode=0 Mar 19 12:14:21.716888 master-0 kubenswrapper[31830]: I0319 12:14:21.716852 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log" Mar 19 12:14:21.717281 master-0 kubenswrapper[31830]: I0319 12:14:21.717248 31830 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="6b554ade444a2218312faf004411e7ca5ff136f234fd5270edc3b29df56f6e17" exitCode=1 Mar 19 12:14:21.717281 master-0 kubenswrapper[31830]: I0319 12:14:21.717277 31830 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="b1a54e1d5a4e1d27db12da7c6949a0237da9f713c6a17f5af4237b1c8b03cbfa" exitCode=0 Mar 19 12:14:21.721590 master-0 kubenswrapper[31830]: I0319 12:14:21.721446 31830 generic.go:334] "Generic (PLEG): container 
finished" podID="8e27b7d086edf5d2cf47b703574641d8" containerID="04102fb37d09b73e728e34206b1d91a20ab150cf6fe0171a324821c07888079f" exitCode=0 Mar 19 12:14:21.723027 master-0 kubenswrapper[31830]: I0319 12:14:21.722993 31830 generic.go:334] "Generic (PLEG): container finished" podID="ac20c616-753e-461a-9c39-2129239f47de" containerID="8022cb0787b078b8490d5e3b8eb77b94bc5a7657a78677fc984224192ff65ab6" exitCode=0 Mar 19 12:14:21.726962 master-0 kubenswrapper[31830]: I0319 12:14:21.726933 31830 generic.go:334] "Generic (PLEG): container finished" podID="c52bbbe7-bc16-432f-a471-bc561083a853" containerID="2a28f91cb7fa0c9891cfe8e8b101fe6954743be580a42629eefdf4e346a6ff36" exitCode=0 Mar 19 12:14:21.727064 master-0 kubenswrapper[31830]: I0319 12:14:21.727051 31830 generic.go:334] "Generic (PLEG): container finished" podID="c52bbbe7-bc16-432f-a471-bc561083a853" containerID="6c5c4d40a16417076e4498cb487b735b6cf2450b0bf97275a9d9f7f4cc5ea19e" exitCode=0 Mar 19 12:14:21.731898 master-0 kubenswrapper[31830]: I0319 12:14:21.730997 31830 generic.go:334] "Generic (PLEG): container finished" podID="979ba8cc-5a7b-4188-bf9e-c22d810888e9" containerID="05182f5833dcf5495367d45fa2481464014605bf23633fb02f16821c8ed341bf" exitCode=0 Mar 19 12:14:21.733953 master-0 kubenswrapper[31830]: I0319 12:14:21.733918 31830 generic.go:334] "Generic (PLEG): container finished" podID="661b8957-a890-4032-9e57-45e2e0b35249" containerID="48511943c8e0f8f2cb56a0dbe005be6b65b3cfab069bdef05e341ca254849587" exitCode=0 Mar 19 12:14:21.736566 master-0 kubenswrapper[31830]: I0319 12:14:21.736502 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-nb8bk_284768b8-9d70-4cf7-bace-8adc6b587186/network-operator/0.log" Mar 19 12:14:21.736566 master-0 kubenswrapper[31830]: I0319 12:14:21.736534 31830 generic.go:334] "Generic (PLEG): container finished" podID="284768b8-9d70-4cf7-bace-8adc6b587186" containerID="4a5b36532ee146a92740f77707f5b0a6a8c33bb89c0054e1d9177bfea2033a2d" exitCode=255 Mar 19 12:14:21.741012 master-0 kubenswrapper[31830]: I0319 12:14:21.740976 31830 generic.go:334] "Generic (PLEG): container finished" podID="0ed7eded-1e67-49ad-9777-c2ed1e006ce3" containerID="140f5b6d0ad45c210ec34db27352588bd40a8af50088c57ef36777013e203f6c" exitCode=0 Mar 19 12:14:21.741094 master-0 kubenswrapper[31830]: I0319 12:14:21.741025 31830 generic.go:334] "Generic (PLEG): container finished" podID="0ed7eded-1e67-49ad-9777-c2ed1e006ce3" containerID="80c673b2188e95ea8d6803bb2b30df3a1dbcd94b373e0bf980cd0ab82c7ba0bd" exitCode=0 Mar 19 12:14:21.742890 master-0 kubenswrapper[31830]: I0319 12:14:21.742834 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-7cdddc6cb-q222c_3a6b082a-649b-43f6-8e24-cf222873fe39/controller-manager/2.log" Mar 19 12:14:21.742890 master-0 kubenswrapper[31830]: I0319 12:14:21.742883 31830 generic.go:334] "Generic (PLEG): container finished" podID="3a6b082a-649b-43f6-8e24-cf222873fe39" containerID="09044fa3844b85995a99e406cd860f5e46e3c822f972902a1ed997f5be96ef8c" exitCode=255 Mar 19 12:14:21.746067 master-0 kubenswrapper[31830]: I0319 12:14:21.746032 31830 generic.go:334] "Generic (PLEG): container finished" podID="f05dca6c-7626-4970-a869-4208ff5605a2" containerID="8cc0b059aa2839b58a2ae2c6d2b64bd0a41bd8d8facc9d7c47f7f2b8dedcba42" exitCode=0 Mar 19 12:14:21.746067 master-0 kubenswrapper[31830]: I0319 12:14:21.746055 31830 generic.go:334] "Generic (PLEG): container finished" 
podID="f05dca6c-7626-4970-a869-4208ff5605a2" containerID="60bc1dc90b88b8a914cc55873afedd31f4e84b73bea5030f4f1cb08c053d6c7d" exitCode=0 Mar 19 12:14:21.748719 master-0 kubenswrapper[31830]: I0319 12:14:21.748681 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-gx4w8_9ed2dbd1-aec4-4009-917a-933533912ab5/openshift-controller-manager-operator/1.log" Mar 19 12:14:21.748719 master-0 kubenswrapper[31830]: I0319 12:14:21.748714 31830 generic.go:334] "Generic (PLEG): container finished" podID="9ed2dbd1-aec4-4009-917a-933533912ab5" containerID="24fd9caa7952430318d8f0070bff5d8f9a23ccd510c898e8d4b008fdb27da600" exitCode=255 Mar 19 12:14:21.755022 master-0 kubenswrapper[31830]: I0319 12:14:21.754983 31830 generic.go:334] "Generic (PLEG): container finished" podID="d9ab6ec4-eec9-4d27-8b43-2aaf954f098f" containerID="9dbaaa2ce519ab256717766bb8d971f864766afcc411753d09c087dd190cf903" exitCode=0 Mar 19 12:14:21.758589 master-0 kubenswrapper[31830]: I0319 12:14:21.758557 31830 generic.go:334] "Generic (PLEG): container finished" podID="f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c" containerID="7e673f997c20469e5f546d3e95284e0a33e36f035fae4d41c3c443160f062f50" exitCode=0 Mar 19 12:14:21.764129 master-0 kubenswrapper[31830]: I0319 12:14:21.764097 31830 generic.go:334] "Generic (PLEG): container finished" podID="bf226d89-450d-4876-a113-345632b94ee9" containerID="3d6c29fa2fea2a4028ae9bf07fe3dfb5fccd02ce108e84c4ff9630eee5fdf4b0" exitCode=0 Mar 19 12:14:21.766375 master-0 kubenswrapper[31830]: I0319 12:14:21.766347 31830 generic.go:334] "Generic (PLEG): container finished" podID="7383e647-63b0-452d-a39b-02ad27a9b053" containerID="851231fe9ccfeac8a5cba3d3576e738d92e2cffbc59eaab8e823a5bea8c281c6" exitCode=0 Mar 19 12:14:21.766488 master-0 kubenswrapper[31830]: I0319 12:14:21.766448 31830 generic.go:334] "Generic (PLEG): container finished" podID="7383e647-63b0-452d-a39b-02ad27a9b053" containerID="88999f37d32fea17c2f7cb71f197065956c6e3b527bdca5b8e8d64ee4a63831d" exitCode=0 Mar 19 12:14:21.769066 master-0 kubenswrapper[31830]: I0319 12:14:21.769051 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-5c6485487f-qv29l_fd40498c-f50a-408c-9a50-5d85ae666124/machine-approver-controller/0.log" Mar 19 12:14:21.769781 master-0 kubenswrapper[31830]: I0319 12:14:21.769746 31830 generic.go:334] "Generic (PLEG): container finished" podID="fd40498c-f50a-408c-9a50-5d85ae666124" containerID="e46402e9e37c366c46da921e8257890f1d201b54bbd07d4bc4010bce5ecefa6c" exitCode=255 Mar 19 12:14:21.773620 master-0 kubenswrapper[31830]: I0319 12:14:21.773596 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-pkgvq_d3017b5e-178e-49de-89d2-817a18398203/authentication-operator/1.log" Mar 19 12:14:21.773807 master-0 kubenswrapper[31830]: I0319 12:14:21.773630 31830 generic.go:334] "Generic (PLEG): container finished" podID="d3017b5e-178e-49de-89d2-817a18398203" containerID="6dedac466f0712e9cb88164ac3beff662b4163f5b6d34ec1e978daf51f4b9061" exitCode=1 Mar 19 12:14:21.777184 master-0 kubenswrapper[31830]: E0319 12:14:21.776907 31830 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 19 12:14:21.777580 master-0 kubenswrapper[31830]: I0319 12:14:21.777543 31830 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" 
containerID="4eb7482c86a1b5f9e745f031e830bded6c37fd855abcbff4d6d73294bfadb247" exitCode=0 Mar 19 12:14:21.777580 master-0 kubenswrapper[31830]: I0319 12:14:21.777577 31830 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="f347ebf4af2e430c7010deb32f74eaaa375be42bd1cb0fd78e647b0e4fd96480" exitCode=0 Mar 19 12:14:21.777731 master-0 kubenswrapper[31830]: I0319 12:14:21.777598 31830 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="95a5e59caf12dcb834fa10b5b5af9755159f99a81152a1ebbfb9f9785ea5edff" exitCode=0 Mar 19 12:14:21.780375 master-0 kubenswrapper[31830]: I0319 12:14:21.780319 31830 generic.go:334] "Generic (PLEG): container finished" podID="8b48817c-05cd-430b-9b1f-9cc037f1ca77" containerID="4ffdbe686ec312f51e0f69bfddfcf8ddbe9d68d7435e9ea8d330dd01862adb85" exitCode=0 Mar 19 12:14:21.786322 master-0 kubenswrapper[31830]: I0319 12:14:21.786279 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-84gh4_ee3529ac-6135-438b-9334-40c63c1fbd3d/config-sync-controllers/0.log" Mar 19 12:14:21.786864 master-0 kubenswrapper[31830]: I0319 12:14:21.786835 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-84gh4_ee3529ac-6135-438b-9334-40c63c1fbd3d/cluster-cloud-controller-manager/0.log" Mar 19 12:14:21.786911 master-0 kubenswrapper[31830]: I0319 12:14:21.786888 31830 generic.go:334] "Generic (PLEG): container finished" podID="ee3529ac-6135-438b-9334-40c63c1fbd3d" containerID="296dc8986d8d88e53b561f3bac073cd3bc6b8803c01b285a45dd14b4fa44bec7" exitCode=1 Mar 19 12:14:21.786943 master-0 kubenswrapper[31830]: I0319 12:14:21.786909 31830 generic.go:334] "Generic (PLEG): container finished" podID="ee3529ac-6135-438b-9334-40c63c1fbd3d" containerID="10c6568199a7e8563a8238a4394e2eb6a83f98ca431cdeed29a3dfc7601564fd" exitCode=1 Mar 19 12:14:21.792985 master-0 kubenswrapper[31830]: I0319 12:14:21.792950 31830 generic.go:334] "Generic (PLEG): container finished" podID="9d2db220-4d5b-4819-a910-b186e1e9fb3e" containerID="d91c3177fcc79be021d9124f0b7323db9969b5d246ad69be6568e14b2bb1c146" exitCode=0 Mar 19 12:14:21.795879 master-0 kubenswrapper[31830]: I0319 12:14:21.795853 31830 generic.go:334] "Generic (PLEG): container finished" podID="7044a7b3-4fac-40af-a31c-054a1a1db26b" containerID="6bec5ff668b2f0913a9713d16292d3781feb7dfeeb82d87acec30ea3bfcbeb08" exitCode=0 Mar 19 12:14:21.796100 master-0 kubenswrapper[31830]: I0319 12:14:21.795878 31830 generic.go:334] "Generic (PLEG): container finished" podID="7044a7b3-4fac-40af-a31c-054a1a1db26b" containerID="09e947b1211885dac847d7f6f4b5d685a97ae8ac56061459ae15b5ca2dde25cb" exitCode=0 Mar 19 12:14:21.796142 master-0 kubenswrapper[31830]: I0319 12:14:21.796104 31830 generic.go:334] "Generic (PLEG): container finished" podID="7044a7b3-4fac-40af-a31c-054a1a1db26b" containerID="d621a54b4c12065eb160ef19e85adc68090a98c2fb8fea5b5228543edbaf07e1" exitCode=0 Mar 19 12:14:21.796142 master-0 kubenswrapper[31830]: I0319 12:14:21.796116 31830 generic.go:334] "Generic (PLEG): container finished" podID="7044a7b3-4fac-40af-a31c-054a1a1db26b" containerID="056242a76e14af2b45592d6a5dba2e28b2cd2e138b0b1a0f773a8e9eef170947" exitCode=0 Mar 19 12:14:21.796142 master-0 kubenswrapper[31830]: I0319 12:14:21.796125 31830 generic.go:334] "Generic (PLEG): container finished" 
podID="7044a7b3-4fac-40af-a31c-054a1a1db26b" containerID="a7b363361678d9e81d9d8ef32a8db06e2b9f3625d0d6871f670414917c137669" exitCode=0 Mar 19 12:14:21.796142 master-0 kubenswrapper[31830]: I0319 12:14:21.796135 31830 generic.go:334] "Generic (PLEG): container finished" podID="7044a7b3-4fac-40af-a31c-054a1a1db26b" containerID="2993484a619b94d2ea27105e0262a5ba0f7bb5c64e52ff512e989510a1380a8f" exitCode=0 Mar 19 12:14:21.803448 master-0 kubenswrapper[31830]: I0319 12:14:21.803339 31830 generic.go:334] "Generic (PLEG): container finished" podID="bfb8e49c-30e6-4939-9ef9-1323883a8d6a" containerID="4dc6cd1098d9b181306d55e6f29d0f09a98838187ca958b399501163372876ca" exitCode=1 Mar 19 12:14:21.808380 master-0 kubenswrapper[31830]: I0319 12:14:21.808342 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_2e4442dc-19e2-42a3-b5d9-7af7765b1939/installer/0.log" Mar 19 12:14:21.808538 master-0 kubenswrapper[31830]: I0319 12:14:21.808396 31830 generic.go:334] "Generic (PLEG): container finished" podID="2e4442dc-19e2-42a3-b5d9-7af7765b1939" containerID="01fb0bb7c58b7c7fb9f4e6423408b3fdefa74b9c0303c15e18382b768dd8f028" exitCode=1 Mar 19 12:14:21.813351 master-0 kubenswrapper[31830]: I0319 12:14:21.813315 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-9mpxd_5238840f-3bef-43ad-ae68-ac187f073019/manager/1.log" Mar 19 12:14:21.814143 master-0 kubenswrapper[31830]: I0319 12:14:21.814102 31830 generic.go:334] "Generic (PLEG): container finished" podID="5238840f-3bef-43ad-ae68-ac187f073019" containerID="80a4b06853370526b35bd2b1f042248803efc6dea62506012de0886df3162aa5" exitCode=1 Mar 19 12:14:21.815887 master-0 kubenswrapper[31830]: I0319 12:14:21.815852 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_632bdf3b-0ba0-4874-a2ec-8396683c35c5/installer/0.log" Mar 19 12:14:21.815943 master-0 kubenswrapper[31830]: I0319 12:14:21.815895 31830 generic.go:334] "Generic (PLEG): container finished" podID="632bdf3b-0ba0-4874-a2ec-8396683c35c5" containerID="0db01150a16f0758697f4004ab15abe194def9a3c61ba179de9b9e1316f2ccf4" exitCode=1 Mar 19 12:14:21.819179 master-0 kubenswrapper[31830]: I0319 12:14:21.819131 31830 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="9fba0a39bbccef4b23df04adb335845926b8d52e014255c123724fd850a01216" exitCode=0 Mar 19 12:14:21.823054 master-0 kubenswrapper[31830]: I0319 12:14:21.823030 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-ftml6_19de6601-10d4-4112-a21f-0398d2b160d1/cluster-baremetal-operator/1.log" Mar 19 12:14:21.823705 master-0 kubenswrapper[31830]: I0319 12:14:21.823678 31830 generic.go:334] "Generic (PLEG): container finished" podID="19de6601-10d4-4112-a21f-0398d2b160d1" containerID="dbd72cd315e8f5fa6faaefc2be981b3f9a0d499a3d7eead86b3d71318cde1c34" exitCode=1 Mar 19 12:14:21.827402 master-0 kubenswrapper[31830]: I0319 12:14:21.827357 31830 generic.go:334] "Generic (PLEG): container finished" podID="06df1b1b-154e-46f9-aee0-79a137c6c928" containerID="136228bc884d9d84e6c34125e85b6f53a4eb9c869542bab1b85def5ce8ff08ff" exitCode=0 Mar 19 12:14:21.830627 master-0 kubenswrapper[31830]: I0319 12:14:21.830358 31830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-6m654_944eac68-e72b-4aed-b5dc-d7d9703178a3/snapshot-controller/4.log" Mar 19 12:14:21.830627 master-0 kubenswrapper[31830]: I0319 12:14:21.830440 31830 generic.go:334] "Generic (PLEG): container finished" podID="944eac68-e72b-4aed-b5dc-d7d9703178a3" containerID="c5a947116c6a12f89fdc1149ba43ce5607536b57e0862f6ca233d92f281d6f5b" exitCode=1 Mar 19 12:14:21.834354 master-0 kubenswrapper[31830]: I0319 12:14:21.834311 31830 generic.go:334] "Generic (PLEG): container finished" podID="12d71593-ee54-4321-bc0f-a24261873bd1" containerID="bce063a1f339b0aa356b146565a1aad286cac9d49e6c2b9606f7a6d9709c3159" exitCode=0 Mar 19 12:14:21.841784 master-0 kubenswrapper[31830]: I0319 12:14:21.841745 31830 generic.go:334] "Generic (PLEG): container finished" podID="89890698-dd48-486b-bd64-dc909aecd9e8" containerID="940ef039d55964b5c0d66bfc983f2f10d9883865e517e1851c87917cb03802e7" exitCode=0 Mar 19 12:14:21.845030 master-0 kubenswrapper[31830]: I0319 12:14:21.845003 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-j2w8z_919daf8d-763a-44bc-8916-86b425a27cbd/manager/1.log" Mar 19 12:14:21.845443 master-0 kubenswrapper[31830]: I0319 12:14:21.845408 31830 generic.go:334] "Generic (PLEG): container finished" podID="919daf8d-763a-44bc-8916-86b425a27cbd" containerID="48baf89d0a5776fb35854b24f12ca1544d0d250398de394c850b09cf7a229ce1" exitCode=1 Mar 19 12:14:21.849969 master-0 kubenswrapper[31830]: I0319 12:14:21.849927 31830 generic.go:334] "Generic (PLEG): container finished" podID="a9d191d1-631d-4091-af8b-382283c18a5a" containerID="ca89a41464eb0e27fe90d37c782e7129d81c40bb812cea238d07969b1741e6d0" exitCode=0 Mar 19 12:14:21.859411 master-0 kubenswrapper[31830]: I0319 12:14:21.859289 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-qv4cg_1089ea24-add9-482e-9276-e6ded12052d7/kube-apiserver-operator/2.log" Mar 19 12:14:21.859411 master-0 kubenswrapper[31830]: I0319 12:14:21.859351 31830 generic.go:334] "Generic (PLEG): container finished" podID="1089ea24-add9-482e-9276-e6ded12052d7" containerID="7b70d5a46fbdbc272ee13227763b5a028d2f93b2e62fbbeaef054faab0e08e37" exitCode=255 Mar 19 12:14:21.863313 master-0 kubenswrapper[31830]: I0319 12:14:21.863268 31830 generic.go:334] "Generic (PLEG): container finished" podID="13503fef-09b2-4dbe-9537-a5b361e7b591" containerID="02cfc804dc670307f6eb25b2923269cce58d61ddff2ed2ded28891fde86083af" exitCode=0 Mar 19 12:14:21.867463 master-0 kubenswrapper[31830]: I0319 12:14:21.867426 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-wd4nx_8414b6b0-ee16-47a5-982b-ee58b136cfcf/approver/1.log" Mar 19 12:14:21.867917 master-0 kubenswrapper[31830]: I0319 12:14:21.867875 31830 generic.go:334] "Generic (PLEG): container finished" podID="8414b6b0-ee16-47a5-982b-ee58b136cfcf" containerID="10c6078f6bb7ab73c8304b00dbc345f2f9442775840c07f5fbb58265a93f7893" exitCode=1 Mar 19 12:14:21.877120 master-0 kubenswrapper[31830]: I0319 12:14:21.877081 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-tql86_44469a78-9300-4260-89e9-ea939de1357b/control-plane-machine-set-operator/0.log" Mar 19 12:14:21.877120 master-0 kubenswrapper[31830]: I0319 12:14:21.877136 31830 generic.go:334] "Generic (PLEG): container finished" 
podID="44469a78-9300-4260-89e9-ea939de1357b" containerID="bcbe72e4cc3e493a5ae6c052d3dcfb298a861d9613583852bbc5958392be50c4" exitCode=1 Mar 19 12:14:21.879319 master-0 kubenswrapper[31830]: I0319 12:14:21.879291 31830 generic.go:334] "Generic (PLEG): container finished" podID="f08c5930-44f0-48e4-80dd-2563f2733b2f" containerID="41d4637f09562b9b79d583fb65c9acfd7f81986cff143ad48c1c09b266f39b23" exitCode=0 Mar 19 12:14:21.887261 master-0 kubenswrapper[31830]: I0319 12:14:21.887231 31830 generic.go:334] "Generic (PLEG): container finished" podID="0f97d998-530c-4d9d-a030-ca1d9d2d4490" containerID="fe8804b9f205d5f40aba452ae8167e7ca2d2057bbd5a93b9e42d8ec2d88c8b07" exitCode=0 Mar 19 12:14:21.889285 master-0 kubenswrapper[31830]: I0319 12:14:21.889231 31830 generic.go:334] "Generic (PLEG): container finished" podID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerID="a17333f8b7653c93420e9827fce00e5a871f02fd861b2a225722f6e8fbb5e010" exitCode=0 Mar 19 12:14:21.892392 master-0 kubenswrapper[31830]: I0319 12:14:21.892369 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_4b49f09f-2efa-4657-9f5a-fbddd42bee0d/installer/0.log" Mar 19 12:14:21.892469 master-0 kubenswrapper[31830]: I0319 12:14:21.892413 31830 generic.go:334] "Generic (PLEG): container finished" podID="4b49f09f-2efa-4657-9f5a-fbddd42bee0d" containerID="1f0110e6404807316fe552282de736e25a5c73a98ca28c762d1ca02e35c0a306" exitCode=1 Mar 19 12:14:21.894089 master-0 kubenswrapper[31830]: I0319 12:14:21.894042 31830 generic.go:334] "Generic (PLEG): container finished" podID="b0f5939c-48b1-4d6c-9712-9128a78d603b" containerID="3cb3f801dd00591244b19b3ad51ca78e956ed275b4329bac7bcfc1f2f8080cd6" exitCode=0 Mar 19 12:14:21.896091 master-0 kubenswrapper[31830]: I0319 12:14:21.896055 31830 generic.go:334] "Generic (PLEG): container finished" podID="c2dbd8b3-0e02-4747-a166-80aa6a94b060" containerID="697b28a330e52c45053a0bb858d1df6049dfd854ab75b1f95587cbc7874588cd" exitCode=0 Mar 19 12:14:21.896091 master-0 kubenswrapper[31830]: I0319 12:14:21.896088 31830 generic.go:334] "Generic (PLEG): container finished" podID="c2dbd8b3-0e02-4747-a166-80aa6a94b060" containerID="2457fc795f5fa01ac43b0f615c5a28446422acb5259e051c1c008795c84b021b" exitCode=0 Mar 19 12:14:21.896190 master-0 kubenswrapper[31830]: I0319 12:14:21.896099 31830 generic.go:334] "Generic (PLEG): container finished" podID="c2dbd8b3-0e02-4747-a166-80aa6a94b060" containerID="58b2ce2cf7ade5f0117d8bf2599516b6d2046b5a2b2cff339f1186030594c1b8" exitCode=0 Mar 19 12:14:21.903129 master-0 kubenswrapper[31830]: I0319 12:14:21.903067 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-btppx_b80027fd-7b39-477a-a337-ff9bb08e7eeb/ingress-operator/5.log" Mar 19 12:14:21.903529 master-0 kubenswrapper[31830]: I0319 12:14:21.903494 31830 generic.go:334] "Generic (PLEG): container finished" podID="b80027fd-7b39-477a-a337-ff9bb08e7eeb" containerID="f6cff670ffd3b7d67c924d90e3c87c5305a542b098fae4298d16010aa46c7cd3" exitCode=1 Mar 19 12:14:21.933017 master-0 kubenswrapper[31830]: I0319 12:14:21.932950 31830 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="af7ab2de52b543dbb0460a9ad1ef51b497e5cd2bc41457946ff4763f02848a63" exitCode=0 Mar 19 12:14:21.933017 master-0 kubenswrapper[31830]: I0319 12:14:21.933005 31830 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" 
containerID="3feac3c251ff91bcd1b3442311df2d939efe2cd53ade12c46efdb03023c1d996" exitCode=0 Mar 19 12:14:21.933017 master-0 kubenswrapper[31830]: I0319 12:14:21.933019 31830 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="0ee632c730d638e023a5c04cff8a8c19cb288483cbace4dc6c5c42638a2423e0" exitCode=0 Mar 19 12:14:21.950659 master-0 kubenswrapper[31830]: I0319 12:14:21.950617 31830 generic.go:334] "Generic (PLEG): container finished" podID="b425669d-6f80-4a2b-b2f2-5c6766654c6c" containerID="4f12a6a6377eb63e234161ff939d40e45bfc8d6ae4fa1554dca2cf62421fb52b" exitCode=0 Mar 19 12:14:21.977198 master-0 kubenswrapper[31830]: E0319 12:14:21.977104 31830 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 19 12:14:22.199462 master-0 kubenswrapper[31830]: I0319 12:14:22.199331 31830 manager.go:324] Recovery completed Mar 19 12:14:22.328541 master-0 kubenswrapper[31830]: I0319 12:14:22.328472 31830 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 19 12:14:22.328541 master-0 kubenswrapper[31830]: I0319 12:14:22.328524 31830 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 19 12:14:22.328782 master-0 kubenswrapper[31830]: I0319 12:14:22.328564 31830 state_mem.go:36] "Initialized new in-memory state store" Mar 19 12:14:22.328856 master-0 kubenswrapper[31830]: I0319 12:14:22.328826 31830 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 19 12:14:22.328897 master-0 kubenswrapper[31830]: I0319 12:14:22.328848 31830 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 19 12:14:22.328897 master-0 kubenswrapper[31830]: I0319 12:14:22.328873 31830 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Mar 19 12:14:22.328897 master-0 kubenswrapper[31830]: I0319 12:14:22.328882 31830 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Mar 19 12:14:22.328897 master-0 kubenswrapper[31830]: I0319 12:14:22.328890 31830 policy_none.go:49] "None policy: Start" Mar 19 12:14:22.334409 master-0 kubenswrapper[31830]: I0319 12:14:22.334339 31830 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 19 12:14:22.334409 master-0 kubenswrapper[31830]: I0319 12:14:22.334417 31830 state_mem.go:35] "Initializing new in-memory state store" Mar 19 12:14:22.334738 master-0 kubenswrapper[31830]: I0319 12:14:22.334709 31830 state_mem.go:75] "Updated machine memory state" Mar 19 12:14:22.334738 master-0 kubenswrapper[31830]: I0319 12:14:22.334728 31830 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Mar 19 12:14:22.354774 master-0 kubenswrapper[31830]: I0319 12:14:22.354716 31830 manager.go:334] "Starting Device Plugin manager" Mar 19 12:14:22.355009 master-0 kubenswrapper[31830]: I0319 12:14:22.354818 31830 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 19 12:14:22.355009 master-0 kubenswrapper[31830]: I0319 12:14:22.354835 31830 server.go:79] "Starting device plugin registration server" Mar 19 12:14:22.355279 master-0 kubenswrapper[31830]: I0319 12:14:22.355248 31830 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 19 12:14:22.355349 master-0 kubenswrapper[31830]: I0319 12:14:22.355270 31830 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 19 12:14:22.355536 master-0 kubenswrapper[31830]: I0319 12:14:22.355502 31830 
plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 19 12:14:22.355609 master-0 kubenswrapper[31830]: I0319 12:14:22.355589 31830 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 19 12:14:22.355609 master-0 kubenswrapper[31830]: I0319 12:14:22.355603 31830 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 19 12:14:22.377651 master-0 kubenswrapper[31830]: I0319 12:14:22.377568 31830 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0"] Mar 19 12:14:22.378906 master-0 kubenswrapper[31830]: I0319 12:14:22.378862 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d2d73d5870e62554bb684d309080c493974123e3d07fe8faf016c90bfd3fdd4" Mar 19 12:14:22.378994 master-0 kubenswrapper[31830]: I0319 12:14:22.378926 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bc9b9c94d7c2fc35e88bdf943a6e373d9be7c1dc5c7edff2198406e6c44db25" Mar 19 12:14:22.379109 master-0 kubenswrapper[31830]: I0319 12:14:22.378946 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"20d447d60e6c323ac2a99fb9005538b9f698220ad800f2a9d7a82ebdd391df17"} Mar 19 12:14:22.379109 master-0 kubenswrapper[31830]: I0319 12:14:22.379105 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"6b554ade444a2218312faf004411e7ca5ff136f234fd5270edc3b29df56f6e17"} Mar 19 12:14:22.379195 master-0 kubenswrapper[31830]: I0319 12:14:22.379121 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"b1a54e1d5a4e1d27db12da7c6949a0237da9f713c6a17f5af4237b1c8b03cbfa"} Mar 19 12:14:22.379195 master-0 kubenswrapper[31830]: I0319 12:14:22.379131 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"e95d63ff24c648e1680e38f824e92cec0fcea7bc20cdade312b98b0468aad916"} Mar 19 12:14:22.379195 master-0 kubenswrapper[31830]: I0319 12:14:22.379159 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerStarted","Data":"fcec6c469a1150ebd576b3e8ddd08ae79f306b35899ebd8eb5044a4ccd5c6c61"} Mar 19 12:14:22.379195 master-0 kubenswrapper[31830]: I0319 12:14:22.379172 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerStarted","Data":"d4b6f2e178f5cea03cca73846d1f496d006bc91e2a6e21d8cb7ab57e7c076671"} Mar 19 12:14:22.379195 master-0 kubenswrapper[31830]: I0319 12:14:22.379181 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerStarted","Data":"6e5f36c23efb75db8a09134649847dcb43474b94fa919dd3367661556f399de4"} Mar 19 12:14:22.379195 master-0 kubenswrapper[31830]: I0319 12:14:22.379190 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerDied","Data":"04102fb37d09b73e728e34206b1d91a20ab150cf6fe0171a324821c07888079f"} Mar 19 12:14:22.379768 master-0 kubenswrapper[31830]: I0319 12:14:22.379199 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerStarted","Data":"8b160a1a52470caaf8eb5167c80599083e3f1829f2580cc4817859648d8bb802"} Mar 19 12:14:22.379768 master-0 kubenswrapper[31830]: I0319 12:14:22.379535 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa598cee3a86e2c04eff522555d0cdf5e0216e7c4e188a8334de9e13d56ec286" Mar 19 12:14:22.386604 master-0 kubenswrapper[31830]: I0319 12:14:22.380367 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0904905367e561b547f2af7eae1570bb91ab634506393bc2f83371ecfe7fbc0" Mar 19 12:14:22.386604 master-0 kubenswrapper[31830]: I0319 12:14:22.380489 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1783b50c5a08e2a42241bed3f2df9ef9e7315549e4393a5e98fdcdce6ecef6e" Mar 19 12:14:22.386604 master-0 kubenswrapper[31830]: I0319 12:14:22.380505 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7220eeff67efce450283cc72bc4e2acf7316ae81a06fc10749f8bb6f974b934b" Mar 19 12:14:22.386604 master-0 kubenswrapper[31830]: I0319 12:14:22.380561 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2268d57847d1028cfcb3c36b8c37f3a09a3721c2f716f744757dbffd1bb03d4" Mar 19 12:14:22.386604 master-0 kubenswrapper[31830]: I0319 12:14:22.380576 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfaade6a812c1fae7dc2bc47f01477e66bb0563b115dfa8becda8b83dc0a10b7" Mar 19 12:14:22.386604 master-0 kubenswrapper[31830]: I0319 12:14:22.380594 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c8244ac71cff666f8f31eda66e91f3ec8411550f1be8d391239277f0b7cf02b" Mar 19 12:14:22.386604 master-0 kubenswrapper[31830]: I0319 12:14:22.380605 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"85ac3bdc63293b181e3d779f2c3dd340478df86a724f42e86c25f67c2e97c503"} Mar 19 12:14:22.386604 master-0 kubenswrapper[31830]: I0319 12:14:22.380633 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"0b5ac637490a34aebef118fd54fa4f10821c561174d2eea7d5ba728bc39d30a3"} Mar 19 12:14:22.386604 master-0 kubenswrapper[31830]: I0319 12:14:22.380649 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"8721a1bcca3e6b8ad33aab266086c8fc37d6d5fcfd1c38b4b527628964b85d3a"} Mar 19 12:14:22.386604 
master-0 kubenswrapper[31830]: I0319 12:14:22.380659 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"565f9c73fa36f45fb9cd49587a2aa91819aea7c5a1fe5e602d37b0a519ad8eea"} Mar 19 12:14:22.386604 master-0 kubenswrapper[31830]: I0319 12:14:22.380668 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"49208618a1a605bcca0e7a0e4a618f3e1b277ea51f314ec7cb82b073d6261eb8"} Mar 19 12:14:22.386604 master-0 kubenswrapper[31830]: I0319 12:14:22.380677 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerDied","Data":"9fba0a39bbccef4b23df04adb335845926b8d52e014255c123724fd850a01216"} Mar 19 12:14:22.386604 master-0 kubenswrapper[31830]: I0319 12:14:22.380687 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"f381b85f9130b76eda5dc167d27eb69ac9b6f2de032bdb231577387d3f19b35d"} Mar 19 12:14:22.386604 master-0 kubenswrapper[31830]: I0319 12:14:22.380727 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed283c061d1fd79e9b8f04b4ebc51756f0469a7d30532249627ffce7936f190b" Mar 19 12:14:22.386604 master-0 kubenswrapper[31830]: I0319 12:14:22.380750 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2035c8e72f2b89c4f96d115722ef5f74b915d093ec98a02ef0fa3a58ae56a155" Mar 19 12:14:22.386604 master-0 kubenswrapper[31830]: I0319 12:14:22.380780 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"8e7a82869988463543d3d8dd1f0b5fe3","Type":"ContainerStarted","Data":"7783994ea3804af3822e1e8ef880d160160be30c6cc27242405255670e8fc218"} Mar 19 12:14:22.386604 master-0 kubenswrapper[31830]: I0319 12:14:22.380815 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"8e7a82869988463543d3d8dd1f0b5fe3","Type":"ContainerStarted","Data":"27ccdb8fe17b3c5cb9acf1759072b6837f5312b119b69e4b34ee0c362bd4382c"} Mar 19 12:14:22.386604 master-0 kubenswrapper[31830]: I0319 12:14:22.380925 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df06fa6144150d2fd73d9f262bf2cf21b2895ff0830d1e0b601df841982f89d6" Mar 19 12:14:22.386604 master-0 kubenswrapper[31830]: I0319 12:14:22.380974 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"9c85ea03175b078051d022f10609cbe5a9f4cf523155732a5c478d72bb14664e"} Mar 19 12:14:22.386604 master-0 kubenswrapper[31830]: I0319 12:14:22.380986 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"e82f2fc3d8273fe92f80fac6c311d17b7083f322a8c31b9e4e35d22dddf4adb6"} Mar 19 12:14:22.386604 master-0 kubenswrapper[31830]: I0319 12:14:22.380997 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"bd3a17cd87fde6f7144b0e322661921d9832fa6483a57510e51852051cbeb528"} Mar 19 12:14:22.386604 master-0 kubenswrapper[31830]: I0319 12:14:22.381007 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"39e0673977f4f7234890fa98a05a4d43d9da817767f59ad368c824fe6d9cdda5"} Mar 19 12:14:22.386604 master-0 kubenswrapper[31830]: I0319 12:14:22.381018 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"06df91d89c735b834fc346a4f7854eb6c43febaa5e7607e925c686e24ccb4eda"} Mar 19 12:14:22.386604 master-0 kubenswrapper[31830]: I0319 12:14:22.381026 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"af7ab2de52b543dbb0460a9ad1ef51b497e5cd2bc41457946ff4763f02848a63"} Mar 19 12:14:22.386604 master-0 kubenswrapper[31830]: I0319 12:14:22.381040 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"3feac3c251ff91bcd1b3442311df2d939efe2cd53ade12c46efdb03023c1d996"} Mar 19 12:14:22.386604 master-0 kubenswrapper[31830]: I0319 12:14:22.381052 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"0ee632c730d638e023a5c04cff8a8c19cb288483cbace4dc6c5c42638a2423e0"} Mar 19 12:14:22.386604 master-0 kubenswrapper[31830]: I0319 12:14:22.381075 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"2ea49210674ab53911da00e8c007432ee001baf1726a3c4349603d4b14736471"} Mar 19 12:14:22.386604 master-0 kubenswrapper[31830]: I0319 12:14:22.381090 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"09672015532ae9d1d74ae4d426cd904b","Type":"ContainerStarted","Data":"3fff7305ffab3c7b2d64fb017b4d322893f65a346d3d05dc9207a0c3f727bb4b"} Mar 19 12:14:22.386604 master-0 kubenswrapper[31830]: I0319 12:14:22.381102 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"09672015532ae9d1d74ae4d426cd904b","Type":"ContainerStarted","Data":"14e2eab8d6fc7f70b2c656df6e5623f56e87c29ceaaedf3b47b4662d233279d5"} Mar 19 12:14:22.386604 master-0 kubenswrapper[31830]: I0319 12:14:22.381113 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"09672015532ae9d1d74ae4d426cd904b","Type":"ContainerStarted","Data":"e2254e5955e606c47be9604d12c39e06178d4d59ccf279a6986ce5edd6dc066e"} Mar 19 12:14:22.386604 master-0 kubenswrapper[31830]: I0319 12:14:22.381131 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"09672015532ae9d1d74ae4d426cd904b","Type":"ContainerStarted","Data":"0caac3ca6bbe34a0e2d497521111d7392578df46354c8eb9456dc2e8b18fadb9"} Mar 19 12:14:22.386604 master-0 kubenswrapper[31830]: I0319 12:14:22.381141 31830 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"09672015532ae9d1d74ae4d426cd904b","Type":"ContainerStarted","Data":"401877adce8e78dfdd3ac293a53a75da77fa4a3177086a087aa6915ac4d36604"} Mar 19 12:14:22.386604 master-0 kubenswrapper[31830]: I0319 12:14:22.381165 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd02abc3df1ea2ca997096da3d27136acff3102126289b07e7fa867e530a0c53" Mar 19 12:14:22.386604 master-0 kubenswrapper[31830]: I0319 12:14:22.381199 31830 scope.go:117] "RemoveContainer" containerID="a17333f8b7653c93420e9827fce00e5a871f02fd861b2a225722f6e8fbb5e010" Mar 19 12:14:22.396046 master-0 kubenswrapper[31830]: E0319 12:14:22.395820 31830 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 19 12:14:22.404376 master-0 kubenswrapper[31830]: E0319 12:14:22.404322 31830 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-master-0\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:14:22.458882 master-0 kubenswrapper[31830]: I0319 12:14:22.456087 31830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 12:14:22.462819 master-0 kubenswrapper[31830]: I0319 12:14:22.459159 31830 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 12:14:22.462819 master-0 kubenswrapper[31830]: I0319 12:14:22.459199 31830 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 12:14:22.462819 master-0 kubenswrapper[31830]: I0319 12:14:22.459209 31830 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 12:14:22.462819 master-0 kubenswrapper[31830]: I0319 12:14:22.459328 31830 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 19 12:14:22.469819 master-0 kubenswrapper[31830]: I0319 12:14:22.469382 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:22.469819 master-0 kubenswrapper[31830]: I0319 12:14:22.469460 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:22.469819 master-0 kubenswrapper[31830]: I0319 12:14:22.469496 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:14:22.469819 master-0 kubenswrapper[31830]: I0319 12:14:22.469539 31830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:22.469819 master-0 kubenswrapper[31830]: I0319 12:14:22.469614 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:14:22.469819 master-0 kubenswrapper[31830]: I0319 12:14:22.469636 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:14:22.469819 master-0 kubenswrapper[31830]: I0319 12:14:22.469659 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 19 12:14:22.469819 master-0 kubenswrapper[31830]: I0319 12:14:22.469679 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:14:22.469819 master-0 kubenswrapper[31830]: I0319 12:14:22.469701 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:14:22.469819 master-0 kubenswrapper[31830]: I0319 12:14:22.469720 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:22.469819 master-0 kubenswrapper[31830]: I0319 12:14:22.469742 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:14:22.469819 master-0 kubenswrapper[31830]: I0319 12:14:22.469761 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/09672015532ae9d1d74ae4d426cd904b-cert-dir\") pod 
\"kube-controller-manager-master-0\" (UID: \"09672015532ae9d1d74ae4d426cd904b\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:14:22.469819 master-0 kubenswrapper[31830]: I0319 12:14:22.469782 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 19 12:14:22.469819 master-0 kubenswrapper[31830]: I0319 12:14:22.469861 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:14:22.470540 master-0 kubenswrapper[31830]: I0319 12:14:22.469881 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:14:22.470540 master-0 kubenswrapper[31830]: I0319 12:14:22.469919 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:14:22.470540 master-0 kubenswrapper[31830]: I0319 12:14:22.469957 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/09672015532ae9d1d74ae4d426cd904b-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"09672015532ae9d1d74ae4d426cd904b\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:14:22.470540 master-0 kubenswrapper[31830]: I0319 12:14:22.470004 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:14:22.470540 master-0 kubenswrapper[31830]: I0319 12:14:22.470029 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:14:22.470540 master-0 kubenswrapper[31830]: I0319 12:14:22.470056 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:22.487823 master-0 kubenswrapper[31830]: I0319 12:14:22.485514 31830 kubelet_node_status.go:115] "Node was previously registered" 
node="master-0" Mar 19 12:14:22.487823 master-0 kubenswrapper[31830]: I0319 12:14:22.485668 31830 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Mar 19 12:14:22.501403 master-0 kubenswrapper[31830]: E0319 12:14:22.501347 31830 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Mar 19 12:14:22.502021 master-0 kubenswrapper[31830]: E0319 12:14:22.501351 31830 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"openshift-kube-scheduler-master-0\" already exists" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:14:22.571487 master-0 kubenswrapper[31830]: I0319 12:14:22.570368 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:22.571487 master-0 kubenswrapper[31830]: I0319 12:14:22.570415 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:22.571487 master-0 kubenswrapper[31830]: I0319 12:14:22.570439 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:14:22.571487 master-0 kubenswrapper[31830]: I0319 12:14:22.570538 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:22.571487 master-0 kubenswrapper[31830]: I0319 12:14:22.570659 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:14:22.571487 master-0 kubenswrapper[31830]: I0319 12:14:22.570739 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:14:22.571487 master-0 kubenswrapper[31830]: I0319 12:14:22.570807 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:22.571487 master-0 kubenswrapper[31830]: I0319 12:14:22.570840 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 19 12:14:22.571487 master-0 kubenswrapper[31830]: I0319 12:14:22.570871 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:14:22.571487 master-0 kubenswrapper[31830]: I0319 12:14:22.570984 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:14:22.571487 master-0 kubenswrapper[31830]: I0319 12:14:22.571040 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:14:22.571487 master-0 kubenswrapper[31830]: I0319 12:14:22.571067 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:14:22.571487 master-0 kubenswrapper[31830]: I0319 12:14:22.571084 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:14:22.571487 master-0 kubenswrapper[31830]: I0319 12:14:22.571100 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:14:22.571487 master-0 kubenswrapper[31830]: I0319 12:14:22.571122 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/09672015532ae9d1d74ae4d426cd904b-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"09672015532ae9d1d74ae4d426cd904b\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:14:22.571487 master-0 kubenswrapper[31830]: I0319 12:14:22.571137 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 19 12:14:22.571487 master-0 kubenswrapper[31830]: I0319 12:14:22.571153 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:14:22.571487 master-0 kubenswrapper[31830]: I0319 12:14:22.571147 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:14:22.571487 master-0 kubenswrapper[31830]: I0319 12:14:22.571166 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:14:22.571487 master-0 kubenswrapper[31830]: I0319 12:14:22.571182 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:22.571487 master-0 kubenswrapper[31830]: I0319 12:14:22.571197 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/09672015532ae9d1d74ae4d426cd904b-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"09672015532ae9d1d74ae4d426cd904b\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:14:22.571487 master-0 kubenswrapper[31830]: I0319 12:14:22.571200 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 19 12:14:22.571487 master-0 kubenswrapper[31830]: I0319 12:14:22.571210 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:14:22.571487 master-0 kubenswrapper[31830]: I0319 12:14:22.571234 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:14:22.571487 master-0 kubenswrapper[31830]: I0319 12:14:22.571238 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:14:22.571487 master-0 kubenswrapper[31830]: I0319 12:14:22.571251 31830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:14:22.571487 master-0 kubenswrapper[31830]: I0319 12:14:22.571311 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 19 12:14:22.571487 master-0 kubenswrapper[31830]: I0319 12:14:22.571341 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:14:22.571487 master-0 kubenswrapper[31830]: I0319 12:14:22.571366 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:14:22.571487 master-0 kubenswrapper[31830]: I0319 12:14:22.571390 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:14:22.571487 master-0 kubenswrapper[31830]: I0319 12:14:22.571424 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:14:22.571487 master-0 kubenswrapper[31830]: I0319 12:14:22.571459 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:22.571487 master-0 kubenswrapper[31830]: I0319 12:14:22.571485 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:22.571487 master-0 kubenswrapper[31830]: I0319 12:14:22.571518 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:22.572696 master-0 kubenswrapper[31830]: 
I0319 12:14:22.571550 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:14:22.572696 master-0 kubenswrapper[31830]: I0319 12:14:22.571580 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:22.572696 master-0 kubenswrapper[31830]: I0319 12:14:22.571618 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:22.572696 master-0 kubenswrapper[31830]: I0319 12:14:22.571666 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/09672015532ae9d1d74ae4d426cd904b-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"09672015532ae9d1d74ae4d426cd904b\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:14:22.572696 master-0 kubenswrapper[31830]: I0319 12:14:22.571699 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 12:14:22.572696 master-0 kubenswrapper[31830]: I0319 12:14:22.571737 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/09672015532ae9d1d74ae4d426cd904b-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"09672015532ae9d1d74ae4d426cd904b\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:14:22.605643 master-0 kubenswrapper[31830]: I0319 12:14:22.605582 31830 apiserver.go:52] "Watching apiserver" Mar 19 12:14:22.621677 master-0 kubenswrapper[31830]: I0319 12:14:22.621634 31830 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 19 12:14:22.623065 master-0 kubenswrapper[31830]: I0319 12:14:22.623013 31830 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-cluster-version/cluster-version-operator-7d58488df-czxxt","openshift-ingress-canary/ingress-canary-w8jqs","openshift-kube-scheduler/installer-3-master-0","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-machine-config-operator/machine-config-server-g7mqg","openshift-monitoring/metrics-server-86889676f6-phlgd","assisted-installer/assisted-installer-controller-b6qm2","openshift-network-diagnostics/network-check-target-v66z4","openshift-network-operator/iptables-alerter-276t5","openshift-network-operator/network-operator-7bd846bfc4-nb8bk","openshift-service-ca/service-ca-79bc6b8d76-5rbp5","openshift-machine-api/machine-api-operator-6fbb6cf6f9-75w5c","openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-6wzws","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-ptqdh","openshift-network-node-identity/network-node-identity-wd4nx","openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4","openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tql86","openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-z8xf6","openshift-monitoring/telemeter-client-6975d7769d-nvxfv","openshift-multus/multus-w82cg","openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654","openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4","openshift-ingress/router-default-7dcf5569b5-lkpgl","openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6","openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw","openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-2chdm","openshift-etcd/installer-2-master-0","openshift-ingress-operator/ingress-operator-66b84d69b-btppx","openshift-machine-api/cluster-autoscaler-operator-866dc4744-fblgs","openshift-marketplace/redhat-operators-fbd5s","openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q","openshift-monitoring/node-exporter-lpndz","openshift-multus/multus-additional-cni-plugins-2z4h8","openshift-dns-operator/dns-operator-9c5679d8f-z6kvm","openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r","openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj","openshift-multus/network-metrics-daemon-6t6sn","openshift-kube-controller-manager/installer-3-master-0","openshift-kube-controller-manager/installer-2-master-0","openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-nr2k4","openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz","openshift-insights/insights-operator-68bf6ff9d6-djdmh","openshift-kube-scheduler/installer-5-master-0","openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d","openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq","openshift-marketplace/marketplace-operator-89ccd998f-pr7gk","openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cg9pq","openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8","openshift-controller-manager/controller-manager-7cdddc6cb-q222c","openshift-kube-apiserver/installer-3-master-0","openshift-machine-config-operator/machine-config-daemon-ms2wn","openshift-marketplace/certified-operators-tdnkp","opens
hift-multus/multus-admission-controller-58c9f8fc64-dqgd9","openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t","openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x","openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9","openshift-kube-apiserver/installer-1-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-dns/node-resolver-jqzxt","openshift-cluster-node-tuning-operator/tuned-dc5br","openshift-etcd/etcd-master-0","openshift-etcd/installer-1-master-0","openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6","openshift-ovn-kubernetes/ovnkube-node-lk9x9","openshift-apiserver/apiserver-897cc986b-vpg2l","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt","openshift-dns/dns-default-zjdkm","openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fjn4b","openshift-kube-controller-manager/installer-4-master-0","openshift-marketplace/community-operators-s22fd","openshift-marketplace/redhat-marketplace-cjgpg","openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd","openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z","openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4","openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rdpvm","openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl","openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb","openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt","openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-cx8l9","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-scheduler/installer-4-master-0","openshift-kube-storage-version-migrator/migrator-8487694857-99fgs","openshift-monitoring/prometheus-operator-6c8df6d4b-qsrjj","openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg","openshift-network-diagnostics/network-check-source-b4bf74f6-6dmt7","openshift-cluster-machine-approver/machine-approver-5c6485487f-qv29l"] Mar 19 12:14:22.623397 master-0 kubenswrapper[31830]: I0319 12:14:22.623349 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-b6qm2" Mar 19 12:14:22.632880 master-0 kubenswrapper[31830]: I0319 12:14:22.632828 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 19 12:14:22.636532 master-0 kubenswrapper[31830]: I0319 12:14:22.636325 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 19 12:14:22.636532 master-0 kubenswrapper[31830]: I0319 12:14:22.636433 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 19 12:14:22.636746 master-0 kubenswrapper[31830]: I0319 12:14:22.636570 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 19 12:14:22.640295 master-0 kubenswrapper[31830]: I0319 12:14:22.637622 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 19 12:14:22.640295 master-0 kubenswrapper[31830]: I0319 12:14:22.637873 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 19 12:14:22.640295 master-0 kubenswrapper[31830]: I0319 12:14:22.637973 31830 kubelet.go:2566] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="b560fc59-8f41-478e-a914-b16b6c35032a" Mar 19 12:14:22.640295 master-0 kubenswrapper[31830]: I0319 12:14:22.638616 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 19 12:14:22.640295 master-0 kubenswrapper[31830]: I0319 12:14:22.639259 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 19 12:14:22.640295 master-0 kubenswrapper[31830]: I0319 12:14:22.639716 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 19 12:14:22.642287 master-0 kubenswrapper[31830]: I0319 12:14:22.642202 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 19 12:14:22.654574 master-0 kubenswrapper[31830]: I0319 12:14:22.653503 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 19 12:14:22.654574 master-0 kubenswrapper[31830]: I0319 12:14:22.654127 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 19 12:14:22.654574 master-0 kubenswrapper[31830]: I0319 12:14:22.654232 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 19 12:14:22.654910 master-0 kubenswrapper[31830]: I0319 12:14:22.654707 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 19 12:14:22.654910 master-0 kubenswrapper[31830]: I0319 12:14:22.654851 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 19 12:14:22.655000 master-0 kubenswrapper[31830]: 
I0319 12:14:22.654972 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 19 12:14:22.656901 master-0 kubenswrapper[31830]: I0319 12:14:22.655052 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 19 12:14:22.656901 master-0 kubenswrapper[31830]: I0319 12:14:22.655127 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 19 12:14:22.656901 master-0 kubenswrapper[31830]: I0319 12:14:22.655239 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 19 12:14:22.656901 master-0 kubenswrapper[31830]: I0319 12:14:22.655490 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 19 12:14:22.656901 master-0 kubenswrapper[31830]: I0319 12:14:22.655640 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 19 12:14:22.656901 master-0 kubenswrapper[31830]: I0319 12:14:22.655722 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 19 12:14:22.656901 master-0 kubenswrapper[31830]: I0319 12:14:22.656088 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 19 12:14:22.656901 master-0 kubenswrapper[31830]: I0319 12:14:22.656378 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 19 12:14:22.656901 master-0 kubenswrapper[31830]: I0319 12:14:22.656548 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 19 12:14:22.656901 master-0 kubenswrapper[31830]: I0319 12:14:22.656666 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 19 12:14:22.656901 master-0 kubenswrapper[31830]: I0319 12:14:22.656766 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 19 12:14:22.657430 master-0 kubenswrapper[31830]: I0319 12:14:22.657127 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 19 12:14:22.657430 master-0 kubenswrapper[31830]: I0319 12:14:22.657215 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 19 12:14:22.657430 master-0 kubenswrapper[31830]: I0319 12:14:22.657281 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 19 12:14:22.657430 master-0 kubenswrapper[31830]: I0319 12:14:22.657347 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 19 12:14:22.657430 master-0 kubenswrapper[31830]: I0319 12:14:22.657412 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 19 12:14:22.657629 master-0 kubenswrapper[31830]: I0319 12:14:22.657526 31830 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 19 12:14:22.659131 master-0 kubenswrapper[31830]: I0319 12:14:22.657715 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 19 12:14:22.659131 master-0 kubenswrapper[31830]: I0319 12:14:22.657855 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 19 12:14:22.659131 master-0 kubenswrapper[31830]: I0319 12:14:22.657950 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 19 12:14:22.659131 master-0 kubenswrapper[31830]: I0319 12:14:22.657973 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 19 12:14:22.659131 master-0 kubenswrapper[31830]: I0319 12:14:22.658030 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 19 12:14:22.659131 master-0 kubenswrapper[31830]: I0319 12:14:22.658114 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 19 12:14:22.659131 master-0 kubenswrapper[31830]: I0319 12:14:22.658193 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 19 12:14:22.659131 master-0 kubenswrapper[31830]: I0319 12:14:22.658327 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 19 12:14:22.659131 master-0 kubenswrapper[31830]: I0319 12:14:22.658425 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 19 12:14:22.659131 master-0 kubenswrapper[31830]: I0319 12:14:22.658464 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 19 12:14:22.659131 master-0 kubenswrapper[31830]: I0319 12:14:22.658616 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 19 12:14:22.659131 master-0 kubenswrapper[31830]: I0319 12:14:22.658624 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 19 12:14:22.659131 master-0 kubenswrapper[31830]: I0319 12:14:22.658696 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 19 12:14:22.659131 master-0 kubenswrapper[31830]: I0319 12:14:22.658849 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 19 12:14:22.659131 master-0 kubenswrapper[31830]: I0319 12:14:22.658875 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 19 12:14:22.659131 master-0 kubenswrapper[31830]: I0319 12:14:22.658939 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 19 12:14:22.659131 master-0 kubenswrapper[31830]: I0319 12:14:22.658965 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 
19 12:14:22.659131 master-0 kubenswrapper[31830]: I0319 12:14:22.659008 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 19 12:14:22.659131 master-0 kubenswrapper[31830]: I0319 12:14:22.659054 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 19 12:14:22.659131 master-0 kubenswrapper[31830]: I0319 12:14:22.659072 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 19 12:14:22.659131 master-0 kubenswrapper[31830]: I0319 12:14:22.659134 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 19 12:14:22.659131 master-0 kubenswrapper[31830]: I0319 12:14:22.659147 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 19 12:14:22.660031 master-0 kubenswrapper[31830]: I0319 12:14:22.659226 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 19 12:14:22.660031 master-0 kubenswrapper[31830]: I0319 12:14:22.659243 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 19 12:14:22.660031 master-0 kubenswrapper[31830]: I0319 12:14:22.659354 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 19 12:14:22.660031 master-0 kubenswrapper[31830]: I0319 12:14:22.659403 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 19 12:14:22.660031 master-0 kubenswrapper[31830]: I0319 12:14:22.659453 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 19 12:14:22.660031 master-0 kubenswrapper[31830]: I0319 12:14:22.659515 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 19 12:14:22.660031 master-0 kubenswrapper[31830]: I0319 12:14:22.659568 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 19 12:14:22.660031 master-0 kubenswrapper[31830]: I0319 12:14:22.659599 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 19 12:14:22.660031 master-0 kubenswrapper[31830]: I0319 12:14:22.659721 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 19 12:14:22.660031 master-0 kubenswrapper[31830]: I0319 12:14:22.659771 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 19 12:14:22.661594 master-0 kubenswrapper[31830]: I0319 12:14:22.661571 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 19 12:14:22.661687 master-0 kubenswrapper[31830]: I0319 12:14:22.661652 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 19 12:14:22.661734 master-0 kubenswrapper[31830]: 
I0319 12:14:22.661709 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 19 12:14:22.661734 master-0 kubenswrapper[31830]: I0319 12:14:22.661660 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 19 12:14:22.661863 master-0 kubenswrapper[31830]: I0319 12:14:22.661830 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 19 12:14:22.661863 master-0 kubenswrapper[31830]: I0319 12:14:22.661837 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 19 12:14:22.662732 master-0 kubenswrapper[31830]: I0319 12:14:22.662137 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 19 12:14:22.662732 master-0 kubenswrapper[31830]: I0319 12:14:22.662330 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 19 12:14:22.662732 master-0 kubenswrapper[31830]: I0319 12:14:22.662334 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 19 12:14:22.662732 master-0 kubenswrapper[31830]: I0319 12:14:22.662548 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 19 12:14:22.662732 master-0 kubenswrapper[31830]: I0319 12:14:22.662578 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 19 12:14:22.662993 master-0 kubenswrapper[31830]: I0319 12:14:22.662874 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 19 12:14:22.662993 master-0 kubenswrapper[31830]: I0319 12:14:22.662902 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 19 12:14:22.663075 master-0 kubenswrapper[31830]: I0319 12:14:22.663035 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 19 12:14:22.663075 master-0 kubenswrapper[31830]: I0319 12:14:22.663043 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 19 12:14:22.663149 master-0 kubenswrapper[31830]: I0319 12:14:22.663048 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 19 12:14:22.663297 master-0 kubenswrapper[31830]: I0319 12:14:22.663278 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 19 12:14:22.666546 master-0 kubenswrapper[31830]: I0319 12:14:22.663517 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 19 12:14:22.666546 master-0 kubenswrapper[31830]: I0319 12:14:22.663574 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 19 12:14:22.666546 master-0 kubenswrapper[31830]: I0319 12:14:22.663668 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" 
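
[editor's note] The volume entries in this capture come in matched pairs: reconciler_common.go:218 records "operationExecutor.MountVolume started" when the kubelet volume manager's reconciler picks a volume up, and operation_generator.go:637 records "MountVolume.SetUp succeeded" once the mount completes, with the same UniqueName appearing in both messages. When a node stalls at this stage, diffing the two sets points at the volume that never finished. Below is a minimal triage sketch, not part of this log: the file name mount-triage.go is made up for the example, and the regexes assume the backslash-escaped inner quoting (\") exactly as it appears in the lines above.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        // The two kubelet messages seen in this capture; inner quotes arrive
        // backslash-escaped (\"), so the patterns expect the literal \" form.
        started := regexp.MustCompile(`operationExecutor\.MountVolume started for volume .*?\(UniqueName: \\"([^\\]+)\\"`)
        succeeded := regexp.MustCompile(`MountVolume\.SetUp succeeded for volume .*?\(UniqueName: \\"([^\\]+)\\"`)

        begun := map[string]bool{}
        done := map[string]bool{}

        sc := bufio.NewScanner(os.Stdin)
        // Flattened captures like this one pack many entries per line; allow 1 MiB.
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
        for sc.Scan() {
            // A line may hold several fused entries, so collect every match.
            for _, m := range started.FindAllStringSubmatch(sc.Text(), -1) {
                begun[m[1]] = true
            }
            for _, m := range succeeded.FindAllStringSubmatch(sc.Text(), -1) {
                done[m[1]] = true
            }
        }
        if err := sc.Err(); err != nil {
            fmt.Fprintln(os.Stderr, "read:", err)
            os.Exit(1)
        }

        // Whatever was started but never confirmed is the place to look next.
        for name := range begun {
            if !done[name] {
                fmt.Println("no MountVolume.SetUp succeeded seen for:", name)
            }
        }
    }

Run it over the same journal stream this capture came from, e.g. journalctl -u kubelet.service --no-pager | go run mount-triage.go; any UniqueName it prints was started but never confirmed. In a healthy startup like the one logged here, that list drains to empty once the remaining SetUp messages arrive. [end note]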
Mar 19 12:14:22.666546 master-0 kubenswrapper[31830]: I0319 12:14:22.663846 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 19 12:14:22.666546 master-0 kubenswrapper[31830]: I0319 12:14:22.663913 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 19 12:14:22.666546 master-0 kubenswrapper[31830]: I0319 12:14:22.664517 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 19 12:14:22.666546 master-0 kubenswrapper[31830]: I0319 12:14:22.664675 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 19 12:14:22.666546 master-0 kubenswrapper[31830]: I0319 12:14:22.665441 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 19 12:14:22.666546 master-0 kubenswrapper[31830]: I0319 12:14:22.665940 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 19 12:14:22.666546 master-0 kubenswrapper[31830]: I0319 12:14:22.666200 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 19 12:14:22.666546 master-0 kubenswrapper[31830]: I0319 12:14:22.666556 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 19 12:14:22.667128 master-0 kubenswrapper[31830]: I0319 12:14:22.666918 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 19 12:14:22.667128 master-0 kubenswrapper[31830]: I0319 12:14:22.667037 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 19 12:14:22.667225 master-0 kubenswrapper[31830]: I0319 12:14:22.667146 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 19 12:14:22.667284 master-0 kubenswrapper[31830]: I0319 12:14:22.667251 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 19 12:14:22.668861 master-0 kubenswrapper[31830]: I0319 12:14:22.668822 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 19 12:14:22.674598 master-0 kubenswrapper[31830]: I0319 12:14:22.674560 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-os-release\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 12:14:22.674684 master-0 kubenswrapper[31830]: I0319 12:14:22.674628 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2151eb84-177e-459c-be71-f48465323ac2-config\") pod \"kube-controller-manager-operator-ff989d6cc-fjn4b\" (UID: \"2151eb84-177e-459c-be71-f48465323ac2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fjn4b" Mar 19 12:14:22.674684 master-0 
kubenswrapper[31830]: I0319 12:14:22.674663 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-cnibin\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 12:14:22.674747 master-0 kubenswrapper[31830]: I0319 12:14:22.674696 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-92c5d\" (UID: \"7241bf11-192e-47db-9d80-2324938ed34c\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d" Mar 19 12:14:22.674861 master-0 kubenswrapper[31830]: I0319 12:14:22.674723 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpdts\" (UniqueName: \"kubernetes.io/projected/9702fc8c-4fe0-413b-b2d4-db23021d42b8-kube-api-access-tpdts\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" Mar 19 12:14:22.674919 master-0 kubenswrapper[31830]: I0319 12:14:22.674882 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/284768b8-9d70-4cf7-bace-8adc6b587186-host-etc-kube\") pod \"network-operator-7bd846bfc4-nb8bk\" (UID: \"284768b8-9d70-4cf7-bace-8adc6b587186\") " pod="openshift-network-operator/network-operator-7bd846bfc4-nb8bk" Mar 19 12:14:22.675208 master-0 kubenswrapper[31830]: I0319 12:14:22.675175 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2151eb84-177e-459c-be71-f48465323ac2-config\") pod \"kube-controller-manager-operator-ff989d6cc-fjn4b\" (UID: \"2151eb84-177e-459c-be71-f48465323ac2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fjn4b" Mar 19 12:14:22.675248 master-0 kubenswrapper[31830]: I0319 12:14:22.675213 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/63c12a89-1b49-4eba-8f5a-551b10d2246b-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 12:14:22.675570 master-0 kubenswrapper[31830]: I0319 12:14:22.675544 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/7241bf11-192e-47db-9d80-2324938ed34c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-92c5d\" (UID: \"7241bf11-192e-47db-9d80-2324938ed34c\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d" Mar 19 12:14:22.675782 master-0 kubenswrapper[31830]: I0319 12:14:22.675750 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06df1b1b-154e-46f9-aee0-79a137c6c928-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-ptqdh\" (UID: \"06df1b1b-154e-46f9-aee0-79a137c6c928\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-ptqdh" Mar 19 12:14:22.675939 master-0 kubenswrapper[31830]: I0319 12:14:22.675916 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x252z\" (UniqueName: \"kubernetes.io/projected/aef8e03f-0363-4e13-b7ca-4fa871d77c62-kube-api-access-x252z\") pod \"openshift-config-operator-95bf4f4d-nhvl4\" (UID: \"aef8e03f-0363-4e13-b7ca-4fa871d77c62\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" Mar 19 12:14:22.675989 master-0 kubenswrapper[31830]: I0319 12:14:22.675961 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6" Mar 19 12:14:22.676039 master-0 kubenswrapper[31830]: I0319 12:14:22.675995 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f08c5930-44f0-48e4-80dd-2563f2733b2f-config\") pod \"openshift-apiserver-operator-d65958b8-mjs7x\" (UID: \"f08c5930-44f0-48e4-80dd-2563f2733b2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x" Mar 19 12:14:22.676235 master-0 kubenswrapper[31830]: I0319 12:14:22.676199 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b80027fd-7b39-477a-a337-ff9bb08e7eeb-bound-sa-token\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" Mar 19 12:14:22.676274 master-0 kubenswrapper[31830]: I0319 12:14:22.676260 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk\" (UID: \"d9ab6ec4-eec9-4d27-8b43-2aaf954f098f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk" Mar 19 12:14:22.676321 master-0 kubenswrapper[31830]: I0319 12:14:22.676297 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7044a7b3-4fac-40af-a31c-054a1a1db26b-cni-binary-copy\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 12:14:22.676359 master-0 kubenswrapper[31830]: I0319 12:14:22.676333 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3017b5e-178e-49de-89d2-817a18398203-config\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.676373 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwfg5\" (UniqueName: \"kubernetes.io/projected/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-kube-api-access-hwfg5\") pod \"olm-operator-5c9796789-8cldl\" (UID: 
\"87a3f546-e1c1-42a1-b80e-d45b6d5c0a04\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl" Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.676647 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mr6d\" (UniqueName: \"kubernetes.io/projected/beb562de-402b-4d9f-b5ed-090b60847a95-kube-api-access-9mr6d\") pod \"package-server-manager-7b95f86987-6j2nj\" (UID: \"beb562de-402b-4d9f-b5ed-090b60847a95\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj" Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.676689 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06df1b1b-154e-46f9-aee0-79a137c6c928-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-ptqdh\" (UID: \"06df1b1b-154e-46f9-aee0-79a137c6c928\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-ptqdh" Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.676856 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.676989 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-var-lib-cni-bin\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.677051 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1089ea24-add9-482e-9276-e6ded12052d7-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-qv4cg\" (UID: \"1089ea24-add9-482e-9276-e6ded12052d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg" Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.677072 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-multus-conf-dir\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.677154 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06df1b1b-154e-46f9-aee0-79a137c6c928-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-ptqdh\" (UID: \"06df1b1b-154e-46f9-aee0-79a137c6c928\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-ptqdh" Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.677283 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.678022 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1089ea24-add9-482e-9276-e6ded12052d7-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-qv4cg\" (UID: \"1089ea24-add9-482e-9276-e6ded12052d7\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg" Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.680310 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk\" (UID: \"d9ab6ec4-eec9-4d27-8b43-2aaf954f098f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk" Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.680370 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5mkm\" (UniqueName: \"kubernetes.io/projected/7241bf11-192e-47db-9d80-2324938ed34c-kube-api-access-s5mkm\") pod \"cluster-monitoring-operator-58845fbb57-92c5d\" (UID: \"7241bf11-192e-47db-9d80-2324938ed34c\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d" Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.680397 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert\") pod \"catalog-operator-68f85b4d6c-2trz4\" (UID: \"bdcdb23d-ef1f-45e2-b9ac-7abf707637b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4" Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.680420 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3017b5e-178e-49de-89d2-817a18398203-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.680443 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1089ea24-add9-482e-9276-e6ded12052d7-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-qv4cg\" (UID: \"1089ea24-add9-482e-9276-e6ded12052d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg" Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.680462 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/aef8e03f-0363-4e13-b7ca-4fa871d77c62-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-nhvl4\" (UID: \"aef8e03f-0363-4e13-b7ca-4fa871d77c62\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.680618 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3017b5e-178e-49de-89d2-817a18398203-serving-cert\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.680652 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: 
\"kubernetes.io/configmap/7044a7b3-4fac-40af-a31c-054a1a1db26b-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.680680 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shfs6\" (UniqueName: \"kubernetes.io/projected/7044a7b3-4fac-40af-a31c-054a1a1db26b-kube-api-access-shfs6\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.680707 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.680737 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/19de6601-10d4-4112-a21f-0398d2b160d1-images\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.680766 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls\") pod \"dns-operator-9c5679d8f-z6kvm\" (UID: \"ab54833d-e57b-479d-b171-68155f6566f1\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-z6kvm" Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.680789 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6wm6\" (UniqueName: \"kubernetes.io/projected/d3017b5e-178e-49de-89d2-817a18398203-kube-api-access-b6wm6\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.680851 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/284768b8-9d70-4cf7-bace-8adc6b587186-metrics-tls\") pod \"network-operator-7bd846bfc4-nb8bk\" (UID: \"284768b8-9d70-4cf7-bace-8adc6b587186\") " pod="openshift-network-operator/network-operator-7bd846bfc4-nb8bk" Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.680864 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.680915 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7044a7b3-4fac-40af-a31c-054a1a1db26b-cni-binary-copy\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.681180 31830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p6vn\" (UniqueName: \"kubernetes.io/projected/284768b8-9d70-4cf7-bace-8adc6b587186-kube-api-access-8p6vn\") pod \"network-operator-7bd846bfc4-nb8bk\" (UID: \"284768b8-9d70-4cf7-bace-8adc6b587186\") " pod="openshift-network-operator/network-operator-7bd846bfc4-nb8bk" Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.681208 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19de6601-10d4-4112-a21f-0398d2b160d1-config\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.681228 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw" Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.681420 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npc2t\" (UniqueName: \"kubernetes.io/projected/c2dbd8b3-0e02-4747-a166-80aa6a94b060-kube-api-access-npc2t\") pod \"cluster-olm-operator-67dcd4998-cg9pq\" (UID: \"c2dbd8b3-0e02-4747-a166-80aa6a94b060\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cg9pq" Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.681447 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/0f97d998-530c-4d9d-a030-ca1d9d2d4490-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-6wzws\" (UID: \"0f97d998-530c-4d9d-a030-ca1d9d2d4490\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-6wzws" Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.681471 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bst2w\" (UniqueName: \"kubernetes.io/projected/63c12a89-1b49-4eba-8f5a-551b10d2246b-kube-api-access-bst2w\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt" Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.681493 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2151eb84-177e-459c-be71-f48465323ac2-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-fjn4b\" (UID: \"2151eb84-177e-459c-be71-f48465323ac2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fjn4b" Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.681524 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-hostroot\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 12:14:22.685038 master-0 
kubenswrapper[31830]: I0319 12:14:22.681552 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b80027fd-7b39-477a-a337-ff9bb08e7eeb-trusted-ca\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx"
Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.681684 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvvk8\" (UniqueName: \"kubernetes.io/projected/0316c374-f812-4e0a-8645-727e8372f16e-kube-api-access-tvvk8\") pod \"network-check-source-b4bf74f6-6dmt7\" (UID: \"0316c374-f812-4e0a-8645-727e8372f16e\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-6dmt7"
Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.682436 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19de6601-10d4-4112-a21f-0398d2b160d1-config\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6"
Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.682825 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-proxy-tls\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw"
Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.683190 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2151eb84-177e-459c-be71-f48465323ac2-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-fjn4b\" (UID: \"2151eb84-177e-459c-be71-f48465323ac2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fjn4b"
Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.683600 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cert\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6"
Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.683812 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsk9d\" (UniqueName: \"kubernetes.io/projected/9ed2dbd1-aec4-4009-917a-933533912ab5-kube-api-access-gsk9d\") pod \"openshift-controller-manager-operator-8c94f4649-gx4w8\" (UID: \"9ed2dbd1-aec4-4009-917a-933533912ab5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8"
Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.683860 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx"
Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.683894 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9702fc8c-4fe0-413b-b2d4-db23021d42b8-serving-cert\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz"
Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.683926 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4hsp\" (UniqueName: \"kubernetes.io/projected/fe245927-c937-4ec7-ab83-4900bade72cf-kube-api-access-s4hsp\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.683957 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-system-cni-dir\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8"
Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.683208 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/0f97d998-530c-4d9d-a030-ca1d9d2d4490-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-6wzws\" (UID: \"0f97d998-530c-4d9d-a030-ca1d9d2d4490\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-6wzws"
Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.684285 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/284768b8-9d70-4cf7-bace-8adc6b587186-metrics-tls\") pod \"network-operator-7bd846bfc4-nb8bk\" (UID: \"284768b8-9d70-4cf7-bace-8adc6b587186\") " pod="openshift-network-operator/network-operator-7bd846bfc4-nb8bk"
Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.684300 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/19de6601-10d4-4112-a21f-0398d2b160d1-images\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6"
Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.684372 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3017b5e-178e-49de-89d2-817a18398203-config\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq"
Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.684716 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/aef8e03f-0363-4e13-b7ca-4fa871d77c62-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-nhvl4\" (UID: \"aef8e03f-0363-4e13-b7ca-4fa871d77c62\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4"
Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.684748 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ab54833d-e57b-479d-b171-68155f6566f1-metrics-tls\") pod \"dns-operator-9c5679d8f-z6kvm\" (UID: \"ab54833d-e57b-479d-b171-68155f6566f1\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-z6kvm"
Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.684886 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3017b5e-178e-49de-89d2-817a18398203-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq"
Mar 19 12:14:22.685038 master-0 kubenswrapper[31830]: I0319 12:14:22.685072 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-srv-cert\") pod \"catalog-operator-68f85b4d6c-2trz4\" (UID: \"bdcdb23d-ef1f-45e2-b9ac-7abf707637b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4"
Mar 19 12:14:22.688201 master-0 kubenswrapper[31830]: I0319 12:14:22.688159 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b80027fd-7b39-477a-a337-ff9bb08e7eeb-metrics-tls\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx"
Mar 19 12:14:22.688285 master-0 kubenswrapper[31830]: I0319 12:14:22.688179 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/63c12a89-1b49-4eba-8f5a-551b10d2246b-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.688461 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9702fc8c-4fe0-413b-b2d4-db23021d42b8-serving-cert\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.688543 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xpc2\" (UniqueName: \"kubernetes.io/projected/19de6601-10d4-4112-a21f-0398d2b160d1-kube-api-access-6xpc2\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.688576 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.688603 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.688629 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-system-cni-dir\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.688654 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7044a7b3-4fac-40af-a31c-054a1a1db26b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.688679 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h84l9\" (UniqueName: \"kubernetes.io/projected/f08c5930-44f0-48e4-80dd-2563f2733b2f-kube-api-access-h84l9\") pod \"openshift-apiserver-operator-d65958b8-mjs7x\" (UID: \"f08c5930-44f0-48e4-80dd-2563f2733b2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.688703 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.688718 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3017b5e-178e-49de-89d2-817a18398203-serving-cert\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.688728 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3017b5e-178e-49de-89d2-817a18398203-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.688754 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2151eb84-177e-459c-be71-f48465323ac2-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-fjn4b\" (UID: \"2151eb84-177e-459c-be71-f48465323ac2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fjn4b"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.688776 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/661b8957-a890-4032-9e57-45e2e0b35249-serving-cert\") pod \"service-ca-operator-b865698dc-sxsxt\" (UID: \"661b8957-a890-4032-9e57-45e2e0b35249\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.688818 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zntzt\" (UniqueName: \"kubernetes.io/projected/0f97d998-530c-4d9d-a030-ca1d9d2d4490-kube-api-access-zntzt\") pod \"cluster-storage-operator-7d87854d6-6wzws\" (UID: \"0f97d998-530c-4d9d-a030-ca1d9d2d4490\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-6wzws"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.688847 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs\") pod \"network-metrics-daemon-6t6sn\" (UID: \"398bcaca-1bea-4633-a78f-717e3d015ddd\") " pod="openshift-multus/network-metrics-daemon-6t6sn"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.688873 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-cnibin\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.688899 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs4jf\" (UniqueName: \"kubernetes.io/projected/b80027fd-7b39-477a-a337-ff9bb08e7eeb-kube-api-access-hs4jf\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.688926 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv6bc\" (UniqueName: \"kubernetes.io/projected/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-kube-api-access-pv6bc\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.688949 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1089ea24-add9-482e-9276-e6ded12052d7-config\") pod \"kube-apiserver-operator-8b68b9d9b-qv4cg\" (UID: \"1089ea24-add9-482e-9276-e6ded12052d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.688975 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-run-k8s-cni-cncf-io\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.689003 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f08c5930-44f0-48e4-80dd-2563f2733b2f-config\") pod \"openshift-apiserver-operator-d65958b8-mjs7x\" (UID: \"f08c5930-44f0-48e4-80dd-2563f2733b2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.689026 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khv2z\" (UniqueName: \"kubernetes.io/projected/a7747954-a222-4809-8656-818203b55ee8-kube-api-access-khv2z\") pod \"csi-snapshot-controller-operator-5f5d689c6b-2chdm\" (UID: \"a7747954-a222-4809-8656-818203b55ee8\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-2chdm"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.689052 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tqdb\" (UniqueName: \"kubernetes.io/projected/b0f5939c-48b1-4d6c-9712-9128a78d603b-kube-api-access-6tqdb\") pod \"marketplace-operator-89ccd998f-pr7gk\" (UID: \"b0f5939c-48b1-4d6c-9712-9128a78d603b\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.689078 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.689106 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/c2dbd8b3-0e02-4747-a166-80aa6a94b060-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-cg9pq\" (UID: \"c2dbd8b3-0e02-4747-a166-80aa6a94b060\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cg9pq"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.689392 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/661b8957-a890-4032-9e57-45e2e0b35249-serving-cert\") pod \"service-ca-operator-b865698dc-sxsxt\" (UID: \"661b8957-a890-4032-9e57-45e2e0b35249\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.689499 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1089ea24-add9-482e-9276-e6ded12052d7-config\") pod \"kube-apiserver-operator-8b68b9d9b-qv4cg\" (UID: \"1089ea24-add9-482e-9276-e6ded12052d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.689624 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/c2dbd8b3-0e02-4747-a166-80aa6a94b060-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-cg9pq\" (UID: \"c2dbd8b3-0e02-4747-a166-80aa6a94b060\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cg9pq"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.689896 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.690126 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.690254 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f08c5930-44f0-48e4-80dd-2563f2733b2f-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-mjs7x\" (UID: \"f08c5930-44f0-48e4-80dd-2563f2733b2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.690285 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl6d7\" (UniqueName: \"kubernetes.io/projected/ab54833d-e57b-479d-b171-68155f6566f1-kube-api-access-gl6d7\") pod \"dns-operator-9c5679d8f-z6kvm\" (UID: \"ab54833d-e57b-479d-b171-68155f6566f1\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-z6kvm"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.690310 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/c2dbd8b3-0e02-4747-a166-80aa6a94b060-operand-assets\") pod \"cluster-olm-operator-67dcd4998-cg9pq\" (UID: \"c2dbd8b3-0e02-4747-a166-80aa6a94b060\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cg9pq"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.690457 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ed2dbd1-aec4-4009-917a-933533912ab5-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-gx4w8\" (UID: \"9ed2dbd1-aec4-4009-917a-933533912ab5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.690498 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.690570 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-os-release\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.690582 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f08c5930-44f0-48e4-80dd-2563f2733b2f-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-mjs7x\" (UID: \"f08c5930-44f0-48e4-80dd-2563f2733b2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.690601 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-var-lib-kubelet\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.690629 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnd9c\" (UniqueName: \"kubernetes.io/projected/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-kube-api-access-jnd9c\") pod \"catalog-operator-68f85b4d6c-2trz4\" (UID: \"bdcdb23d-ef1f-45e2-b9ac-7abf707637b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.690679 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.690828 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ed2dbd1-aec4-4009-917a-933533912ab5-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-gx4w8\" (UID: \"9ed2dbd1-aec4-4009-917a-933533912ab5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.690876 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9702fc8c-4fe0-413b-b2d4-db23021d42b8-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.690899 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9702fc8c-4fe0-413b-b2d4-db23021d42b8-etcd-client\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.691107 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-multus-socket-dir-parent\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.691161 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/7241bf11-192e-47db-9d80-2324938ed34c-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-92c5d\" (UID: \"7241bf11-192e-47db-9d80-2324938ed34c\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.691192 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert\") pod \"olm-operator-5c9796789-8cldl\" (UID: \"87a3f546-e1c1-42a1-b80e-d45b6d5c0a04\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.691222 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06df1b1b-154e-46f9-aee0-79a137c6c928-config\") pod \"openshift-kube-scheduler-operator-dddff6458-ptqdh\" (UID: \"06df1b1b-154e-46f9-aee0-79a137c6c928\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-ptqdh"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.691238 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/c2dbd8b3-0e02-4747-a166-80aa6a94b060-operand-assets\") pod \"cluster-olm-operator-67dcd4998-cg9pq\" (UID: \"c2dbd8b3-0e02-4747-a166-80aa6a94b060\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cg9pq"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.691110 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9702fc8c-4fe0-413b-b2d4-db23021d42b8-etcd-client\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.691312 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9702fc8c-4fe0-413b-b2d4-db23021d42b8-etcd-ca\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.691340 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-pr7gk\" (UID: \"b0f5939c-48b1-4d6c-9712-9128a78d603b\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.691371 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5n89\" (UniqueName: \"kubernetes.io/projected/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f-kube-api-access-h5n89\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk\" (UID: \"d9ab6ec4-eec9-4d27-8b43-2aaf954f098f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.691400 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-run-netns\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.691428 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bf226d89-450d-4876-a113-345632b94ee9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-f6m2t\" (UID: \"bf226d89-450d-4876-a113-345632b94ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.691472 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/7241bf11-192e-47db-9d80-2324938ed34c-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-92c5d\" (UID: \"7241bf11-192e-47db-9d80-2324938ed34c\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.691523 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-multus-cni-dir\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.691549 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-etc-kubernetes\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.691576 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk\" (UID: \"d9ab6ec4-eec9-4d27-8b43-2aaf954f098f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.691601 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5bmd\" (UniqueName: \"kubernetes.io/projected/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-kube-api-access-c5bmd\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.691630 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-images\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.691660 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcxqj\" (UniqueName: \"kubernetes.io/projected/bf226d89-450d-4876-a113-345632b94ee9-kube-api-access-wcxqj\") pod \"ovnkube-control-plane-57f769d897-f6m2t\" (UID: \"bf226d89-450d-4876-a113-345632b94ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.691684 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.691727 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9702fc8c-4fe0-413b-b2d4-db23021d42b8-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.691782 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hq8f\" (UniqueName: \"kubernetes.io/projected/661b8957-a890-4032-9e57-45e2e0b35249-kube-api-access-8hq8f\") pod \"service-ca-operator-b865698dc-sxsxt\" (UID: \"661b8957-a890-4032-9e57-45e2e0b35249\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.691858 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fe245927-c937-4ec7-ab83-4900bade72cf-multus-daemon-config\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.691885 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-run-multus-certs\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.691919 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bf226d89-450d-4876-a113-345632b94ee9-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-f6m2t\" (UID: \"bf226d89-450d-4876-a113-345632b94ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.691942 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ed2dbd1-aec4-4009-917a-933533912ab5-config\") pod \"openshift-controller-manager-operator-8c94f4649-gx4w8\" (UID: \"9ed2dbd1-aec4-4009-917a-933533912ab5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.691967 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-var-lib-cni-multus\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.691991 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9702fc8c-4fe0-413b-b2d4-db23021d42b8-config\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz"
Mar 19 12:14:22.691999 master-0 kubenswrapper[31830]: I0319 12:14:22.692015 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fe245927-c937-4ec7-ab83-4900bade72cf-cni-binary-copy\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 12:14:22.694036 master-0 kubenswrapper[31830]: I0319 12:14:22.692224 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9702fc8c-4fe0-413b-b2d4-db23021d42b8-etcd-ca\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz"
Mar 19 12:14:22.694036 master-0 kubenswrapper[31830]: I0319 12:14:22.692238 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fe245927-c937-4ec7-ab83-4900bade72cf-multus-daemon-config\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 12:14:22.694036 master-0 kubenswrapper[31830]: I0319 12:14:22.692272 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-images\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw"
Mar 19 12:14:22.694036 master-0 kubenswrapper[31830]: I0319 12:14:22.692316 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aef8e03f-0363-4e13-b7ca-4fa871d77c62-serving-cert\") pod \"openshift-config-operator-95bf4f4d-nhvl4\" (UID: \"aef8e03f-0363-4e13-b7ca-4fa871d77c62\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4"
Mar 19 12:14:22.694036 master-0 kubenswrapper[31830]: I0319 12:14:22.692347 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-pr7gk\" (UID: \"b0f5939c-48b1-4d6c-9712-9128a78d603b\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk"
Mar 19 12:14:22.694036 master-0 kubenswrapper[31830]: I0319 12:14:22.692375 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhqhb\" (UniqueName: \"kubernetes.io/projected/398bcaca-1bea-4633-a78f-717e3d015ddd-kube-api-access-fhqhb\") pod \"network-metrics-daemon-6t6sn\" (UID: \"398bcaca-1bea-4633-a78f-717e3d015ddd\") " pod="openshift-multus/network-metrics-daemon-6t6sn"
Mar 19 12:14:22.694036 master-0 kubenswrapper[31830]: I0319 12:14:22.692442 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bf226d89-450d-4876-a113-345632b94ee9-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-f6m2t\" (UID: \"bf226d89-450d-4876-a113-345632b94ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t"
Mar 19 12:14:22.694036 master-0 kubenswrapper[31830]: I0319 12:14:22.692474 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-srv-cert\") pod \"olm-operator-5c9796789-8cldl\" (UID: \"87a3f546-e1c1-42a1-b80e-d45b6d5c0a04\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl"
Mar 19 12:14:22.694036 master-0 kubenswrapper[31830]: I0319 12:14:22.692638 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt"
Mar 19 12:14:22.694036 master-0 kubenswrapper[31830]: I0319 12:14:22.692671 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-6j2nj\" (UID: \"beb562de-402b-4d9f-b5ed-090b60847a95\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj"
Mar 19 12:14:22.694036 master-0 kubenswrapper[31830]: I0319 12:14:22.692696 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06df1b1b-154e-46f9-aee0-79a137c6c928-config\") pod \"openshift-kube-scheduler-operator-dddff6458-ptqdh\" (UID: \"06df1b1b-154e-46f9-aee0-79a137c6c928\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-ptqdh"
Mar 19 12:14:22.694036 master-0 kubenswrapper[31830]: I0319 12:14:22.692704 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/661b8957-a890-4032-9e57-45e2e0b35249-config\") pod \"service-ca-operator-b865698dc-sxsxt\" (UID: \"661b8957-a890-4032-9e57-45e2e0b35249\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt"
Mar 19 12:14:22.694036 master-0 kubenswrapper[31830]: I0319 12:14:22.692735 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6"
Mar 19 12:14:22.694036 master-0 kubenswrapper[31830]: I0319 12:14:22.692764 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bf226d89-450d-4876-a113-345632b94ee9-env-overrides\") pod \"ovnkube-control-plane-57f769d897-f6m2t\" (UID: \"bf226d89-450d-4876-a113-345632b94ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t"
Mar 19 12:14:22.694036 master-0 kubenswrapper[31830]: I0319 12:14:22.692906 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk\" (UID: \"d9ab6ec4-eec9-4d27-8b43-2aaf954f098f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk"
Mar 19 12:14:22.694036 master-0 kubenswrapper[31830]: I0319 12:14:22.692976 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bf226d89-450d-4876-a113-345632b94ee9-env-overrides\") pod \"ovnkube-control-plane-57f769d897-f6m2t\" (UID: \"bf226d89-450d-4876-a113-345632b94ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t"
Mar 19 12:14:22.694036 master-0 kubenswrapper[31830]: I0319 12:14:22.693199 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-pr7gk\" (UID: \"b0f5939c-48b1-4d6c-9712-9128a78d603b\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk"
Mar 19 12:14:22.694036 master-0 kubenswrapper[31830]: I0319 12:14:22.693523 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aef8e03f-0363-4e13-b7ca-4fa871d77c62-serving-cert\") pod \"openshift-config-operator-95bf4f4d-nhvl4\" (UID: \"aef8e03f-0363-4e13-b7ca-4fa871d77c62\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4"
Mar 19 12:14:22.694036 master-0 kubenswrapper[31830]: I0319 12:14:22.693761 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/63c12a89-1b49-4eba-8f5a-551b10d2246b-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt"
Mar 19 12:14:22.694036 master-0 kubenswrapper[31830]: I0319 12:14:22.694012 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/beb562de-402b-4d9f-b5ed-090b60847a95-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-6j2nj\" (UID: \"beb562de-402b-4d9f-b5ed-090b60847a95\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj"
Mar 19 12:14:22.694592 master-0 kubenswrapper[31830]: I0319 12:14:22.694228 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 19 12:14:22.694592 master-0 kubenswrapper[31830]: I0319 12:14:22.694253 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/661b8957-a890-4032-9e57-45e2e0b35249-config\") pod \"service-ca-operator-b865698dc-sxsxt\" (UID: \"661b8957-a890-4032-9e57-45e2e0b35249\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt"
Mar 19 12:14:22.694592 master-0 kubenswrapper[31830]: I0319 12:14:22.694488 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/19de6601-10d4-4112-a21f-0398d2b160d1-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6"
Mar 19 12:14:22.696268 master-0 kubenswrapper[31830]: I0319 12:14:22.694854 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b80027fd-7b39-477a-a337-ff9bb08e7eeb-trusted-ca\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx"
Mar 19 12:14:22.696268 master-0 kubenswrapper[31830]: I0319 12:14:22.694898 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fe245927-c937-4ec7-ab83-4900bade72cf-cni-binary-copy\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 12:14:22.696268 master-0 kubenswrapper[31830]: I0319 12:14:22.695050 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9702fc8c-4fe0-413b-b2d4-db23021d42b8-config\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz"
Mar 19 12:14:22.696268 master-0 kubenswrapper[31830]: I0319 12:14:22.695148 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bf226d89-450d-4876-a113-345632b94ee9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-f6m2t\" (UID: \"bf226d89-450d-4876-a113-345632b94ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t"
Mar 19 12:14:22.696268 master-0 kubenswrapper[31830]: I0319 12:14:22.695204 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ed2dbd1-aec4-4009-917a-933533912ab5-config\") pod \"openshift-controller-manager-operator-8c94f4649-gx4w8\" (UID: \"9ed2dbd1-aec4-4009-917a-933533912ab5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8"
Mar 19 12:14:22.696268 master-0 kubenswrapper[31830]: I0319 12:14:22.695367 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 19 12:14:22.696268 master-0 kubenswrapper[31830]: I0319 12:14:22.695420 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Mar 19 12:14:22.696268 master-0 kubenswrapper[31830]: I0319 12:14:22.695485 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 19 12:14:22.696268 master-0 kubenswrapper[31830]: I0319 12:14:22.695508 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Mar 19 12:14:22.696268 master-0 kubenswrapper[31830]: I0319 12:14:22.695572 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config"
Mar 19 12:14:22.696268 master-0 kubenswrapper[31830]: I0319 12:14:22.695702 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 19 12:14:22.696268 master-0 kubenswrapper[31830]: I0319 12:14:22.695381 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Mar 19 12:14:22.696268 master-0 kubenswrapper[31830]: I0319 12:14:22.695747 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 19 12:14:22.696268 master-0 kubenswrapper[31830]: I0319 12:14:22.696041 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6"
Mar 19 12:14:22.700054 master-0 kubenswrapper[31830]: I0319 12:14:22.700006 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/7044a7b3-4fac-40af-a31c-054a1a1db26b-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8"
Mar 19 12:14:22.700260 master-0 kubenswrapper[31830]: I0319 12:14:22.700225 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/398bcaca-1bea-4633-a78f-717e3d015ddd-metrics-certs\") pod \"network-metrics-daemon-6t6sn\" (UID: \"398bcaca-1bea-4633-a78f-717e3d015ddd\") " pod="openshift-multus/network-metrics-daemon-6t6sn"
Mar 19 12:14:22.700728 master-0 kubenswrapper[31830]: I0319 12:14:22.700698 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3017b5e-178e-49de-89d2-817a18398203-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq"
Mar 19 12:14:22.703948 master-0 kubenswrapper[31830]: I0319 12:14:22.703905 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0f5939c-48b1-4d6c-9712-9128a78d603b-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-pr7gk\" (UID: \"b0f5939c-48b1-4d6c-9712-9128a78d603b\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk"
Mar 19 12:14:22.707465 master-0 kubenswrapper[31830]: I0319 12:14:22.707431 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 19 12:14:22.711209 master-0 kubenswrapper[31830]: I0319 12:14:22.711091 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7044a7b3-4fac-40af-a31c-054a1a1db26b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8"
Mar 19 12:14:22.728710 master-0 kubenswrapper[31830]: I0319 12:14:22.728642 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 19 12:14:22.741485 master-0 kubenswrapper[31830]: I0319 12:14:22.741420 31830 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 19 12:14:22.749445 master-0 kubenswrapper[31830]: I0319 12:14:22.748958 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 19 12:14:22.768915 master-0 kubenswrapper[31830]: I0319 12:14:22.768829 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 19 12:14:22.788163 master-0 kubenswrapper[31830]: I0319 12:14:22.788065 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 19 12:14:22.794217 master-0 kubenswrapper[31830]: I0319 12:14:22.794162 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4800b72f-7e54-4069-b771-87fb459eeb78-hosts-file\") pod \"node-resolver-jqzxt\" (UID: \"4800b72f-7e54-4069-b771-87fb459eeb78\") " pod="openshift-dns/node-resolver-jqzxt"
Mar 19 12:14:22.794217 master-0 kubenswrapper[31830]: I0319 12:14:22.794219 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/6db3fcbe-0dbf-464f-944b-62427173c8d3-audit-log\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd"
Mar 19 12:14:22.794563 master-0 kubenswrapper[31830]: I0319 12:14:22.794273 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/919daf8d-763a-44bc-8916-86b425a27cbd-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-j2w8z\" (UID: \"919daf8d-763a-44bc-8916-86b425a27cbd\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z"
Mar 19 12:14:22.794563 master-0 kubenswrapper[31830]: I0319 12:14:22.794300 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-client-ca-bundle\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd"
Mar 19 12:14:22.794563 master-0 kubenswrapper[31830]: I0319 12:14:22.794346 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/979ba8cc-5a7b-4188-bf9e-c22d810888e9-trusted-ca-bundle\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r"
Mar 19 12:14:22.794563 master-0 kubenswrapper[31830]: I0319 12:14:22.794371 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c52bbbe7-bc16-432f-a471-bc561083a853-catalog-content\") pod \"certified-operators-tdnkp\" (UID: \"c52bbbe7-bc16-432f-a471-bc561083a853\") " pod="openshift-marketplace/certified-operators-tdnkp"
Mar 19 12:14:22.794563 master-0 kubenswrapper[31830]: I0319 12:14:22.794396 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-systemd\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br"
Mar 19 12:14:22.794563 master-0 kubenswrapper[31830]: I0319 12:14:22.794421 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/91112ce6-4f9d-44c1-a4e7-fea126554bcf-default-certificate\") pod \"router-default-7dcf5569b5-lkpgl\" (UID: \"91112ce6-4f9d-44c1-a4e7-fea126554bcf\") " pod="openshift-ingress/router-default-7dcf5569b5-lkpgl"
Mar 19 12:14:22.794563 master-0 kubenswrapper[31830]: I0319 12:14:22.794446 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-etc-openvswitch\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 12:14:22.794563 master-0 kubenswrapper[31830]: I0319 12:14:22.794479 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8"
Mar 19 12:14:22.794563 master-0 kubenswrapper[31830]: I0319 12:14:22.794504 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-kubernetes\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br"
Mar 19 12:14:22.794563 master-0 kubenswrapper[31830]: I0319 12:14:22.794539 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvzcn\" (UniqueName: \"kubernetes.io/projected/da9becfb-a504-4ef7-92ed-cd2db439d5db-kube-api-access-lvzcn\") pod \"route-controller-manager-fdb67f9cf-vkmd9\" (UID: \"da9becfb-a504-4ef7-92ed-cd2db439d5db\") " pod="openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9"
Mar 19 12:14:22.794563 master-0 kubenswrapper[31830]: I0319 12:14:22.794565 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-tls\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz"
Mar 19 12:14:22.795046 master-0 kubenswrapper[31830]: I0319 12:14:22.794591 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2b87f8c3-1898-46dd-bcac-e8f22f31e812-mcd-auth-proxy-config\") pod \"machine-config-daemon-ms2wn\" (UID: \"2b87f8c3-1898-46dd-bcac-e8f22f31e812\") " pod="openshift-machine-config-operator/machine-config-daemon-ms2wn"
Mar 19 12:14:22.795046 master-0 kubenswrapper[31830]: I0319 12:14:22.794626 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f236a5ab-b400-46fc-94ee-1fff476d6458-metrics-tls\") pod \"dns-default-zjdkm\" (UID: \"f236a5ab-b400-46fc-94ee-1fff476d6458\") " pod="openshift-dns/dns-default-zjdkm"
Mar 19 12:14:22.795046 master-0 kubenswrapper[31830]: I0319 12:14:22.794652 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/5238840f-3bef-43ad-ae68-ac187f073019-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-9mpxd\" (UID: \"5238840f-3bef-43ad-ae68-ac187f073019\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd"
Mar 19 12:14:22.795046 master-0 kubenswrapper[31830]: I0319 12:14:22.794681 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-secret-telemeter-client\") pod \"telemeter-client-6975d7769d-nvxfv\" (UID: \"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58\") " pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv"
Mar 19 12:14:22.795046 master-0 kubenswrapper[31830]: I0319 12:14:22.794708 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6975d7769d-nvxfv\" (UID: \"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58\") " pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv"
Mar 19 12:14:22.795046 master-0 kubenswrapper[31830]: I0319 12:14:22.794737 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ee3529ac-6135-438b-9334-40c63c1fbd3d-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-84gh4\" (UID: \"ee3529ac-6135-438b-9334-40c63c1fbd3d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4"
Mar 19 12:14:22.795046 master-0 kubenswrapper[31830]: I0319 12:14:22.794760 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fgj5\" (UniqueName: \"kubernetes.io/projected/ad327a59-7879-4215-bb95-3f2be64cb97f-kube-api-access-9fgj5\") pod \"cloud-credential-operator-744f9dbf77-nr2k4\" (UID: \"ad327a59-7879-4215-bb95-3f2be64cb97f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-nr2k4"
Mar 19 12:14:22.795046 master-0 kubenswrapper[31830]: I0319 12:14:22.794783 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-cnibin\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 12:14:22.795046 master-0 kubenswrapper[31830]: I0319 12:14:22.794818 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbzj2\" (UniqueName: \"kubernetes.io/projected/be4349fa-5c67-4135-80a7-b8a694553662-kube-api-access-jbzj2\") pod \"packageserver-77d68bd5f8-w9hmb\" (UID: \"be4349fa-5c67-4135-80a7-b8a694553662\") " pod="openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb"
Mar 19 12:14:22.795046 master-0 kubenswrapper[31830]: I0319 12:14:22.794842 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fd40498c-f50a-408c-9a50-5d85ae666124-machine-approver-tls\") pod \"machine-approver-5c6485487f-qv29l\" (UID: \"fd40498c-f50a-408c-9a50-5d85ae666124\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-qv29l"
Mar 19 12:14:22.795046 master-0 kubenswrapper[31830]: I0319 12:14:22.794864 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/979ba8cc-5a7b-4188-bf9e-c22d810888e9-etcd-client\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r"
Mar 19 12:14:22.795046 master-0 kubenswrapper[31830]: I0319 12:14:22.794898 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f236a5ab-b400-46fc-94ee-1fff476d6458-config-volume\") pod \"dns-default-zjdkm\" (UID: \"f236a5ab-b400-46fc-94ee-1fff476d6458\") " pod="openshift-dns/dns-default-zjdkm"
Mar 19 12:14:22.795046 master-0 kubenswrapper[31830]: I0319 12:14:22.794921 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/de39c80c-acfa-4bc1-a844-95b170169b44-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-k464h\" (UID: \"de39c80c-acfa-4bc1-a844-95b170169b44\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h"
Mar 19 12:14:22.795046 master-0 kubenswrapper[31830]: I0319 12:14:22.794944 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-run-ovn\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 12:14:22.795046 master-0 kubenswrapper[31830]: I0319 12:14:22.794969 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc94p\" (UniqueName: \"kubernetes.io/projected/667757ee-2670-4019-ad93-156521d3c2e7-kube-api-access-rc94p\") pod \"cluster-samples-operator-85f7577d78-cx8l9\" (UID: \"667757ee-2670-4019-ad93-156521d3c2e7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-cx8l9"
Mar 19 12:14:22.795046 master-0 kubenswrapper[31830]: I0319 12:14:22.795003 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fd40498c-f50a-408c-9a50-5d85ae666124-auth-proxy-config\") pod \"machine-approver-5c6485487f-qv29l\" (UID: \"fd40498c-f50a-408c-9a50-5d85ae666124\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-qv29l"
Mar 19 12:14:22.795046 master-0 kubenswrapper[31830]: I0319 12:14:22.795024 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"kube-api-access-4lkzv\" (UniqueName: \"kubernetes.io/projected/4800b72f-7e54-4069-b771-87fb459eeb78-kube-api-access-4lkzv\") pod \"node-resolver-jqzxt\" (UID: \"4800b72f-7e54-4069-b771-87fb459eeb78\") " pod="openshift-dns/node-resolver-jqzxt" Mar 19 12:14:22.795046 master-0 kubenswrapper[31830]: I0319 12:14:22.795046 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-run-netns\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.795046 master-0 kubenswrapper[31830]: I0319 12:14:22.795068 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-var-lib-openvswitch\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.795832 master-0 kubenswrapper[31830]: I0319 12:14:22.795093 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/979ba8cc-5a7b-4188-bf9e-c22d810888e9-audit-dir\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 12:14:22.795832 master-0 kubenswrapper[31830]: I0319 12:14:22.795119 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdpj4\" (UniqueName: \"kubernetes.io/projected/06f67c28-34fd-4356-92f0-edd0986ad34e-kube-api-access-bdpj4\") pod \"iptables-alerter-276t5\" (UID: \"06f67c28-34fd-4356-92f0-edd0986ad34e\") " pod="openshift-network-operator/iptables-alerter-276t5" Mar 19 12:14:22.795832 master-0 kubenswrapper[31830]: I0319 12:14:22.795143 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxw6t\" (UniqueName: \"kubernetes.io/projected/7b2ecb08-a0f9-4127-967c-7087dea4c0f6-kube-api-access-dxw6t\") pod \"machine-api-operator-6fbb6cf6f9-75w5c\" (UID: \"7b2ecb08-a0f9-4127-967c-7087dea4c0f6\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-75w5c" Mar 19 12:14:22.795832 master-0 kubenswrapper[31830]: I0319 12:14:22.795205 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/91112ce6-4f9d-44c1-a4e7-fea126554bcf-stats-auth\") pod \"router-default-7dcf5569b5-lkpgl\" (UID: \"91112ce6-4f9d-44c1-a4e7-fea126554bcf\") " pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 12:14:22.795832 master-0 kubenswrapper[31830]: I0319 12:14:22.795233 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-sysctl-conf\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.795832 master-0 kubenswrapper[31830]: I0319 12:14:22.795258 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8414b6b0-ee16-47a5-982b-ee58b136cfcf-webhook-cert\") pod \"network-node-identity-wd4nx\" (UID: \"8414b6b0-ee16-47a5-982b-ee58b136cfcf\") " 
pod="openshift-network-node-identity/network-node-identity-wd4nx" Mar 19 12:14:22.795832 master-0 kubenswrapper[31830]: I0319 12:14:22.795281 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-864rg\" (UniqueName: \"kubernetes.io/projected/8414b6b0-ee16-47a5-982b-ee58b136cfcf-kube-api-access-864rg\") pod \"network-node-identity-wd4nx\" (UID: \"8414b6b0-ee16-47a5-982b-ee58b136cfcf\") " pod="openshift-network-node-identity/network-node-identity-wd4nx" Mar 19 12:14:22.795832 master-0 kubenswrapper[31830]: I0319 12:14:22.795305 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-run-ovn-kubernetes\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.795832 master-0 kubenswrapper[31830]: I0319 12:14:22.795332 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da9becfb-a504-4ef7-92ed-cd2db439d5db-config\") pod \"route-controller-manager-fdb67f9cf-vkmd9\" (UID: \"da9becfb-a504-4ef7-92ed-cd2db439d5db\") " pod="openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9" Mar 19 12:14:22.795832 master-0 kubenswrapper[31830]: I0319 12:14:22.795355 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-cni-bin\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.795832 master-0 kubenswrapper[31830]: I0319 12:14:22.795381 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9d2db220-4d5b-4819-a910-b186e1e9fb3e-ovnkube-script-lib\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.795832 master-0 kubenswrapper[31830]: I0319 12:14:22.795405 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-telemeter-client-tls\") pod \"telemeter-client-6975d7769d-nvxfv\" (UID: \"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58\") " pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv" Mar 19 12:14:22.795832 master-0 kubenswrapper[31830]: I0319 12:14:22.795431 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hrkb\" (UniqueName: \"kubernetes.io/projected/91112ce6-4f9d-44c1-a4e7-fea126554bcf-kube-api-access-8hrkb\") pod \"router-default-7dcf5569b5-lkpgl\" (UID: \"91112ce6-4f9d-44c1-a4e7-fea126554bcf\") " pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 12:14:22.795832 master-0 kubenswrapper[31830]: I0319 12:14:22.795453 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnp9l\" (UniqueName: \"kubernetes.io/projected/0ed7eded-1e67-49ad-9777-c2ed1e006ce3-kube-api-access-jnp9l\") pod \"redhat-marketplace-cjgpg\" (UID: \"0ed7eded-1e67-49ad-9777-c2ed1e006ce3\") " pod="openshift-marketplace/redhat-marketplace-cjgpg" Mar 19 12:14:22.795832 master-0 kubenswrapper[31830]: I0319 12:14:22.795477 
31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wshb2\" (UniqueName: \"kubernetes.io/projected/9d2db220-4d5b-4819-a910-b186e1e9fb3e-kube-api-access-wshb2\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.795832 master-0 kubenswrapper[31830]: I0319 12:14:22.795503 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/86884445-e29b-492b-8810-b63b938b9170-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-qsrjj\" (UID: \"86884445-e29b-492b-8810-b63b938b9170\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-qsrjj" Mar 19 12:14:22.795832 master-0 kubenswrapper[31830]: I0319 12:14:22.795527 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kcbw\" (UniqueName: \"kubernetes.io/projected/86884445-e29b-492b-8810-b63b938b9170-kube-api-access-5kcbw\") pod \"prometheus-operator-6c8df6d4b-qsrjj\" (UID: \"86884445-e29b-492b-8810-b63b938b9170\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-qsrjj" Mar 19 12:14:22.795832 master-0 kubenswrapper[31830]: I0319 12:14:22.795550 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad327a59-7879-4215-bb95-3f2be64cb97f-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-nr2k4\" (UID: \"ad327a59-7879-4215-bb95-3f2be64cb97f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-nr2k4" Mar 19 12:14:22.795832 master-0 kubenswrapper[31830]: I0319 12:14:22.795579 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/8414b6b0-ee16-47a5-982b-ee58b136cfcf-ovnkube-identity-cm\") pod \"network-node-identity-wd4nx\" (UID: \"8414b6b0-ee16-47a5-982b-ee58b136cfcf\") " pod="openshift-network-node-identity/network-node-identity-wd4nx" Mar 19 12:14:22.795832 master-0 kubenswrapper[31830]: I0319 12:14:22.795601 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/5238840f-3bef-43ad-ae68-ac187f073019-cache\") pod \"operator-controller-controller-manager-57777556ff-9mpxd\" (UID: \"5238840f-3bef-43ad-ae68-ac187f073019\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 12:14:22.795832 master-0 kubenswrapper[31830]: I0319 12:14:22.795625 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-audit\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 12:14:22.795832 master-0 kubenswrapper[31830]: I0319 12:14:22.795647 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a6b082a-649b-43f6-8e24-cf222873fe39-serving-cert\") pod \"controller-manager-7cdddc6cb-q222c\" (UID: \"3a6b082a-649b-43f6-8e24-cf222873fe39\") " pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 12:14:22.795832 master-0 kubenswrapper[31830]: I0319 12:14:22.795667 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-modprobe-d\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.795832 master-0 kubenswrapper[31830]: I0319 12:14:22.795699 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/36e5fec9-7fb5-4460-8bb4-4b9e36fae978-cert\") pod \"ingress-canary-w8jqs\" (UID: \"36e5fec9-7fb5-4460-8bb4-4b9e36fae978\") " pod="openshift-ingress-canary/ingress-canary-w8jqs" Mar 19 12:14:22.795832 master-0 kubenswrapper[31830]: I0319 12:14:22.795720 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxdts\" (UniqueName: \"kubernetes.io/projected/5238840f-3bef-43ad-ae68-ac187f073019-kube-api-access-vxdts\") pod \"operator-controller-controller-manager-57777556ff-9mpxd\" (UID: \"5238840f-3bef-43ad-ae68-ac187f073019\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 12:14:22.795832 master-0 kubenswrapper[31830]: I0319 12:14:22.795743 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/06f67c28-34fd-4356-92f0-edd0986ad34e-iptables-alerter-script\") pod \"iptables-alerter-276t5\" (UID: \"06f67c28-34fd-4356-92f0-edd0986ad34e\") " pod="openshift-network-operator/iptables-alerter-276t5" Mar 19 12:14:22.795832 master-0 kubenswrapper[31830]: I0319 12:14:22.795761 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-var-lib-kubelet\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.795832 master-0 kubenswrapper[31830]: I0319 12:14:22.795777 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-log-socket\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.795832 master-0 kubenswrapper[31830]: I0319 12:14:22.795811 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6975d7769d-nvxfv\" (UID: \"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58\") " pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv" Mar 19 12:14:22.795832 master-0 kubenswrapper[31830]: I0319 12:14:22.795832 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-run-multus-certs\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 12:14:22.795832 master-0 kubenswrapper[31830]: I0319 12:14:22.795849 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.795832 master-0 kubenswrapper[31830]: I0319 12:14:22.795872 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4264e82c-387f-4aa6-9ef6-b7beb61e098c-serving-cert\") pod \"insights-operator-68bf6ff9d6-djdmh\" (UID: \"4264e82c-387f-4aa6-9ef6-b7beb61e098c\") " pod="openshift-insights/insights-operator-68bf6ff9d6-djdmh" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.795893 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd40498c-f50a-408c-9a50-5d85ae666124-config\") pod \"machine-approver-5c6485487f-qv29l\" (UID: \"fd40498c-f50a-408c-9a50-5d85ae666124\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-qv29l" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.795914 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8n22\" (UniqueName: \"kubernetes.io/projected/1c2a33ba-76d0-4b81-a41d-9da16fd46209-kube-api-access-k8n22\") pod \"multus-admission-controller-58c9f8fc64-dqgd9\" (UID: \"1c2a33ba-76d0-4b81-a41d-9da16fd46209\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-dqgd9" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.795933 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-secret-metrics-client-certs\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.795949 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a6b082a-649b-43f6-8e24-cf222873fe39-client-ca\") pod \"controller-manager-7cdddc6cb-q222c\" (UID: \"3a6b082a-649b-43f6-8e24-cf222873fe39\") " pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.795970 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3a6b082a-649b-43f6-8e24-cf222873fe39-proxy-ca-bundles\") pod \"controller-manager-7cdddc6cb-q222c\" (UID: \"3a6b082a-649b-43f6-8e24-cf222873fe39\") " pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.795994 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7383e647-63b0-452d-a39b-02ad27a9b053-catalog-content\") pod \"community-operators-s22fd\" (UID: \"7383e647-63b0-452d-a39b-02ad27a9b053\") " pod="openshift-marketplace/community-operators-s22fd" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796015 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-var-lib-cni-multus\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 12:14:22.797202 
master-0 kubenswrapper[31830]: I0319 12:14:22.796048 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hk8l\" (UniqueName: \"kubernetes.io/projected/bb1000ab-4419-43ce-b1b7-8f43413e017f-kube-api-access-6hk8l\") pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796066 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-secret-metrics-server-tls\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796083 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6863b35c-44ac-4333-97b5-e8e38b440a20-signing-cabundle\") pod \"service-ca-79bc6b8d76-5rbp5\" (UID: \"6863b35c-44ac-4333-97b5-e8e38b440a20\") " pod="openshift-service-ca/service-ca-79bc6b8d76-5rbp5" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796099 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-os-release\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796115 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/de39c80c-acfa-4bc1-a844-95b170169b44-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-k464h\" (UID: \"de39c80c-acfa-4bc1-a844-95b170169b44\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796131 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-cni-netd\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796149 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srbt4\" (UniqueName: \"kubernetes.io/projected/3a6b082a-649b-43f6-8e24-cf222873fe39-kube-api-access-srbt4\") pod \"controller-manager-7cdddc6cb-q222c\" (UID: \"3a6b082a-649b-43f6-8e24-cf222873fe39\") " pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796167 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3661faaa-2c9d-4fcd-a41f-71aa71a2e464-serving-cert\") pod \"cluster-version-operator-7d58488df-czxxt\" (UID: \"3661faaa-2c9d-4fcd-a41f-71aa71a2e464\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-czxxt" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 
12:14:22.796186 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ztf7\" (UniqueName: \"kubernetes.io/projected/c52bbbe7-bc16-432f-a471-bc561083a853-kube-api-access-4ztf7\") pod \"certified-operators-tdnkp\" (UID: \"c52bbbe7-bc16-432f-a471-bc561083a853\") " pod="openshift-marketplace/certified-operators-tdnkp" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796203 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/ad327a59-7879-4215-bb95-3f2be64cb97f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-nr2k4\" (UID: \"ad327a59-7879-4215-bb95-3f2be64cb97f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-nr2k4" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796230 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/979ba8cc-5a7b-4188-bf9e-c22d810888e9-serving-cert\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796246 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4264e82c-387f-4aa6-9ef6-b7beb61e098c-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-djdmh\" (UID: \"4264e82c-387f-4aa6-9ef6-b7beb61e098c\") " pod="openshift-insights/insights-operator-68bf6ff9d6-djdmh" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796262 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8414b6b0-ee16-47a5-982b-ee58b136cfcf-env-overrides\") pod \"network-node-identity-wd4nx\" (UID: \"8414b6b0-ee16-47a5-982b-ee58b136cfcf\") " pod="openshift-network-node-identity/network-node-identity-wd4nx" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796278 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2b87f8c3-1898-46dd-bcac-e8f22f31e812-proxy-tls\") pod \"machine-config-daemon-ms2wn\" (UID: \"2b87f8c3-1898-46dd-bcac-e8f22f31e812\") " pod="openshift-machine-config-operator/machine-config-daemon-ms2wn" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796310 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3661faaa-2c9d-4fcd-a41f-71aa71a2e464-service-ca\") pod \"cluster-version-operator-7d58488df-czxxt\" (UID: \"3661faaa-2c9d-4fcd-a41f-71aa71a2e464\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-czxxt" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796327 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9k5t\" (UniqueName: \"kubernetes.io/projected/0e25d4ed-4ad0-4706-ad25-7822c9a1d07e-kube-api-access-r9k5t\") pod \"machine-config-server-g7mqg\" (UID: \"0e25d4ed-4ad0-4706-ad25-7822c9a1d07e\") " pod="openshift-machine-config-operator/machine-config-server-g7mqg" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796343 31830 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-z9hck\" (UniqueName: \"kubernetes.io/projected/36e5fec9-7fb5-4460-8bb4-4b9e36fae978-kube-api-access-z9hck\") pod \"ingress-canary-w8jqs\" (UID: \"36e5fec9-7fb5-4460-8bb4-4b9e36fae978\") " pod="openshift-ingress-canary/ingress-canary-w8jqs" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796362 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c52bbbe7-bc16-432f-a471-bc561083a853-utilities\") pod \"certified-operators-tdnkp\" (UID: \"c52bbbe7-bc16-432f-a471-bc561083a853\") " pod="openshift-marketplace/certified-operators-tdnkp" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796381 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjfpq\" (UniqueName: \"kubernetes.io/projected/311b8bab-6cee-406d-8e0e-5b18a743d5fa-kube-api-access-hjfpq\") pod \"machine-config-controller-b4f87c5b9-rdpvm\" (UID: \"311b8bab-6cee-406d-8e0e-5b18a743d5fa\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rdpvm" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796399 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7zpw\" (UniqueName: \"kubernetes.io/projected/44469a78-9300-4260-89e9-ea939de1357b-kube-api-access-t7zpw\") pod \"control-plane-machine-set-operator-6f97756bc8-tql86\" (UID: \"44469a78-9300-4260-89e9-ea939de1357b\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tql86" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796432 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq9p4\" (UniqueName: \"kubernetes.io/projected/a9d191d1-631d-4091-af8b-382283c18a5a-kube-api-access-cq9p4\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796461 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-kubelet\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796482 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89890698-dd48-486b-bd64-dc909aecd9e8-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"89890698-dd48-486b-bd64-dc909aecd9e8\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796507 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xz8h\" (UniqueName: \"kubernetes.io/projected/7383e647-63b0-452d-a39b-02ad27a9b053-kube-api-access-2xz8h\") pod \"community-operators-s22fd\" (UID: \"7383e647-63b0-452d-a39b-02ad27a9b053\") " pod="openshift-marketplace/community-operators-s22fd" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796529 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-metrics-client-ca\") pod 
\"telemeter-client-6975d7769d-nvxfv\" (UID: \"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58\") " pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796549 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-image-import-ca\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796565 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13503fef-09b2-4dbe-9537-a5b361e7b591-audit-dir\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796584 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/bb1000ab-4419-43ce-b1b7-8f43413e017f-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796602 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/bb1000ab-4419-43ce-b1b7-8f43413e017f-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796647 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/6db3fcbe-0dbf-464f-944b-62427173c8d3-metrics-server-audit-profiles\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796668 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-multus-conf-dir\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796684 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/de39c80c-acfa-4bc1-a844-95b170169b44-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-k464h\" (UID: \"de39c80c-acfa-4bc1-a844-95b170169b44\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796708 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a6b082a-649b-43f6-8e24-cf222873fe39-config\") pod \"controller-manager-7cdddc6cb-q222c\" (UID: \"3a6b082a-649b-43f6-8e24-cf222873fe39\") " pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" 
Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796730 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e559e487-18b0-4622-92fa-d06e7397b312-tmp\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796747 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28ljd\" (UniqueName: \"kubernetes.io/projected/979ba8cc-5a7b-4188-bf9e-c22d810888e9-kube-api-access-28ljd\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796763 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cf6b6560-1731-4fb1-b3c2-8257002842d6-cert\") pod \"cluster-autoscaler-operator-866dc4744-fblgs\" (UID: \"cf6b6560-1731-4fb1-b3c2-8257002842d6\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-fblgs" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796781 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4p7s\" (UniqueName: \"kubernetes.io/projected/e559e487-18b0-4622-92fa-d06e7397b312-kube-api-access-c4p7s\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796843 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/ee3529ac-6135-438b-9334-40c63c1fbd3d-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-84gh4\" (UID: \"ee3529ac-6135-438b-9334-40c63c1fbd3d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796864 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee3529ac-6135-438b-9334-40c63c1fbd3d-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-84gh4\" (UID: \"ee3529ac-6135-438b-9334-40c63c1fbd3d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796881 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796904 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b2ecb08-a0f9-4127-967c-7087dea4c0f6-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-75w5c\" (UID: \"7b2ecb08-a0f9-4127-967c-7087dea4c0f6\") " 
pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-75w5c" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796925 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/311b8bab-6cee-406d-8e0e-5b18a743d5fa-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-rdpvm\" (UID: \"311b8bab-6cee-406d-8e0e-5b18a743d5fa\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rdpvm" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796947 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-lib-modules\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796978 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da9becfb-a504-4ef7-92ed-cd2db439d5db-client-ca\") pod \"route-controller-manager-fdb67f9cf-vkmd9\" (UID: \"da9becfb-a504-4ef7-92ed-cd2db439d5db\") " pod="openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.796998 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-host\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.797024 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/979ba8cc-5a7b-4188-bf9e-c22d810888e9-audit-policies\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.797047 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6863b35c-44ac-4333-97b5-e8e38b440a20-signing-key\") pod \"service-ca-79bc6b8d76-5rbp5\" (UID: \"6863b35c-44ac-4333-97b5-e8e38b440a20\") " pod="openshift-service-ca/service-ca-79bc6b8d76-5rbp5" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.797073 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-hostroot\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.797107 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lllml\" (UniqueName: \"kubernetes.io/projected/6db3fcbe-0dbf-464f-944b-62427173c8d3-kube-api-access-lllml\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.797133 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-trusted-ca-bundle\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.797160 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7b2ecb08-a0f9-4127-967c-7087dea4c0f6-images\") pod \"machine-api-operator-6fbb6cf6f9-75w5c\" (UID: \"7b2ecb08-a0f9-4127-967c-7087dea4c0f6\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-75w5c" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.797186 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/5238840f-3bef-43ad-ae68-ac187f073019-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-9mpxd\" (UID: \"5238840f-3bef-43ad-ae68-ac187f073019\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.797213 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8hpg\" (UniqueName: \"kubernetes.io/projected/ee3529ac-6135-438b-9334-40c63c1fbd3d-kube-api-access-c8hpg\") pod \"cluster-cloud-controller-manager-operator-7dff898856-84gh4\" (UID: \"ee3529ac-6135-438b-9334-40c63c1fbd3d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.797235 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/e559e487-18b0-4622-92fa-d06e7397b312-etc-tuned\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.797260 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-system-cni-dir\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 12:14:22.797202 master-0 kubenswrapper[31830]: I0319 12:14:22.797283 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g6zz\" (UniqueName: \"kubernetes.io/projected/616dbb32-6b65-4e44-a217-6b1be2844cc9-kube-api-access-7g6zz\") pod \"network-check-target-v66z4\" (UID: \"616dbb32-6b65-4e44-a217-6b1be2844cc9\") " pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.797311 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4264e82c-387f-4aa6-9ef6-b7beb61e098c-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-djdmh\" (UID: \"4264e82c-387f-4aa6-9ef6-b7beb61e098c\") " pod="openshift-insights/insights-operator-68bf6ff9d6-djdmh" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.797340 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wfsr\" 
(UniqueName: \"kubernetes.io/projected/4264e82c-387f-4aa6-9ef6-b7beb61e098c-kube-api-access-8wfsr\") pod \"insights-operator-68bf6ff9d6-djdmh\" (UID: \"4264e82c-387f-4aa6-9ef6-b7beb61e098c\") " pod="openshift-insights/insights-operator-68bf6ff9d6-djdmh" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.797365 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bb1000ab-4419-43ce-b1b7-8f43413e017f-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.797391 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-textfile\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.797413 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-system-cni-dir\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.797440 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91112ce6-4f9d-44c1-a4e7-fea126554bcf-service-ca-bundle\") pod \"router-default-7dcf5569b5-lkpgl\" (UID: \"91112ce6-4f9d-44c1-a4e7-fea126554bcf\") " pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.797464 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89890698-dd48-486b-bd64-dc909aecd9e8-var-lock\") pod \"installer-3-master-0\" (UID: \"89890698-dd48-486b-bd64-dc909aecd9e8\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.797484 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3661faaa-2c9d-4fcd-a41f-71aa71a2e464-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-czxxt\" (UID: \"3661faaa-2c9d-4fcd-a41f-71aa71a2e464\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-czxxt" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.797512 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-node-log\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.797534 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a9d191d1-631d-4091-af8b-382283c18a5a-sys\") pod \"node-exporter-lpndz\" (UID: 
\"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.797570 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-sys\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.797603 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-run-k8s-cni-cncf-io\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.797629 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/919daf8d-763a-44bc-8916-86b425a27cbd-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-j2w8z\" (UID: \"919daf8d-763a-44bc-8916-86b425a27cbd\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.797656 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/44469a78-9300-4260-89e9-ea939de1357b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-tql86\" (UID: \"44469a78-9300-4260-89e9-ea939de1357b\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tql86" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.797691 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a9d191d1-631d-4091-af8b-382283c18a5a-metrics-client-ca\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.797723 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/be4349fa-5c67-4135-80a7-b8a694553662-webhook-cert\") pod \"packageserver-77d68bd5f8-w9hmb\" (UID: \"be4349fa-5c67-4135-80a7-b8a694553662\") " pod="openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.797747 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-os-release\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.797776 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-var-lib-kubelet\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.797820 31830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/882fd952-1914-47be-96bf-cac6341ca877-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-z8xf6\" (UID: \"882fd952-1914-47be-96bf-cac6341ca877\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-z8xf6" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.797847 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-multus-socket-dir-parent\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.797870 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-systemd-units\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.797899 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-run-netns\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.797923 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-run-systemd\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.797948 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/919daf8d-763a-44bc-8916-86b425a27cbd-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-j2w8z\" (UID: \"919daf8d-763a-44bc-8916-86b425a27cbd\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.797977 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-config\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798005 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddl8k\" (UniqueName: \"kubernetes.io/projected/6863b35c-44ac-4333-97b5-e8e38b440a20-kube-api-access-ddl8k\") pod \"service-ca-79bc6b8d76-5rbp5\" (UID: \"6863b35c-44ac-4333-97b5-e8e38b440a20\") " pod="openshift-service-ca/service-ca-79bc6b8d76-5rbp5" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798029 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2b87f8c3-1898-46dd-bcac-e8f22f31e812-rootfs\") pod \"machine-config-daemon-ms2wn\" (UID: 
\"2b87f8c3-1898-46dd-bcac-e8f22f31e812\") " pod="openshift-machine-config-operator/machine-config-daemon-ms2wn" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798054 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps4k8\" (UniqueName: \"kubernetes.io/projected/f236a5ab-b400-46fc-94ee-1fff476d6458-kube-api-access-ps4k8\") pod \"dns-default-zjdkm\" (UID: \"f236a5ab-b400-46fc-94ee-1fff476d6458\") " pod="openshift-dns/dns-default-zjdkm" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798078 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0e25d4ed-4ad0-4706-ad25-7822c9a1d07e-node-bootstrap-token\") pod \"machine-config-server-g7mqg\" (UID: \"0e25d4ed-4ad0-4706-ad25-7822c9a1d07e\") " pod="openshift-machine-config-operator/machine-config-server-g7mqg" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798104 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9d2db220-4d5b-4819-a910-b186e1e9fb3e-env-overrides\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798125 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-multus-cni-dir\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798148 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-etc-kubernetes\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798170 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-slash\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798199 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm9zf\" (UniqueName: \"kubernetes.io/projected/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-kube-api-access-vm9zf\") pod \"telemeter-client-6975d7769d-nvxfv\" (UID: \"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58\") " pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798221 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/06f67c28-34fd-4356-92f0-edd0986ad34e-host-slash\") pod \"iptables-alerter-276t5\" (UID: \"06f67c28-34fd-4356-92f0-edd0986ad34e\") " pod="openshift-network-operator/iptables-alerter-276t5" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798246 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/0ed7eded-1e67-49ad-9777-c2ed1e006ce3-utilities\") pod \"redhat-marketplace-cjgpg\" (UID: \"0ed7eded-1e67-49ad-9777-c2ed1e006ce3\") " pod="openshift-marketplace/redhat-marketplace-cjgpg" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798271 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/bb1000ab-4419-43ce-b1b7-8f43413e017f-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798300 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86r6z\" (UniqueName: \"kubernetes.io/projected/d975e831-7348-41b9-9622-f4a503674c38-kube-api-access-86r6z\") pod \"migrator-8487694857-99fgs\" (UID: \"d975e831-7348-41b9-9622-f4a503674c38\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-99fgs" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798333 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/5238840f-3bef-43ad-ae68-ac187f073019-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-9mpxd\" (UID: \"5238840f-3bef-43ad-ae68-ac187f073019\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798360 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/13503fef-09b2-4dbe-9537-a5b361e7b591-encryption-config\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798384 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-wtmp\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798420 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x2v6\" (UniqueName: \"kubernetes.io/projected/de39c80c-acfa-4bc1-a844-95b170169b44-kube-api-access-6x2v6\") pod \"openshift-state-metrics-5dc6c74576-k464h\" (UID: \"de39c80c-acfa-4bc1-a844-95b170169b44\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798445 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9d2db220-4d5b-4819-a910-b186e1e9fb3e-ovnkube-config\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798470 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/919daf8d-763a-44bc-8916-86b425a27cbd-cache\") 
pod \"catalogd-controller-manager-6864dc98f7-j2w8z\" (UID: \"919daf8d-763a-44bc-8916-86b425a27cbd\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798497 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bb1000ab-4419-43ce-b1b7-8f43413e017f-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798524 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-serving-certs-ca-bundle\") pod \"telemeter-client-6975d7769d-nvxfv\" (UID: \"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58\") " pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798548 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cf6b6560-1731-4fb1-b3c2-8257002842d6-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-fblgs\" (UID: \"cf6b6560-1731-4fb1-b3c2-8257002842d6\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-fblgs" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798573 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/13503fef-09b2-4dbe-9537-a5b361e7b591-node-pullsecrets\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798597 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/919daf8d-763a-44bc-8916-86b425a27cbd-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-j2w8z\" (UID: \"919daf8d-763a-44bc-8916-86b425a27cbd\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798625 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8brwr\" (UniqueName: \"kubernetes.io/projected/919daf8d-763a-44bc-8916-86b425a27cbd-kube-api-access-8brwr\") pod \"catalogd-controller-manager-6864dc98f7-j2w8z\" (UID: \"919daf8d-763a-44bc-8916-86b425a27cbd\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798652 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/91112ce6-4f9d-44c1-a4e7-fea126554bcf-metrics-certs\") pod \"router-default-7dcf5569b5-lkpgl\" (UID: \"91112ce6-4f9d-44c1-a4e7-fea126554bcf\") " pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798676 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-federate-client-tls\") pod \"telemeter-client-6975d7769d-nvxfv\" 
(UID: \"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58\") " pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798703 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-cnibin\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798726 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/284768b8-9d70-4cf7-bace-8adc6b587186-host-etc-kube\") pod \"network-operator-7bd846bfc4-nb8bk\" (UID: \"284768b8-9d70-4cf7-bace-8adc6b587186\") " pod="openshift-network-operator/network-operator-7bd846bfc4-nb8bk" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798749 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-run-openvswitch\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798775 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9d2db220-4d5b-4819-a910-b186e1e9fb3e-ovn-node-metrics-cert\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798816 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/979ba8cc-5a7b-4188-bf9e-c22d810888e9-etcd-serving-ca\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798845 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgdlc\" (UniqueName: \"kubernetes.io/projected/13503fef-09b2-4dbe-9537-a5b361e7b591-kube-api-access-mgdlc\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798884 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-sysctl-d\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798906 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-run\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798932 31830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/be4349fa-5c67-4135-80a7-b8a694553662-tmpfs\") pod \"packageserver-77d68bd5f8-w9hmb\" (UID: \"be4349fa-5c67-4135-80a7-b8a694553662\") " pod="openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798954 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1c2a33ba-76d0-4b81-a41d-9da16fd46209-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-dqgd9\" (UID: \"1c2a33ba-76d0-4b81-a41d-9da16fd46209\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-dqgd9" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798972 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64twc\" (UniqueName: \"kubernetes.io/projected/cf6b6560-1731-4fb1-b3c2-8257002842d6-kube-api-access-64twc\") pod \"cluster-autoscaler-operator-866dc4744-fblgs\" (UID: \"cf6b6560-1731-4fb1-b3c2-8257002842d6\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-fblgs" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.798990 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee3529ac-6135-438b-9334-40c63c1fbd3d-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-84gh4\" (UID: \"ee3529ac-6135-438b-9334-40c63c1fbd3d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.799013 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/86884445-e29b-492b-8810-b63b938b9170-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-qsrjj\" (UID: \"86884445-e29b-492b-8810-b63b938b9170\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-qsrjj" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.799040 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89890698-dd48-486b-bd64-dc909aecd9e8-kube-api-access\") pod \"installer-3-master-0\" (UID: \"89890698-dd48-486b-bd64-dc909aecd9e8\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.799065 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0e25d4ed-4ad0-4706-ad25-7822c9a1d07e-certs\") pod \"machine-config-server-g7mqg\" (UID: \"0e25d4ed-4ad0-4706-ad25-7822c9a1d07e\") " pod="openshift-machine-config-operator/machine-config-server-g7mqg" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.799111 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rmw5\" (UniqueName: \"kubernetes.io/projected/fd40498c-f50a-408c-9a50-5d85ae666124-kube-api-access-2rmw5\") pod \"machine-approver-5c6485487f-qv29l\" (UID: \"fd40498c-f50a-408c-9a50-5d85ae666124\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-qv29l" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.799140 31830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3661faaa-2c9d-4fcd-a41f-71aa71a2e464-kube-api-access\") pod \"cluster-version-operator-7d58488df-czxxt\" (UID: \"3661faaa-2c9d-4fcd-a41f-71aa71a2e464\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-czxxt" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.799165 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f05dca6c-7626-4970-a869-4208ff5605a2-utilities\") pod \"redhat-operators-fbd5s\" (UID: \"f05dca6c-7626-4970-a869-4208ff5605a2\") " pod="openshift-marketplace/redhat-operators-fbd5s" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.799190 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fz85\" (UniqueName: \"kubernetes.io/projected/f05dca6c-7626-4970-a869-4208ff5605a2-kube-api-access-5fz85\") pod \"redhat-operators-fbd5s\" (UID: \"f05dca6c-7626-4970-a869-4208ff5605a2\") " pod="openshift-marketplace/redhat-operators-fbd5s" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.799238 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b2ecb08-a0f9-4127-967c-7087dea4c0f6-config\") pod \"machine-api-operator-6fbb6cf6f9-75w5c\" (UID: \"7b2ecb08-a0f9-4127-967c-7087dea4c0f6\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-75w5c" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.799267 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/667757ee-2670-4019-ad93-156521d3c2e7-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-cx8l9\" (UID: \"667757ee-2670-4019-ad93-156521d3c2e7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-cx8l9" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.799294 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbddm\" (UniqueName: \"kubernetes.io/projected/2b87f8c3-1898-46dd-bcac-e8f22f31e812-kube-api-access-kbddm\") pod \"machine-config-daemon-ms2wn\" (UID: \"2b87f8c3-1898-46dd-bcac-e8f22f31e812\") " pod="openshift-machine-config-operator/machine-config-daemon-ms2wn" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.799320 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7383e647-63b0-452d-a39b-02ad27a9b053-utilities\") pod \"community-operators-s22fd\" (UID: \"7383e647-63b0-452d-a39b-02ad27a9b053\") " pod="openshift-marketplace/community-operators-s22fd" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.799345 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-var-lib-cni-bin\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.799372 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f05dca6c-7626-4970-a869-4208ff5605a2-catalog-content\") 
pod \"redhat-operators-fbd5s\" (UID: \"f05dca6c-7626-4970-a869-4208ff5605a2\") " pod="openshift-marketplace/redhat-operators-fbd5s" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.799402 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/be4349fa-5c67-4135-80a7-b8a694553662-apiservice-cert\") pod \"packageserver-77d68bd5f8-w9hmb\" (UID: \"be4349fa-5c67-4135-80a7-b8a694553662\") " pod="openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.799439 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/979ba8cc-5a7b-4188-bf9e-c22d810888e9-encryption-config\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.799469 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2mdn\" (UniqueName: \"kubernetes.io/projected/944eac68-e72b-4aed-b5dc-d7d9703178a3-kube-api-access-m2mdn\") pod \"csi-snapshot-controller-64854d9cff-6m654\" (UID: \"944eac68-e72b-4aed-b5dc-d7d9703178a3\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.799502 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/86884445-e29b-492b-8810-b63b938b9170-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-qsrjj\" (UID: \"86884445-e29b-492b-8810-b63b938b9170\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-qsrjj" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.799530 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-sysconfig\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.799558 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3661faaa-2c9d-4fcd-a41f-71aa71a2e464-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-czxxt\" (UID: \"3661faaa-2c9d-4fcd-a41f-71aa71a2e464\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-czxxt" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.799596 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/13503fef-09b2-4dbe-9537-a5b361e7b591-etcd-client\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.799624 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-etcd-serving-ca\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " 
pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.799650 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13503fef-09b2-4dbe-9537-a5b361e7b591-serving-cert\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.799677 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/4264e82c-387f-4aa6-9ef6-b7beb61e098c-snapshots\") pod \"insights-operator-68bf6ff9d6-djdmh\" (UID: \"4264e82c-387f-4aa6-9ef6-b7beb61e098c\") " pod="openshift-insights/insights-operator-68bf6ff9d6-djdmh" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.799705 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da9becfb-a504-4ef7-92ed-cd2db439d5db-serving-cert\") pod \"route-controller-manager-fdb67f9cf-vkmd9\" (UID: \"da9becfb-a504-4ef7-92ed-cd2db439d5db\") " pod="openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.799744 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ed7eded-1e67-49ad-9777-c2ed1e006ce3-catalog-content\") pod \"redhat-marketplace-cjgpg\" (UID: \"0ed7eded-1e67-49ad-9777-c2ed1e006ce3\") " pod="openshift-marketplace/redhat-marketplace-cjgpg" Mar 19 12:14:22.799738 master-0 kubenswrapper[31830]: I0319 12:14:22.799780 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/a9d191d1-631d-4091-af8b-382283c18a5a-root\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.800157 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/6db3fcbe-0dbf-464f-944b-62427173c8d3-audit-log\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.800333 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c52bbbe7-bc16-432f-a471-bc561083a853-catalog-content\") pod \"certified-operators-tdnkp\" (UID: \"c52bbbe7-bc16-432f-a471-bc561083a853\") " pod="openshift-marketplace/certified-operators-tdnkp" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.800433 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.800684 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/2b87f8c3-1898-46dd-bcac-e8f22f31e812-mcd-auth-proxy-config\") pod \"machine-config-daemon-ms2wn\" (UID: \"2b87f8c3-1898-46dd-bcac-e8f22f31e812\") " pod="openshift-machine-config-operator/machine-config-daemon-ms2wn" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.800963 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9d2db220-4d5b-4819-a910-b186e1e9fb3e-ovnkube-config\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.801081 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/311b8bab-6cee-406d-8e0e-5b18a743d5fa-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-rdpvm\" (UID: \"311b8bab-6cee-406d-8e0e-5b18a743d5fa\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rdpvm" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.801287 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/311b8bab-6cee-406d-8e0e-5b18a743d5fa-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-rdpvm\" (UID: \"311b8bab-6cee-406d-8e0e-5b18a743d5fa\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rdpvm" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.801697 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ed7eded-1e67-49ad-9777-c2ed1e006ce3-utilities\") pod \"redhat-marketplace-cjgpg\" (UID: \"0ed7eded-1e67-49ad-9777-c2ed1e006ce3\") " pod="openshift-marketplace/redhat-marketplace-cjgpg" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.801755 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/919daf8d-763a-44bc-8916-86b425a27cbd-cache\") pod \"catalogd-controller-manager-6864dc98f7-j2w8z\" (UID: \"919daf8d-763a-44bc-8916-86b425a27cbd\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.801925 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-run-k8s-cni-cncf-io\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.801992 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/e559e487-18b0-4622-92fa-d06e7397b312-etc-tuned\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.802019 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-os-release\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.802103 31830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-system-cni-dir\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.802132 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e559e487-18b0-4622-92fa-d06e7397b312-tmp\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.802239 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-multus-socket-dir-parent\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.802301 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-var-lib-cni-bin\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.802395 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-cnibin\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.802437 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/4264e82c-387f-4aa6-9ef6-b7beb61e098c-snapshots\") pod \"insights-operator-68bf6ff9d6-djdmh\" (UID: \"4264e82c-387f-4aa6-9ef6-b7beb61e098c\") " pod="openshift-insights/insights-operator-68bf6ff9d6-djdmh" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.802448 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-etc-kubernetes\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.802509 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9d2db220-4d5b-4819-a910-b186e1e9fb3e-env-overrides\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.802510 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-textfile\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.802564 31830 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-system-cni-dir\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.802579 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f05dca6c-7626-4970-a869-4208ff5605a2-utilities\") pod \"redhat-operators-fbd5s\" (UID: \"f05dca6c-7626-4970-a869-4208ff5605a2\") " pod="openshift-marketplace/redhat-operators-fbd5s" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.802645 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-hostroot\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.802722 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/be4349fa-5c67-4135-80a7-b8a694553662-tmpfs\") pod \"packageserver-77d68bd5f8-w9hmb\" (UID: \"be4349fa-5c67-4135-80a7-b8a694553662\") " pod="openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.802780 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-run-netns\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.802828 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/284768b8-9d70-4cf7-bace-8adc6b587186-host-etc-kube\") pod \"network-operator-7bd846bfc4-nb8bk\" (UID: \"284768b8-9d70-4cf7-bace-8adc6b587186\") " pod="openshift-network-operator/network-operator-7bd846bfc4-nb8bk" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.802874 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/5238840f-3bef-43ad-ae68-ac187f073019-cache\") pod \"operator-controller-controller-manager-57777556ff-9mpxd\" (UID: \"5238840f-3bef-43ad-ae68-ac187f073019\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.802950 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9d2db220-4d5b-4819-a910-b186e1e9fb3e-ovnkube-script-lib\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.803356 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-multus-cni-dir\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.803394 31830 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-cnibin\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.803557 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-run-multus-certs\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.803907 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7044a7b3-4fac-40af-a31c-054a1a1db26b-os-release\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.804176 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7383e647-63b0-452d-a39b-02ad27a9b053-catalog-content\") pod \"community-operators-s22fd\" (UID: \"7383e647-63b0-452d-a39b-02ad27a9b053\") " pod="openshift-marketplace/community-operators-s22fd" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.804336 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6db3fcbe-0dbf-464f-944b-62427173c8d3-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.804374 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-var-lib-kubelet\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 12:14:22.804712 master-0 kubenswrapper[31830]: I0319 12:14:22.804451 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ed7eded-1e67-49ad-9777-c2ed1e006ce3-catalog-content\") pod \"redhat-marketplace-cjgpg\" (UID: \"0ed7eded-1e67-49ad-9777-c2ed1e006ce3\") " pod="openshift-marketplace/redhat-marketplace-cjgpg" Mar 19 12:14:22.806215 master-0 kubenswrapper[31830]: I0319 12:14:22.804832 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c52bbbe7-bc16-432f-a471-bc561083a853-utilities\") pod \"certified-operators-tdnkp\" (UID: \"c52bbbe7-bc16-432f-a471-bc561083a853\") " pod="openshift-marketplace/certified-operators-tdnkp" Mar 19 12:14:22.806215 master-0 kubenswrapper[31830]: I0319 12:14:22.804962 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7383e647-63b0-452d-a39b-02ad27a9b053-utilities\") pod \"community-operators-s22fd\" (UID: \"7383e647-63b0-452d-a39b-02ad27a9b053\") " pod="openshift-marketplace/community-operators-s22fd" Mar 19 12:14:22.806215 master-0 kubenswrapper[31830]: I0319 12:14:22.805182 31830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/bb1000ab-4419-43ce-b1b7-8f43413e017f-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" Mar 19 12:14:22.806215 master-0 kubenswrapper[31830]: I0319 12:14:22.805329 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/06f67c28-34fd-4356-92f0-edd0986ad34e-iptables-alerter-script\") pod \"iptables-alerter-276t5\" (UID: \"06f67c28-34fd-4356-92f0-edd0986ad34e\") " pod="openshift-network-operator/iptables-alerter-276t5" Mar 19 12:14:22.806215 master-0 kubenswrapper[31830]: I0319 12:14:22.805414 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-host-var-lib-cni-multus\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 12:14:22.806215 master-0 kubenswrapper[31830]: I0319 12:14:22.805450 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f05dca6c-7626-4970-a869-4208ff5605a2-catalog-content\") pod \"redhat-operators-fbd5s\" (UID: \"f05dca6c-7626-4970-a869-4208ff5605a2\") " pod="openshift-marketplace/redhat-operators-fbd5s" Mar 19 12:14:22.806215 master-0 kubenswrapper[31830]: I0319 12:14:22.805470 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fe245927-c937-4ec7-ab83-4900bade72cf-multus-conf-dir\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg" Mar 19 12:14:22.808011 master-0 kubenswrapper[31830]: I0319 12:14:22.807957 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 19 12:14:22.827809 master-0 kubenswrapper[31830]: I0319 12:14:22.827754 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 19 12:14:22.834040 master-0 kubenswrapper[31830]: I0319 12:14:22.833996 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8414b6b0-ee16-47a5-982b-ee58b136cfcf-webhook-cert\") pod \"network-node-identity-wd4nx\" (UID: \"8414b6b0-ee16-47a5-982b-ee58b136cfcf\") " pod="openshift-network-node-identity/network-node-identity-wd4nx" Mar 19 12:14:22.847971 master-0 kubenswrapper[31830]: I0319 12:14:22.847917 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 19 12:14:22.856542 master-0 kubenswrapper[31830]: I0319 12:14:22.856495 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8414b6b0-ee16-47a5-982b-ee58b136cfcf-env-overrides\") pod \"network-node-identity-wd4nx\" (UID: \"8414b6b0-ee16-47a5-982b-ee58b136cfcf\") " pod="openshift-network-node-identity/network-node-identity-wd4nx" Mar 19 12:14:22.868930 master-0 kubenswrapper[31830]: I0319 12:14:22.868865 31830 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 19 12:14:22.875236 master-0 kubenswrapper[31830]: I0319 12:14:22.872960 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/8414b6b0-ee16-47a5-982b-ee58b136cfcf-ovnkube-identity-cm\") pod \"network-node-identity-wd4nx\" (UID: \"8414b6b0-ee16-47a5-982b-ee58b136cfcf\") " pod="openshift-network-node-identity/network-node-identity-wd4nx" Mar 19 12:14:22.889277 master-0 kubenswrapper[31830]: I0319 12:14:22.889230 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.906013 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-kubelet\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.906061 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89890698-dd48-486b-bd64-dc909aecd9e8-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"89890698-dd48-486b-bd64-dc909aecd9e8\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.906124 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-kubelet\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.906172 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13503fef-09b2-4dbe-9537-a5b361e7b591-audit-dir\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.906277 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89890698-dd48-486b-bd64-dc909aecd9e8-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"89890698-dd48-486b-bd64-dc909aecd9e8\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.906343 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13503fef-09b2-4dbe-9537-a5b361e7b591-audit-dir\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.906483 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/ee3529ac-6135-438b-9334-40c63c1fbd3d-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-84gh4\" (UID: \"ee3529ac-6135-438b-9334-40c63c1fbd3d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" Mar 19 
12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.906528 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-lib-modules\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.906567 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-host\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.906602 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/ee3529ac-6135-438b-9334-40c63c1fbd3d-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-84gh4\" (UID: \"ee3529ac-6135-438b-9334-40c63c1fbd3d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.906613 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-host\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.906710 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-lib-modules\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.906748 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/5238840f-3bef-43ad-ae68-ac187f073019-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-9mpxd\" (UID: \"5238840f-3bef-43ad-ae68-ac187f073019\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.906836 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3661faaa-2c9d-4fcd-a41f-71aa71a2e464-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-czxxt\" (UID: \"3661faaa-2c9d-4fcd-a41f-71aa71a2e464\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-czxxt" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.906868 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89890698-dd48-486b-bd64-dc909aecd9e8-var-lock\") pod \"installer-3-master-0\" (UID: \"89890698-dd48-486b-bd64-dc909aecd9e8\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.906886 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/a9d191d1-631d-4091-af8b-382283c18a5a-sys\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.906920 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89890698-dd48-486b-bd64-dc909aecd9e8-var-lock\") pod \"installer-3-master-0\" (UID: \"89890698-dd48-486b-bd64-dc909aecd9e8\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.906946 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/5238840f-3bef-43ad-ae68-ac187f073019-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-9mpxd\" (UID: \"5238840f-3bef-43ad-ae68-ac187f073019\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.906963 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-node-log\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.906949 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-node-log\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.906990 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a9d191d1-631d-4091-af8b-382283c18a5a-sys\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.906993 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-sys\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.907020 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-sys\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.907038 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3661faaa-2c9d-4fcd-a41f-71aa71a2e464-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-czxxt\" (UID: \"3661faaa-2c9d-4fcd-a41f-71aa71a2e464\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-czxxt" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.907087 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-systemd-units\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.907166 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-systemd-units\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.907222 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-run-systemd\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.907246 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/919daf8d-763a-44bc-8916-86b425a27cbd-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-j2w8z\" (UID: \"919daf8d-763a-44bc-8916-86b425a27cbd\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.907274 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-run-systemd\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.907338 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2b87f8c3-1898-46dd-bcac-e8f22f31e812-rootfs\") pod \"machine-config-daemon-ms2wn\" (UID: \"2b87f8c3-1898-46dd-bcac-e8f22f31e812\") " pod="openshift-machine-config-operator/machine-config-daemon-ms2wn" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.907400 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/06f67c28-34fd-4356-92f0-edd0986ad34e-host-slash\") pod \"iptables-alerter-276t5\" (UID: \"06f67c28-34fd-4356-92f0-edd0986ad34e\") " pod="openshift-network-operator/iptables-alerter-276t5" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.907417 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/919daf8d-763a-44bc-8916-86b425a27cbd-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-j2w8z\" (UID: \"919daf8d-763a-44bc-8916-86b425a27cbd\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.907422 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.907429 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-slash\") pod \"ovnkube-node-lk9x9\" (UID: 
\"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.907446 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2b87f8c3-1898-46dd-bcac-e8f22f31e812-rootfs\") pod \"machine-config-daemon-ms2wn\" (UID: \"2b87f8c3-1898-46dd-bcac-e8f22f31e812\") " pod="openshift-machine-config-operator/machine-config-daemon-ms2wn" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.907479 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-slash\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.907450 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/06f67c28-34fd-4356-92f0-edd0986ad34e-host-slash\") pod \"iptables-alerter-276t5\" (UID: \"06f67c28-34fd-4356-92f0-edd0986ad34e\") " pod="openshift-network-operator/iptables-alerter-276t5" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.907490 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-wtmp\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.907521 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-wtmp\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.907601 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/13503fef-09b2-4dbe-9537-a5b361e7b591-node-pullsecrets\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.907638 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-run-openvswitch\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.907705 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-run-openvswitch\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.907718 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/13503fef-09b2-4dbe-9537-a5b361e7b591-node-pullsecrets\") pod 
\"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.907737 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-run\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.907785 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-run\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.907787 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-sysctl-d\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.907824 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-sysctl-d\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.907817 master-0 kubenswrapper[31830]: I0319 12:14:22.907869 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89890698-dd48-486b-bd64-dc909aecd9e8-kube-api-access\") pod \"installer-3-master-0\" (UID: \"89890698-dd48-486b-bd64-dc909aecd9e8\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.907987 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-sysconfig\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908008 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3661faaa-2c9d-4fcd-a41f-71aa71a2e464-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-czxxt\" (UID: \"3661faaa-2c9d-4fcd-a41f-71aa71a2e464\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-czxxt" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908045 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3661faaa-2c9d-4fcd-a41f-71aa71a2e464-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-czxxt\" (UID: \"3661faaa-2c9d-4fcd-a41f-71aa71a2e464\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-czxxt" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908063 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: 
\"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-sysconfig\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908156 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/a9d191d1-631d-4091-af8b-382283c18a5a-root\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908182 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4800b72f-7e54-4069-b771-87fb459eeb78-hosts-file\") pod \"node-resolver-jqzxt\" (UID: \"4800b72f-7e54-4069-b771-87fb459eeb78\") " pod="openshift-dns/node-resolver-jqzxt" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908196 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/a9d191d1-631d-4091-af8b-382283c18a5a-root\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908211 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/919daf8d-763a-44bc-8916-86b425a27cbd-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-j2w8z\" (UID: \"919daf8d-763a-44bc-8916-86b425a27cbd\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908244 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4800b72f-7e54-4069-b771-87fb459eeb78-hosts-file\") pod \"node-resolver-jqzxt\" (UID: \"4800b72f-7e54-4069-b771-87fb459eeb78\") " pod="openshift-dns/node-resolver-jqzxt" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908263 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-systemd\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908283 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-etc-openvswitch\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908284 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/919daf8d-763a-44bc-8916-86b425a27cbd-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-j2w8z\" (UID: \"919daf8d-763a-44bc-8916-86b425a27cbd\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908304 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-etc-openvswitch\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908323 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-systemd\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908358 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-kubernetes\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908400 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/5238840f-3bef-43ad-ae68-ac187f073019-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-9mpxd\" (UID: \"5238840f-3bef-43ad-ae68-ac187f073019\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908420 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-kubernetes\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908454 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/5238840f-3bef-43ad-ae68-ac187f073019-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-9mpxd\" (UID: \"5238840f-3bef-43ad-ae68-ac187f073019\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908497 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-run-ovn\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908539 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-run-netns\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908555 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-var-lib-openvswitch\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.909651 master-0 
kubenswrapper[31830]: I0319 12:14:22.908582 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-run-ovn\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908583 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-var-lib-openvswitch\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908598 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/979ba8cc-5a7b-4188-bf9e-c22d810888e9-audit-dir\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908593 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-run-netns\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908623 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/979ba8cc-5a7b-4188-bf9e-c22d810888e9-audit-dir\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908660 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-sysctl-conf\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908681 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-run-ovn-kubernetes\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908715 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-cni-bin\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908779 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-cni-bin\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.909651 
master-0 kubenswrapper[31830]: I0319 12:14:22.908811 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-run-ovn-kubernetes\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908847 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-sysctl-conf\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908883 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-modprobe-d\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908929 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-var-lib-kubelet\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908953 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-log-socket\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908985 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-var-lib-kubelet\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908991 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.909004 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-log-socket\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.908988 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/e559e487-18b0-4622-92fa-d06e7397b312-etc-modprobe-d\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " 
pod="openshift-cluster-node-tuning-operator/tuned-dc5br" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.909030 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.909154 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-cni-netd\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.909651 master-0 kubenswrapper[31830]: I0319 12:14:22.909231 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d2db220-4d5b-4819-a910-b186e1e9fb3e-host-cni-netd\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.915463 master-0 kubenswrapper[31830]: I0319 12:14:22.915429 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9d2db220-4d5b-4819-a910-b186e1e9fb3e-ovn-node-metrics-cert\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:22.933230 master-0 kubenswrapper[31830]: I0319 12:14:22.933181 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 19 12:14:22.949635 master-0 kubenswrapper[31830]: I0319 12:14:22.947108 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2b87f8c3-1898-46dd-bcac-e8f22f31e812-proxy-tls\") pod \"machine-config-daemon-ms2wn\" (UID: \"2b87f8c3-1898-46dd-bcac-e8f22f31e812\") " pod="openshift-machine-config-operator/machine-config-daemon-ms2wn" Mar 19 12:14:22.951891 master-0 kubenswrapper[31830]: I0319 12:14:22.951125 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 19 12:14:22.971865 master-0 kubenswrapper[31830]: I0319 12:14:22.971809 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 19 12:14:22.976319 master-0 kubenswrapper[31830]: I0319 12:14:22.974058 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da9becfb-a504-4ef7-92ed-cd2db439d5db-serving-cert\") pod \"route-controller-manager-fdb67f9cf-vkmd9\" (UID: \"da9becfb-a504-4ef7-92ed-cd2db439d5db\") " pod="openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9" Mar 19 12:14:22.979068 master-0 kubenswrapper[31830]: I0319 12:14:22.978435 31830 generic.go:334] "Generic (PLEG): container finished" podID="91112ce6-4f9d-44c1-a4e7-fea126554bcf" containerID="02580d8818d0f202a13ac68e82f20d4293f3530799a86f4d7e26b5116036380f" exitCode=0 Mar 19 12:14:22.979068 master-0 kubenswrapper[31830]: I0319 12:14:22.978482 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" event={"ID":"91112ce6-4f9d-44c1-a4e7-fea126554bcf","Type":"ContainerDied","Data":"02580d8818d0f202a13ac68e82f20d4293f3530799a86f4d7e26b5116036380f"} Mar 19 12:14:22.979068 master-0 kubenswrapper[31830]: I0319 12:14:22.978985 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 19 12:14:22.988957 master-0 kubenswrapper[31830]: I0319 12:14:22.987957 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 19 12:14:22.988957 master-0 kubenswrapper[31830]: I0319 12:14:22.988932 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 19 12:14:22.995815 master-0 kubenswrapper[31830]: I0319 12:14:22.995756 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da9becfb-a504-4ef7-92ed-cd2db439d5db-config\") pod \"route-controller-manager-fdb67f9cf-vkmd9\" (UID: \"da9becfb-a504-4ef7-92ed-cd2db439d5db\") " pod="openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9" Mar 19 12:14:23.015752 master-0 kubenswrapper[31830]: I0319 12:14:23.014176 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 19 12:14:23.029838 master-0 kubenswrapper[31830]: I0319 12:14:23.027918 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 19 12:14:23.032168 master-0 kubenswrapper[31830]: I0319 12:14:23.031913 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da9becfb-a504-4ef7-92ed-cd2db439d5db-client-ca\") pod \"route-controller-manager-fdb67f9cf-vkmd9\" (UID: \"da9becfb-a504-4ef7-92ed-cd2db439d5db\") " pod="openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9" Mar 19 12:14:23.047975 master-0 kubenswrapper[31830]: I0319 12:14:23.047927 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 19 12:14:23.056234 master-0 kubenswrapper[31830]: I0319 12:14:23.056191 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a6b082a-649b-43f6-8e24-cf222873fe39-serving-cert\") pod \"controller-manager-7cdddc6cb-q222c\" (UID: \"3a6b082a-649b-43f6-8e24-cf222873fe39\") " pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 12:14:23.070251 master-0 kubenswrapper[31830]: I0319 12:14:23.070201 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 19 12:14:23.090077 master-0 kubenswrapper[31830]: I0319 12:14:23.090023 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 19 12:14:23.097353 master-0 kubenswrapper[31830]: I0319 12:14:23.097282 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a6b082a-649b-43f6-8e24-cf222873fe39-config\") pod \"controller-manager-7cdddc6cb-q222c\" (UID: \"3a6b082a-649b-43f6-8e24-cf222873fe39\") " pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 12:14:23.111284 master-0 kubenswrapper[31830]: I0319 
Mar 19 12:14:23.120067 master-0 kubenswrapper[31830]: I0319 12:14:23.120023 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a6b082a-649b-43f6-8e24-cf222873fe39-client-ca\") pod \"controller-manager-7cdddc6cb-q222c\" (UID: \"3a6b082a-649b-43f6-8e24-cf222873fe39\") " pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c"
Mar 19 12:14:23.120385 master-0 kubenswrapper[31830]: I0319 12:14:23.120345 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89890698-dd48-486b-bd64-dc909aecd9e8-kubelet-dir\") pod \"89890698-dd48-486b-bd64-dc909aecd9e8\" (UID: \"89890698-dd48-486b-bd64-dc909aecd9e8\") "
Mar 19 12:14:23.120445 master-0 kubenswrapper[31830]: I0319 12:14:23.120423 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89890698-dd48-486b-bd64-dc909aecd9e8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "89890698-dd48-486b-bd64-dc909aecd9e8" (UID: "89890698-dd48-486b-bd64-dc909aecd9e8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 12:14:23.120503 master-0 kubenswrapper[31830]: I0319 12:14:23.120482 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89890698-dd48-486b-bd64-dc909aecd9e8-var-lock\") pod \"89890698-dd48-486b-bd64-dc909aecd9e8\" (UID: \"89890698-dd48-486b-bd64-dc909aecd9e8\") "
Mar 19 12:14:23.120583 master-0 kubenswrapper[31830]: I0319 12:14:23.120560 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89890698-dd48-486b-bd64-dc909aecd9e8-var-lock" (OuterVolumeSpecName: "var-lock") pod "89890698-dd48-486b-bd64-dc909aecd9e8" (UID: "89890698-dd48-486b-bd64-dc909aecd9e8"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
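
Here the completed installer pod's volumes are cleaned up in the reverse order of mounting: UnmountVolume starts, the plugin's TearDown runs, and only afterwards is the volume reported detached, with DevicePath \"\" because a host-path volume has no block device to detach. A standalone sketch of that ordering, under the same simplifications as the mount sketch earlier and not the kubelet's actual code:

package main

import "fmt"

// tearDown stands in for the volume plugin's TearDown.
func tearDown(volume string) error { return nil }

// unmountAll walks the mounted volumes (volume name -> owning pod UID) and
// removes a volume from the actual state only after TearDown succeeds,
// which is when the "Volume detached" line appears.
func unmountAll(mounted map[string]string, podUID string) {
	for volume, owner := range mounted {
		if owner != podUID {
			continue
		}
		fmt.Printf("operationExecutor.UnmountVolume started for volume %q pod %q\n", volume, podUID)
		if err := tearDown(volume); err != nil {
			continue // stays mounted; retried on the next reconcile pass
		}
		delete(mounted, volume)
		fmt.Printf("Volume detached for volume %q on node %q DevicePath %q\n", volume, "master-0", "")
	}
}

func main() {
	uid := "89890698-dd48-486b-bd64-dc909aecd9e8"
	unmountAll(map[string]string{"kubelet-dir": uid, "var-lock": uid}, uid)
}
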
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:14:23.123443 master-0 kubenswrapper[31830]: I0319 12:14:23.122227 31830 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89890698-dd48-486b-bd64-dc909aecd9e8-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 19 12:14:23.123443 master-0 kubenswrapper[31830]: I0319 12:14:23.122258 31830 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89890698-dd48-486b-bd64-dc909aecd9e8-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 12:14:23.133891 master-0 kubenswrapper[31830]: I0319 12:14:23.133335 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 19 12:14:23.135359 master-0 kubenswrapper[31830]: I0319 12:14:23.135317 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3a6b082a-649b-43f6-8e24-cf222873fe39-proxy-ca-bundles\") pod \"controller-manager-7cdddc6cb-q222c\" (UID: \"3a6b082a-649b-43f6-8e24-cf222873fe39\") " pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 12:14:23.147654 master-0 kubenswrapper[31830]: I0319 12:14:23.147607 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 19 12:14:23.170156 master-0 kubenswrapper[31830]: I0319 12:14:23.170035 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 19 12:14:23.198033 master-0 kubenswrapper[31830]: I0319 12:14:23.197921 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 19 12:14:23.204230 master-0 kubenswrapper[31830]: I0319 12:14:23.204193 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/91112ce6-4f9d-44c1-a4e7-fea126554bcf-stats-auth\") pod \"router-default-7dcf5569b5-lkpgl\" (UID: \"91112ce6-4f9d-44c1-a4e7-fea126554bcf\") " pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 12:14:23.209860 master-0 kubenswrapper[31830]: I0319 12:14:23.207164 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 19 12:14:23.213136 master-0 kubenswrapper[31830]: I0319 12:14:23.213067 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/91112ce6-4f9d-44c1-a4e7-fea126554bcf-metrics-certs\") pod \"router-default-7dcf5569b5-lkpgl\" (UID: \"91112ce6-4f9d-44c1-a4e7-fea126554bcf\") " pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 12:14:23.227382 master-0 kubenswrapper[31830]: I0319 12:14:23.227268 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 19 12:14:23.252667 master-0 kubenswrapper[31830]: I0319 12:14:23.252624 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 19 12:14:23.261414 master-0 kubenswrapper[31830]: I0319 12:14:23.261372 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/91112ce6-4f9d-44c1-a4e7-fea126554bcf-default-certificate\") pod \"router-default-7dcf5569b5-lkpgl\" (UID: 
\"91112ce6-4f9d-44c1-a4e7-fea126554bcf\") " pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 12:14:23.267177 master-0 kubenswrapper[31830]: I0319 12:14:23.267139 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 19 12:14:23.276498 master-0 kubenswrapper[31830]: I0319 12:14:23.276449 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3661faaa-2c9d-4fcd-a41f-71aa71a2e464-serving-cert\") pod \"cluster-version-operator-7d58488df-czxxt\" (UID: \"3661faaa-2c9d-4fcd-a41f-71aa71a2e464\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-czxxt" Mar 19 12:14:23.288881 master-0 kubenswrapper[31830]: I0319 12:14:23.288823 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 19 12:14:23.294610 master-0 kubenswrapper[31830]: I0319 12:14:23.294565 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91112ce6-4f9d-44c1-a4e7-fea126554bcf-service-ca-bundle\") pod \"router-default-7dcf5569b5-lkpgl\" (UID: \"91112ce6-4f9d-44c1-a4e7-fea126554bcf\") " pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 12:14:23.308032 master-0 kubenswrapper[31830]: I0319 12:14:23.307983 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 19 12:14:23.328056 master-0 kubenswrapper[31830]: I0319 12:14:23.327470 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 19 12:14:23.333991 master-0 kubenswrapper[31830]: I0319 12:14:23.333891 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3661faaa-2c9d-4fcd-a41f-71aa71a2e464-service-ca\") pod \"cluster-version-operator-7d58488df-czxxt\" (UID: \"3661faaa-2c9d-4fcd-a41f-71aa71a2e464\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-czxxt" Mar 19 12:14:23.348030 master-0 kubenswrapper[31830]: I0319 12:14:23.347983 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 19 12:14:23.355681 master-0 kubenswrapper[31830]: I0319 12:14:23.355607 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6863b35c-44ac-4333-97b5-e8e38b440a20-signing-key\") pod \"service-ca-79bc6b8d76-5rbp5\" (UID: \"6863b35c-44ac-4333-97b5-e8e38b440a20\") " pod="openshift-service-ca/service-ca-79bc6b8d76-5rbp5" Mar 19 12:14:23.368018 master-0 kubenswrapper[31830]: I0319 12:14:23.367972 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 19 12:14:23.388101 master-0 kubenswrapper[31830]: I0319 12:14:23.388053 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 19 12:14:23.408502 master-0 kubenswrapper[31830]: I0319 12:14:23.408447 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 19 12:14:23.414783 master-0 kubenswrapper[31830]: I0319 12:14:23.414741 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: 
\"kubernetes.io/secret/919daf8d-763a-44bc-8916-86b425a27cbd-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-j2w8z\" (UID: \"919daf8d-763a-44bc-8916-86b425a27cbd\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 12:14:23.431688 master-0 kubenswrapper[31830]: I0319 12:14:23.431642 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 19 12:14:23.433018 master-0 kubenswrapper[31830]: I0319 12:14:23.432979 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6863b35c-44ac-4333-97b5-e8e38b440a20-signing-cabundle\") pod \"service-ca-79bc6b8d76-5rbp5\" (UID: \"6863b35c-44ac-4333-97b5-e8e38b440a20\") " pod="openshift-service-ca/service-ca-79bc6b8d76-5rbp5" Mar 19 12:14:23.455001 master-0 kubenswrapper[31830]: I0319 12:14:23.454954 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 19 12:14:23.467943 master-0 kubenswrapper[31830]: I0319 12:14:23.467894 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 19 12:14:23.488468 master-0 kubenswrapper[31830]: I0319 12:14:23.488347 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 19 12:14:23.493241 master-0 kubenswrapper[31830]: I0319 12:14:23.493183 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/882fd952-1914-47be-96bf-cac6341ca877-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-z8xf6\" (UID: \"882fd952-1914-47be-96bf-cac6341ca877\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-z8xf6" Mar 19 12:14:23.507233 master-0 kubenswrapper[31830]: I0319 12:14:23.507186 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 19 12:14:23.513623 master-0 kubenswrapper[31830]: I0319 12:14:23.513568 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/919daf8d-763a-44bc-8916-86b425a27cbd-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-j2w8z\" (UID: \"919daf8d-763a-44bc-8916-86b425a27cbd\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 12:14:23.534143 master-0 kubenswrapper[31830]: I0319 12:14:23.532637 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 19 12:14:23.547272 master-0 kubenswrapper[31830]: I0319 12:14:23.547229 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 19 12:14:23.587674 master-0 kubenswrapper[31830]: I0319 12:14:23.587614 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 19 12:14:23.589600 master-0 kubenswrapper[31830]: I0319 12:14:23.589444 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 19 12:14:23.607757 master-0 kubenswrapper[31830]: I0319 12:14:23.607704 31830 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 19 12:14:23.613340 master-0 kubenswrapper[31830]: I0319 12:14:23.613257 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/5238840f-3bef-43ad-ae68-ac187f073019-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-9mpxd\" (UID: \"5238840f-3bef-43ad-ae68-ac187f073019\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 12:14:23.628565 master-0 kubenswrapper[31830]: I0319 12:14:23.628367 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 19 12:14:23.636378 master-0 kubenswrapper[31830]: I0319 12:14:23.636343 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/979ba8cc-5a7b-4188-bf9e-c22d810888e9-encryption-config\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 12:14:23.648854 master-0 kubenswrapper[31830]: I0319 12:14:23.647781 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 19 12:14:23.667036 master-0 kubenswrapper[31830]: I0319 12:14:23.666970 31830 request.go:700] Waited for 1.00084968s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Mar 19 12:14:23.668960 master-0 kubenswrapper[31830]: I0319 12:14:23.668726 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 19 12:14:23.687337 master-0 kubenswrapper[31830]: I0319 12:14:23.687292 31830 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Mar 19 12:14:23.689983 master-0 kubenswrapper[31830]: I0319 12:14:23.689853 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 19 12:14:23.707174 master-0 kubenswrapper[31830]: I0319 12:14:23.707124 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 19 12:14:23.718455 master-0 kubenswrapper[31830]: I0319 12:14:23.718354 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/979ba8cc-5a7b-4188-bf9e-c22d810888e9-audit-policies\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 12:14:23.748111 master-0 kubenswrapper[31830]: I0319 12:14:23.747998 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 19 12:14:23.754869 master-0 kubenswrapper[31830]: I0319 12:14:23.754776 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/979ba8cc-5a7b-4188-bf9e-c22d810888e9-etcd-serving-ca\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 12:14:23.768204 master-0 kubenswrapper[31830]: I0319 12:14:23.767373 31830 
Mar 19 12:14:23.768204 master-0 kubenswrapper[31830]: I0319 12:14:23.767373 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 19 12:14:23.787153 master-0 kubenswrapper[31830]: I0319 12:14:23.787111 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 19 12:14:23.790493 master-0 kubenswrapper[31830]: I0319 12:14:23.790460 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/979ba8cc-5a7b-4188-bf9e-c22d810888e9-trusted-ca-bundle\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r"
Mar 19 12:14:23.800760 master-0 kubenswrapper[31830]: E0319 12:14:23.800723 31830 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition
Mar 19 12:14:23.800995 master-0 kubenswrapper[31830]: E0319 12:14:23.800837 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b2ecb08-a0f9-4127-967c-7087dea4c0f6-images podName:7b2ecb08-a0f9-4127-967c-7087dea4c0f6 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.300819529 +0000 UTC m=+2.849780233 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/7b2ecb08-a0f9-4127-967c-7087dea4c0f6-images") pod "machine-api-operator-6fbb6cf6f9-75w5c" (UID: "7b2ecb08-a0f9-4127-967c-7087dea4c0f6") : failed to sync configmap cache: timed out waiting for the condition
Mar 19 12:14:23.801296 master-0 kubenswrapper[31830]: E0319 12:14:23.801091 31830 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition
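
The error burst starting above shows mounts failing because per-object caches have not synced yet, with nestedpendingoperations.go:348 gating each retry: no retry is permitted before now plus durationBeforeRetry, which starts at the logged 500ms and grows on repeated failures. A sketch of that gating arithmetic; the doubling and the 2-minute cap are assumptions for illustration, not values read from the kubelet source:

package main

import (
	"fmt"
	"time"
)

// pendingOperation gates retries of a failed mount the way the
// nestedpendingoperations lines report: no retry before notBefore.
type pendingOperation struct {
	failures       int
	durationBefore time.Duration // the logged durationBeforeRetry
	notBefore      time.Time     // "No retries permitted until ..."
}

func (op *pendingOperation) recordFailure(now time.Time) {
	const initialDelay = 500 * time.Millisecond // matches the log
	const maxDelay = 2 * time.Minute            // assumed cap, not from source
	if op.failures == 0 {
		op.durationBefore = initialDelay
	} else {
		op.durationBefore *= 2
		if op.durationBefore > maxDelay {
			op.durationBefore = maxDelay
		}
	}
	op.failures++
	op.notBefore = now.Add(op.durationBefore)
}

func main() {
	op := &pendingOperation{}
	now := time.Now()
	for i := 0; i < 4; i++ {
		op.recordFailure(now)
		fmt.Printf("failure %d: no retries permitted until %s (durationBeforeRetry %s)\n",
			op.failures, op.notBefore.Format(time.RFC3339Nano), op.durationBefore)
		now = op.notBefore // pretend each retry fails immediately when allowed
	}
}
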
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/7b2ecb08-a0f9-4127-967c-7087dea4c0f6-machine-api-operator-tls") pod "machine-api-operator-6fbb6cf6f9-75w5c" (UID: "7b2ecb08-a0f9-4127-967c-7087dea4c0f6") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.801296 master-0 kubenswrapper[31830]: E0319 12:14:23.801150 31830 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-ets52rpou52es: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.801296 master-0 kubenswrapper[31830]: E0319 12:14:23.801172 31830 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.801296 master-0 kubenswrapper[31830]: E0319 12:14:23.801172 31830 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.801296 master-0 kubenswrapper[31830]: E0319 12:14:23.801206 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-serving-certs-ca-bundle podName:7c80f8d0-ee9b-4a4d-ba92-e241b2552e58 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.301200421 +0000 UTC m=+2.850161125 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-serving-certs-ca-bundle") pod "telemeter-client-6975d7769d-nvxfv" (UID: "7c80f8d0-ee9b-4a4d-ba92-e241b2552e58") : failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.801296 master-0 kubenswrapper[31830]: E0319 12:14:23.801246 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-client-ca-bundle podName:6db3fcbe-0dbf-464f-944b-62427173c8d3 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.301224851 +0000 UTC m=+2.850185555 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-client-ca-bundle") pod "metrics-server-86889676f6-phlgd" (UID: "6db3fcbe-0dbf-464f-944b-62427173c8d3") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.801296 master-0 kubenswrapper[31830]: E0319 12:14:23.801266 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c2a33ba-76d0-4b81-a41d-9da16fd46209-webhook-certs podName:1c2a33ba-76d0-4b81-a41d-9da16fd46209 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.301257782 +0000 UTC m=+2.850218486 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1c2a33ba-76d0-4b81-a41d-9da16fd46209-webhook-certs") pod "multus-admission-controller-58c9f8fc64-dqgd9" (UID: "1c2a33ba-76d0-4b81-a41d-9da16fd46209") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.801550 master-0 kubenswrapper[31830]: E0319 12:14:23.801306 31830 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.801550 master-0 kubenswrapper[31830]: E0319 12:14:23.801418 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bb1000ab-4419-43ce-b1b7-8f43413e017f-metrics-client-ca podName:bb1000ab-4419-43ce-b1b7-8f43413e017f nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.301393557 +0000 UTC m=+2.850354261 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/bb1000ab-4419-43ce-b1b7-8f43413e017f-metrics-client-ca") pod "kube-state-metrics-7bbc969446-bnf7q" (UID: "bb1000ab-4419-43ce-b1b7-8f43413e017f") : failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.802361 master-0 kubenswrapper[31830]: E0319 12:14:23.802331 31830 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.802604 master-0 kubenswrapper[31830]: E0319 12:14:23.802376 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-federate-client-tls podName:7c80f8d0-ee9b-4a4d-ba92-e241b2552e58 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.302366387 +0000 UTC m=+2.851327081 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-federate-client-tls") pod "telemeter-client-6975d7769d-nvxfv" (UID: "7c80f8d0-ee9b-4a4d-ba92-e241b2552e58") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.802604 master-0 kubenswrapper[31830]: E0319 12:14:23.802405 31830 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.802604 master-0 kubenswrapper[31830]: E0319 12:14:23.802411 31830 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.802604 master-0 kubenswrapper[31830]: E0319 12:14:23.802426 31830 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.802604 master-0 kubenswrapper[31830]: E0319 12:14:23.802435 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb1000ab-4419-43ce-b1b7-8f43413e017f-kube-state-metrics-kube-rbac-proxy-config podName:bb1000ab-4419-43ce-b1b7-8f43413e017f nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.302423069 +0000 UTC m=+2.851383773 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/bb1000ab-4419-43ce-b1b7-8f43413e017f-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7bbc969446-bnf7q" (UID: "bb1000ab-4419-43ce-b1b7-8f43413e017f") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.802604 master-0 kubenswrapper[31830]: E0319 12:14:23.802422 31830 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.802604 master-0 kubenswrapper[31830]: E0319 12:14:23.802446 31830 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.802604 master-0 kubenswrapper[31830]: E0319 12:14:23.802469 31830 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.802604 master-0 kubenswrapper[31830]: E0319 12:14:23.802480 31830 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.802604 master-0 kubenswrapper[31830]: E0319 12:14:23.802484 31830 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.802604 master-0 kubenswrapper[31830]: E0319 12:14:23.802456 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13503fef-09b2-4dbe-9537-a5b361e7b591-encryption-config podName:13503fef-09b2-4dbe-9537-a5b361e7b591 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.30244604 +0000 UTC m=+2.851406744 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/13503fef-09b2-4dbe-9537-a5b361e7b591-encryption-config") pod "apiserver-897cc986b-vpg2l" (UID: "13503fef-09b2-4dbe-9537-a5b361e7b591") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.802604 master-0 kubenswrapper[31830]: E0319 12:14:23.802528 31830 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.802604 master-0 kubenswrapper[31830]: E0319 12:14:23.802532 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-secret-metrics-client-certs podName:6db3fcbe-0dbf-464f-944b-62427173c8d3 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.302517112 +0000 UTC m=+2.851477816 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-secret-metrics-client-certs") pod "metrics-server-86889676f6-phlgd" (UID: "6db3fcbe-0dbf-464f-944b-62427173c8d3") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.802604 master-0 kubenswrapper[31830]: E0319 12:14:23.802540 31830 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.802604 master-0 kubenswrapper[31830]: E0319 12:14:23.802590 31830 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.802604 master-0 kubenswrapper[31830]: E0319 12:14:23.802550 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/311b8bab-6cee-406d-8e0e-5b18a743d5fa-proxy-tls podName:311b8bab-6cee-406d-8e0e-5b18a743d5fa nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.302541423 +0000 UTC m=+2.851502127 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/311b8bab-6cee-406d-8e0e-5b18a743d5fa-proxy-tls") pod "machine-config-controller-b4f87c5b9-rdpvm" (UID: "311b8bab-6cee-406d-8e0e-5b18a743d5fa") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.803320 master-0 kubenswrapper[31830]: E0319 12:14:23.802634 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be4349fa-5c67-4135-80a7-b8a694553662-webhook-cert podName:be4349fa-5c67-4135-80a7-b8a694553662 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.302628036 +0000 UTC m=+2.851588740 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/be4349fa-5c67-4135-80a7-b8a694553662-webhook-cert") pod "packageserver-77d68bd5f8-w9hmb" (UID: "be4349fa-5c67-4135-80a7-b8a694553662") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.803320 master-0 kubenswrapper[31830]: E0319 12:14:23.802651 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-trusted-ca-bundle podName:13503fef-09b2-4dbe-9537-a5b361e7b591 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.302644216 +0000 UTC m=+2.851604920 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-trusted-ca-bundle") pod "apiserver-897cc986b-vpg2l" (UID: "13503fef-09b2-4dbe-9537-a5b361e7b591") : failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.803320 master-0 kubenswrapper[31830]: E0319 12:14:23.802663 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-telemeter-client-tls podName:7c80f8d0-ee9b-4a4d-ba92-e241b2552e58 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.302658567 +0000 UTC m=+2.851619271 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-telemeter-client-tls") pod "telemeter-client-6975d7769d-nvxfv" (UID: "7c80f8d0-ee9b-4a4d-ba92-e241b2552e58") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.803320 master-0 kubenswrapper[31830]: E0319 12:14:23.802671 31830 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.803320 master-0 kubenswrapper[31830]: E0319 12:14:23.802675 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4264e82c-387f-4aa6-9ef6-b7beb61e098c-service-ca-bundle podName:4264e82c-387f-4aa6-9ef6-b7beb61e098c nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.302669577 +0000 UTC m=+2.851630281 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/4264e82c-387f-4aa6-9ef6-b7beb61e098c-service-ca-bundle") pod "insights-operator-68bf6ff9d6-djdmh" (UID: "4264e82c-387f-4aa6-9ef6-b7beb61e098c") : failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.803320 master-0 kubenswrapper[31830]: E0319 12:14:23.802692 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13503fef-09b2-4dbe-9537-a5b361e7b591-etcd-client podName:13503fef-09b2-4dbe-9537-a5b361e7b591 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.302685978 +0000 UTC m=+2.851646682 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/13503fef-09b2-4dbe-9537-a5b361e7b591-etcd-client") pod "apiserver-897cc986b-vpg2l" (UID: "13503fef-09b2-4dbe-9537-a5b361e7b591") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.803320 master-0 kubenswrapper[31830]: E0319 12:14:23.802713 31830 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.803320 master-0 kubenswrapper[31830]: E0319 12:14:23.802726 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-etcd-serving-ca podName:13503fef-09b2-4dbe-9537-a5b361e7b591 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.302699258 +0000 UTC m=+2.851659962 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-etcd-serving-ca") pod "apiserver-897cc986b-vpg2l" (UID: "13503fef-09b2-4dbe-9537-a5b361e7b591") : failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.803320 master-0 kubenswrapper[31830]: E0319 12:14:23.802740 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be4349fa-5c67-4135-80a7-b8a694553662-apiservice-cert podName:be4349fa-5c67-4135-80a7-b8a694553662 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.302735519 +0000 UTC m=+2.851696223 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/be4349fa-5c67-4135-80a7-b8a694553662-apiservice-cert") pod "packageserver-77d68bd5f8-w9hmb" (UID: "be4349fa-5c67-4135-80a7-b8a694553662") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.803320 master-0 kubenswrapper[31830]: E0319 12:14:23.802752 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-kube-rbac-proxy-config podName:a9d191d1-631d-4091-af8b-382283c18a5a nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.302746409 +0000 UTC m=+2.851707113 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-kube-rbac-proxy-config") pod "node-exporter-lpndz" (UID: "a9d191d1-631d-4091-af8b-382283c18a5a") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.803320 master-0 kubenswrapper[31830]: E0319 12:14:23.802759 31830 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.803320 master-0 kubenswrapper[31830]: E0319 12:14:23.802764 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a9d191d1-631d-4091-af8b-382283c18a5a-metrics-client-ca podName:a9d191d1-631d-4091-af8b-382283c18a5a nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.30275858 +0000 UTC m=+2.851719284 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/a9d191d1-631d-4091-af8b-382283c18a5a-metrics-client-ca") pod "node-exporter-lpndz" (UID: "a9d191d1-631d-4091-af8b-382283c18a5a") : failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.803320 master-0 kubenswrapper[31830]: E0319 12:14:23.802780 31830 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.803320 master-0 kubenswrapper[31830]: E0319 12:14:23.802809 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd40498c-f50a-408c-9a50-5d85ae666124-auth-proxy-config podName:fd40498c-f50a-408c-9a50-5d85ae666124 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.302783901 +0000 UTC m=+2.851744725 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/fd40498c-f50a-408c-9a50-5d85ae666124-auth-proxy-config") pod "machine-approver-5c6485487f-qv29l" (UID: "fd40498c-f50a-408c-9a50-5d85ae666124") : failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.803320 master-0 kubenswrapper[31830]: E0319 12:14:23.802824 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44469a78-9300-4260-89e9-ea939de1357b-control-plane-machine-set-operator-tls podName:44469a78-9300-4260-89e9-ea939de1357b nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.302818772 +0000 UTC m=+2.851779476 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/44469a78-9300-4260-89e9-ea939de1357b-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-6f97756bc8-tql86" (UID: "44469a78-9300-4260-89e9-ea939de1357b") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.803320 master-0 kubenswrapper[31830]: E0319 12:14:23.802849 31830 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.803320 master-0 kubenswrapper[31830]: E0319 12:14:23.802854 31830 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.803320 master-0 kubenswrapper[31830]: E0319 12:14:23.802874 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f236a5ab-b400-46fc-94ee-1fff476d6458-config-volume podName:f236a5ab-b400-46fc-94ee-1fff476d6458 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.302869073 +0000 UTC m=+2.851829777 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f236a5ab-b400-46fc-94ee-1fff476d6458-config-volume") pod "dns-default-zjdkm" (UID: "f236a5ab-b400-46fc-94ee-1fff476d6458") : failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.803320 master-0 kubenswrapper[31830]: E0319 12:14:23.802889 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cf6b6560-1731-4fb1-b3c2-8257002842d6-auth-proxy-config podName:cf6b6560-1731-4fb1-b3c2-8257002842d6 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.302880384 +0000 UTC m=+2.851841198 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/cf6b6560-1731-4fb1-b3c2-8257002842d6-auth-proxy-config") pod "cluster-autoscaler-operator-866dc4744-fblgs" (UID: "cf6b6560-1731-4fb1-b3c2-8257002842d6") : failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.803320 master-0 kubenswrapper[31830]: E0319 12:14:23.802896 31830 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.803320 master-0 kubenswrapper[31830]: E0319 12:14:23.802911 31830 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.803320 master-0 kubenswrapper[31830]: E0319 12:14:23.802921 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-secret-telemeter-client podName:7c80f8d0-ee9b-4a4d-ba92-e241b2552e58 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.302913425 +0000 UTC m=+2.851874139 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-secret-telemeter-client") pod "telemeter-client-6975d7769d-nvxfv" (UID: "7c80f8d0-ee9b-4a4d-ba92-e241b2552e58") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.803320 master-0 kubenswrapper[31830]: E0319 12:14:23.802947 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-audit podName:13503fef-09b2-4dbe-9537-a5b361e7b591 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.302936485 +0000 UTC m=+2.851897329 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-audit") pod "apiserver-897cc986b-vpg2l" (UID: "13503fef-09b2-4dbe-9537-a5b361e7b591") : failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.804055 master-0 kubenswrapper[31830]: E0319 12:14:23.804045 31830 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.804087 master-0 kubenswrapper[31830]: E0319 12:14:23.804074 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b2ecb08-a0f9-4127-967c-7087dea4c0f6-config podName:7b2ecb08-a0f9-4127-967c-7087dea4c0f6 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.30406627 +0000 UTC m=+2.853026974 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/7b2ecb08-a0f9-4127-967c-7087dea4c0f6-config") pod "machine-api-operator-6fbb6cf6f9-75w5c" (UID: "7b2ecb08-a0f9-4127-967c-7087dea4c0f6") : failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.804128 master-0 kubenswrapper[31830]: E0319 12:14:23.804090 31830 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.804128 master-0 kubenswrapper[31830]: E0319 12:14:23.804115 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-secret-telemeter-client-kube-rbac-proxy-config podName:7c80f8d0-ee9b-4a4d-ba92-e241b2552e58 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.304106411 +0000 UTC m=+2.853067115 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-6975d7769d-nvxfv" (UID: "7c80f8d0-ee9b-4a4d-ba92-e241b2552e58") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.804194 master-0 kubenswrapper[31830]: E0319 12:14:23.804136 31830 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.804194 master-0 kubenswrapper[31830]: E0319 12:14:23.804158 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee3529ac-6135-438b-9334-40c63c1fbd3d-images podName:ee3529ac-6135-438b-9334-40c63c1fbd3d nodeName:}" failed. 
No retries permitted until 2026-03-19 12:14:24.304152753 +0000 UTC m=+2.853113457 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/ee3529ac-6135-438b-9334-40c63c1fbd3d-images") pod "cluster-cloud-controller-manager-operator-7dff898856-84gh4" (UID: "ee3529ac-6135-438b-9334-40c63c1fbd3d") : failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.804194 master-0 kubenswrapper[31830]: E0319 12:14:23.804177 31830 configmap.go:193] Couldn't get configMap openshift-apiserver/config: failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.804194 master-0 kubenswrapper[31830]: E0319 12:14:23.804198 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-config podName:13503fef-09b2-4dbe-9537-a5b361e7b591 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.304192594 +0000 UTC m=+2.853153298 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-config") pod "apiserver-897cc986b-vpg2l" (UID: "13503fef-09b2-4dbe-9537-a5b361e7b591") : failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.804323 master-0 kubenswrapper[31830]: E0319 12:14:23.804211 31830 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.804323 master-0 kubenswrapper[31830]: E0319 12:14:23.804233 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd40498c-f50a-408c-9a50-5d85ae666124-machine-approver-tls podName:fd40498c-f50a-408c-9a50-5d85ae666124 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.304227815 +0000 UTC m=+2.853188519 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/fd40498c-f50a-408c-9a50-5d85ae666124-machine-approver-tls") pod "machine-approver-5c6485487f-qv29l" (UID: "fd40498c-f50a-408c-9a50-5d85ae666124") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.804323 master-0 kubenswrapper[31830]: E0319 12:14:23.804245 31830 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.804323 master-0 kubenswrapper[31830]: E0319 12:14:23.804268 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de39c80c-acfa-4bc1-a844-95b170169b44-openshift-state-metrics-tls podName:de39c80c-acfa-4bc1-a844-95b170169b44 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.304262706 +0000 UTC m=+2.853223410 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/de39c80c-acfa-4bc1-a844-95b170169b44-openshift-state-metrics-tls") pod "openshift-state-metrics-5dc6c74576-k464h" (UID: "de39c80c-acfa-4bc1-a844-95b170169b44") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.804323 master-0 kubenswrapper[31830]: E0319 12:14:23.804281 31830 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.804323 master-0 kubenswrapper[31830]: E0319 12:14:23.804303 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/667757ee-2670-4019-ad93-156521d3c2e7-samples-operator-tls podName:667757ee-2670-4019-ad93-156521d3c2e7 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.304297907 +0000 UTC m=+2.853258611 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/667757ee-2670-4019-ad93-156521d3c2e7-samples-operator-tls") pod "cluster-samples-operator-85f7577d78-cx8l9" (UID: "667757ee-2670-4019-ad93-156521d3c2e7") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.804323 master-0 kubenswrapper[31830]: E0319 12:14:23.804316 31830 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.804510 master-0 kubenswrapper[31830]: E0319 12:14:23.804337 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86884445-e29b-492b-8810-b63b938b9170-prometheus-operator-tls podName:86884445-e29b-492b-8810-b63b938b9170 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.304332479 +0000 UTC m=+2.853293183 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/86884445-e29b-492b-8810-b63b938b9170-prometheus-operator-tls") pod "prometheus-operator-6c8df6d4b-qsrjj" (UID: "86884445-e29b-492b-8810-b63b938b9170") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.804510 master-0 kubenswrapper[31830]: E0319 12:14:23.804349 31830 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.804510 master-0 kubenswrapper[31830]: E0319 12:14:23.804371 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/979ba8cc-5a7b-4188-bf9e-c22d810888e9-etcd-client podName:979ba8cc-5a7b-4188-bf9e-c22d810888e9 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.3043664 +0000 UTC m=+2.853327104 (durationBeforeRetry 500ms). 
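[Editor's note: the repeated `Operation for "{volumeName:... podName:... nodeName:}" failed. No retries permitted until <timestamp>` lines come from nestedpendingoperations.go, the kubelet bookkeeping that keys in-flight volume operations by volume and pod and refuses a new attempt for a key until its backoff window has expired. The sketch below is a rough per-key gate in that spirit; opKey, retryGate, and the 500ms-doubling policy are assumptions for illustration, not kubelet code.]

```go
// Minimal sketch of a per-{volume,pod} retry gate: a failed operation records
// a deadline, and tryStart refuses new attempts until that deadline passes.
package main

import (
	"fmt"
	"sync"
	"time"
)

type opKey struct{ volumeName, podName string }

type retryGate struct {
	mu   sync.Mutex
	next map[opKey]time.Time     // earliest permitted next attempt
	wait map[opKey]time.Duration // current backoff per key
}

func newRetryGate() *retryGate {
	return &retryGate{next: map[opKey]time.Time{}, wait: map[opKey]time.Duration{}}
}

// tryStart reports whether an operation for key may run now; on refusal it
// returns the "no retries permitted until" deadline.
func (g *retryGate) tryStart(key opKey) (bool, time.Time) {
	g.mu.Lock()
	defer g.mu.Unlock()
	if t, ok := g.next[key]; ok && time.Now().Before(t) {
		return false, t
	}
	return true, time.Time{}
}

// markFailed records a failure and doubles the backoff (500ms base), like the
// durationBeforeRetry values in the log.
func (g *retryGate) markFailed(key opKey) {
	g.mu.Lock()
	defer g.mu.Unlock()
	d := g.wait[key]
	if d == 0 {
		d = 500 * time.Millisecond
	} else {
		d *= 2
	}
	g.wait[key] = d
	g.next[key] = time.Now().Add(d)
}

func main() {
	g := newRetryGate()
	k := opKey{"kubernetes.io/secret/...-etcd-client", "13503fef-09b2-4dbe-9537-a5b361e7b591"}
	g.markFailed(k)
	if ok, until := g.tryStart(k); !ok {
		fmt.Printf("no retries permitted until %s\n", until.Format(time.RFC3339Nano))
	}
}
```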
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/979ba8cc-5a7b-4188-bf9e-c22d810888e9-etcd-client") pod "apiserver-fdc5db968-8zh6r" (UID: "979ba8cc-5a7b-4188-bf9e-c22d810888e9") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.804510 master-0 kubenswrapper[31830]: E0319 12:14:23.804392 31830 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.804510 master-0 kubenswrapper[31830]: E0319 12:14:23.804412 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bb1000ab-4419-43ce-b1b7-8f43413e017f-kube-state-metrics-custom-resource-state-configmap podName:bb1000ab-4419-43ce-b1b7-8f43413e017f nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.304406201 +0000 UTC m=+2.853366905 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/bb1000ab-4419-43ce-b1b7-8f43413e017f-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7bbc969446-bnf7q" (UID: "bb1000ab-4419-43ce-b1b7-8f43413e017f") : failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.804510 master-0 kubenswrapper[31830]: E0319 12:14:23.804448 31830 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.804510 master-0 kubenswrapper[31830]: E0319 12:14:23.804472 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/de39c80c-acfa-4bc1-a844-95b170169b44-metrics-client-ca podName:de39c80c-acfa-4bc1-a844-95b170169b44 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.304465483 +0000 UTC m=+2.853426187 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/de39c80c-acfa-4bc1-a844-95b170169b44-metrics-client-ca") pod "openshift-state-metrics-5dc6c74576-k464h" (UID: "de39c80c-acfa-4bc1-a844-95b170169b44") : failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.804510 master-0 kubenswrapper[31830]: E0319 12:14:23.804474 31830 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.804510 master-0 kubenswrapper[31830]: E0319 12:14:23.804490 31830 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.804510 master-0 kubenswrapper[31830]: E0319 12:14:23.804512 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-tls podName:a9d191d1-631d-4091-af8b-382283c18a5a nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.304505804 +0000 UTC m=+2.853466508 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-tls") pod "node-exporter-lpndz" (UID: "a9d191d1-631d-4091-af8b-382283c18a5a") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.804890 master-0 kubenswrapper[31830]: E0319 12:14:23.804531 31830 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.804890 master-0 kubenswrapper[31830]: E0319 12:14:23.804533 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee3529ac-6135-438b-9334-40c63c1fbd3d-cloud-controller-manager-operator-tls podName:ee3529ac-6135-438b-9334-40c63c1fbd3d nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.304520225 +0000 UTC m=+2.853481039 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/ee3529ac-6135-438b-9334-40c63c1fbd3d-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-7dff898856-84gh4" (UID: "ee3529ac-6135-438b-9334-40c63c1fbd3d") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.804890 master-0 kubenswrapper[31830]: E0319 12:14:23.804551 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f236a5ab-b400-46fc-94ee-1fff476d6458-metrics-tls podName:f236a5ab-b400-46fc-94ee-1fff476d6458 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.304546315 +0000 UTC m=+2.853507019 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f236a5ab-b400-46fc-94ee-1fff476d6458-metrics-tls") pod "dns-default-zjdkm" (UID: "f236a5ab-b400-46fc-94ee-1fff476d6458") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.804890 master-0 kubenswrapper[31830]: E0319 12:14:23.804566 31830 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.804890 master-0 kubenswrapper[31830]: E0319 12:14:23.804579 31830 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.804890 master-0 kubenswrapper[31830]: E0319 12:14:23.804615 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e25d4ed-4ad0-4706-ad25-7822c9a1d07e-node-bootstrap-token podName:0e25d4ed-4ad0-4706-ad25-7822c9a1d07e nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.304604347 +0000 UTC m=+2.853565131 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/0e25d4ed-4ad0-4706-ad25-7822c9a1d07e-node-bootstrap-token") pod "machine-config-server-g7mqg" (UID: "0e25d4ed-4ad0-4706-ad25-7822c9a1d07e") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.804890 master-0 kubenswrapper[31830]: E0319 12:14:23.804640 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee3529ac-6135-438b-9334-40c63c1fbd3d-auth-proxy-config podName:ee3529ac-6135-438b-9334-40c63c1fbd3d nodeName:}" failed. 
No retries permitted until 2026-03-19 12:14:24.304626058 +0000 UTC m=+2.853586862 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ee3529ac-6135-438b-9334-40c63c1fbd3d-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-7dff898856-84gh4" (UID: "ee3529ac-6135-438b-9334-40c63c1fbd3d") : failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.805696 31830 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.805774 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ad327a59-7879-4215-bb95-3f2be64cb97f-cco-trusted-ca podName:ad327a59-7879-4215-bb95-3f2be64cb97f nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.305728593 +0000 UTC m=+2.854689387 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/ad327a59-7879-4215-bb95-3f2be64cb97f-cco-trusted-ca") pod "cloud-credential-operator-744f9dbf77-nr2k4" (UID: "ad327a59-7879-4215-bb95-3f2be64cb97f") : failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.805803 31830 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.805823 31830 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.805845 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd40498c-f50a-408c-9a50-5d85ae666124-config podName:fd40498c-f50a-408c-9a50-5d85ae666124 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.305835616 +0000 UTC m=+2.854796320 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/fd40498c-f50a-408c-9a50-5d85ae666124-config") pod "machine-approver-5c6485487f-qv29l" (UID: "fd40498c-f50a-408c-9a50-5d85ae666124") : failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.805853 31830 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.805862 31830 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.805880 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-image-import-ca podName:13503fef-09b2-4dbe-9537-a5b361e7b591 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.305864717 +0000 UTC m=+2.854825521 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-image-import-ca") pod "apiserver-897cc986b-vpg2l" (UID: "13503fef-09b2-4dbe-9537-a5b361e7b591") : failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.805892 31830 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.805892 31830 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.805901 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86884445-e29b-492b-8810-b63b938b9170-prometheus-operator-kube-rbac-proxy-config podName:86884445-e29b-492b-8810-b63b938b9170 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.305890718 +0000 UTC m=+2.854851552 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/86884445-e29b-492b-8810-b63b938b9170-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-6c8df6d4b-qsrjj" (UID: "86884445-e29b-492b-8810-b63b938b9170") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.805916 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/979ba8cc-5a7b-4188-bf9e-c22d810888e9-serving-cert podName:979ba8cc-5a7b-4188-bf9e-c22d810888e9 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.305909828 +0000 UTC m=+2.854870532 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/979ba8cc-5a7b-4188-bf9e-c22d810888e9-serving-cert") pod "apiserver-fdc5db968-8zh6r" (UID: "979ba8cc-5a7b-4188-bf9e-c22d810888e9") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.805917 31830 secret.go:189] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.805932 31830 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.805869 31830 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.805932 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/86884445-e29b-492b-8810-b63b938b9170-metrics-client-ca podName:86884445-e29b-492b-8810-b63b938b9170 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.305924229 +0000 UTC m=+2.854884933 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/86884445-e29b-492b-8810-b63b938b9170-metrics-client-ca") pod "prometheus-operator-6c8df6d4b-qsrjj" (UID: "86884445-e29b-492b-8810-b63b938b9170") : failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.805830 31830 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.805960 31830 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.805926 31830 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.805966 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-telemeter-trusted-ca-bundle podName:7c80f8d0-ee9b-4a4d-ba92-e241b2552e58 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.3059588 +0000 UTC m=+2.854919624 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-telemeter-trusted-ca-bundle") pod "telemeter-client-6975d7769d-nvxfv" (UID: "7c80f8d0-ee9b-4a4d-ba92-e241b2552e58") : failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.805995 31830 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.805950 31830 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.806011 31830 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.806020 31830 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.805998 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13503fef-09b2-4dbe-9537-a5b361e7b591-serving-cert podName:13503fef-09b2-4dbe-9537-a5b361e7b591 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.305990681 +0000 UTC m=+2.854951385 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/13503fef-09b2-4dbe-9537-a5b361e7b591-serving-cert") pod "apiserver-897cc986b-vpg2l" (UID: "13503fef-09b2-4dbe-9537-a5b361e7b591") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.806035 31830 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.806049 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e25d4ed-4ad0-4706-ad25-7822c9a1d07e-certs podName:0e25d4ed-4ad0-4706-ad25-7822c9a1d07e nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.306037012 +0000 UTC m=+2.854997716 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/0e25d4ed-4ad0-4706-ad25-7822c9a1d07e-certs") pod "machine-config-server-g7mqg" (UID: "0e25d4ed-4ad0-4706-ad25-7822c9a1d07e") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.806063 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6db3fcbe-0dbf-464f-944b-62427173c8d3-configmap-kubelet-serving-ca-bundle podName:6db3fcbe-0dbf-464f-944b-62427173c8d3 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.306054733 +0000 UTC m=+2.855015437 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/6db3fcbe-0dbf-464f-944b-62427173c8d3-configmap-kubelet-serving-ca-bundle") pod "metrics-server-86889676f6-phlgd" (UID: "6db3fcbe-0dbf-464f-944b-62427173c8d3") : failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.806065 31830 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.806080 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad327a59-7879-4215-bb95-3f2be64cb97f-cloud-credential-operator-serving-cert podName:ad327a59-7879-4215-bb95-3f2be64cb97f nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.306072384 +0000 UTC m=+2.855033078 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/ad327a59-7879-4215-bb95-3f2be64cb97f-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-744f9dbf77-nr2k4" (UID: "ad327a59-7879-4215-bb95-3f2be64cb97f") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.806082 31830 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.806094 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-secret-metrics-server-tls podName:6db3fcbe-0dbf-464f-944b-62427173c8d3 nodeName:}" failed. 
No retries permitted until 2026-03-19 12:14:24.306088404 +0000 UTC m=+2.855049098 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-secret-metrics-server-tls") pod "metrics-server-86889676f6-phlgd" (UID: "6db3fcbe-0dbf-464f-944b-62427173c8d3") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.806107 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-metrics-client-ca podName:7c80f8d0-ee9b-4a4d-ba92-e241b2552e58 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.306102364 +0000 UTC m=+2.855063068 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-metrics-client-ca") pod "telemeter-client-6975d7769d-nvxfv" (UID: "7c80f8d0-ee9b-4a4d-ba92-e241b2552e58") : failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.806123 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4264e82c-387f-4aa6-9ef6-b7beb61e098c-serving-cert podName:4264e82c-387f-4aa6-9ef6-b7beb61e098c nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.306115935 +0000 UTC m=+2.855076639 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4264e82c-387f-4aa6-9ef6-b7beb61e098c-serving-cert") pod "insights-operator-68bf6ff9d6-djdmh" (UID: "4264e82c-387f-4aa6-9ef6-b7beb61e098c") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.806104 master-0 kubenswrapper[31830]: E0319 12:14:23.806137 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6db3fcbe-0dbf-464f-944b-62427173c8d3-metrics-server-audit-profiles podName:6db3fcbe-0dbf-464f-944b-62427173c8d3 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.306131715 +0000 UTC m=+2.855092419 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/6db3fcbe-0dbf-464f-944b-62427173c8d3-metrics-server-audit-profiles") pod "metrics-server-86889676f6-phlgd" (UID: "6db3fcbe-0dbf-464f-944b-62427173c8d3") : failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.807335 master-0 kubenswrapper[31830]: E0319 12:14:23.806186 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36e5fec9-7fb5-4460-8bb4-4b9e36fae978-cert podName:36e5fec9-7fb5-4460-8bb4-4b9e36fae978 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.306145506 +0000 UTC m=+2.855106210 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/36e5fec9-7fb5-4460-8bb4-4b9e36fae978-cert") pod "ingress-canary-w8jqs" (UID: "36e5fec9-7fb5-4460-8bb4-4b9e36fae978") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.807335 master-0 kubenswrapper[31830]: E0319 12:14:23.806198 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4264e82c-387f-4aa6-9ef6-b7beb61e098c-trusted-ca-bundle podName:4264e82c-387f-4aa6-9ef6-b7beb61e098c nodeName:}" failed. 
No retries permitted until 2026-03-19 12:14:24.306193366 +0000 UTC m=+2.855154070 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/4264e82c-387f-4aa6-9ef6-b7beb61e098c-trusted-ca-bundle") pod "insights-operator-68bf6ff9d6-djdmh" (UID: "4264e82c-387f-4aa6-9ef6-b7beb61e098c") : failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:23.807335 master-0 kubenswrapper[31830]: E0319 12:14:23.806212 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb1000ab-4419-43ce-b1b7-8f43413e017f-kube-state-metrics-tls podName:bb1000ab-4419-43ce-b1b7-8f43413e017f nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.306205487 +0000 UTC m=+2.855166191 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/bb1000ab-4419-43ce-b1b7-8f43413e017f-kube-state-metrics-tls") pod "kube-state-metrics-7bbc969446-bnf7q" (UID: "bb1000ab-4419-43ce-b1b7-8f43413e017f") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.807335 master-0 kubenswrapper[31830]: E0319 12:14:23.806230 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de39c80c-acfa-4bc1-a844-95b170169b44-openshift-state-metrics-kube-rbac-proxy-config podName:de39c80c-acfa-4bc1-a844-95b170169b44 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.306221777 +0000 UTC m=+2.855182471 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/de39c80c-acfa-4bc1-a844-95b170169b44-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-5dc6c74576-k464h" (UID: "de39c80c-acfa-4bc1-a844-95b170169b44") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.807335 master-0 kubenswrapper[31830]: E0319 12:14:23.806245 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf6b6560-1731-4fb1-b3c2-8257002842d6-cert podName:cf6b6560-1731-4fb1-b3c2-8257002842d6 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:24.306237908 +0000 UTC m=+2.855198612 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cf6b6560-1731-4fb1-b3c2-8257002842d6-cert") pod "cluster-autoscaler-operator-866dc4744-fblgs" (UID: "cf6b6560-1731-4fb1-b3c2-8257002842d6") : failed to sync secret cache: timed out waiting for the condition Mar 19 12:14:23.808986 master-0 kubenswrapper[31830]: I0319 12:14:23.808726 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 19 12:14:23.827967 master-0 kubenswrapper[31830]: I0319 12:14:23.827917 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 19 12:14:23.847364 master-0 kubenswrapper[31830]: I0319 12:14:23.847205 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 19 12:14:23.868748 master-0 kubenswrapper[31830]: I0319 12:14:23.868701 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 19 12:14:23.887924 master-0 kubenswrapper[31830]: I0319 12:14:23.887882 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 19 12:14:23.907489 master-0 kubenswrapper[31830]: I0319 12:14:23.907447 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 19 12:14:23.927240 master-0 kubenswrapper[31830]: I0319 12:14:23.927187 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 19 12:14:23.947602 master-0 kubenswrapper[31830]: I0319 12:14:23.947561 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 19 12:14:23.989752 master-0 kubenswrapper[31830]: I0319 12:14:23.989712 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 19 12:14:23.990002 master-0 kubenswrapper[31830]: I0319 12:14:23.989914 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 19 12:14:23.990701 master-0 kubenswrapper[31830]: I0319 12:14:23.990667 31830 util.go:48] "No ready sandbox for pod can be found. 
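[Editor's note: the `Caches populated for *v1.Secret from object-"<namespace>"/"<name>"` reflector messages appearing above and continuing below mark each per-object informer finishing its initial list/watch; from this point the queued mounts begin to succeed (the "MountVolume.SetUp succeeded" entries that follow). The standard client-go pattern behind these messages is to start an informer and block on cache sync before reading from it, sketched here under the assumption of an already-constructed clientset; the function name and namespace choice are illustrative.]

```go
// Sketch of the client-go cache-sync pattern behind the reflector messages:
// start an informer, then block until its cache has synced before reading.
package cachesync

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

func waitForSecretCache(cs kubernetes.Interface, stopCh <-chan struct{}) error {
	factory := informers.NewSharedInformerFactoryWithOptions(
		cs, 10*time.Minute, informers.WithNamespace("openshift-apiserver"))
	informer := factory.Core().V1().Secrets().Informer()

	factory.Start(stopCh)

	// The equivalent of waiting out "failed to sync secret cache": block until
	// the initial list has been delivered ("Caches populated ...").
	if !cache.WaitForCacheSync(stopCh, informer.HasSynced) {
		return fmt.Errorf("failed to sync secret cache: timed out waiting for the condition")
	}
	fmt.Println("caches populated for *v1.Secret")
	return nil
}
```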
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 19 12:14:24.013648 master-0 kubenswrapper[31830]: I0319 12:14:24.013415 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 19 12:14:24.029001 master-0 kubenswrapper[31830]: I0319 12:14:24.028769 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 19 12:14:24.047254 master-0 kubenswrapper[31830]: I0319 12:14:24.047026 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 19 12:14:24.068443 master-0 kubenswrapper[31830]: I0319 12:14:24.068007 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 19 12:14:24.095061 master-0 kubenswrapper[31830]: I0319 12:14:24.094991 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 19 12:14:24.108329 master-0 kubenswrapper[31830]: I0319 12:14:24.108277 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 19 12:14:24.127336 master-0 kubenswrapper[31830]: I0319 12:14:24.127273 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-h5t8s" Mar 19 12:14:24.149853 master-0 kubenswrapper[31830]: I0319 12:14:24.147780 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 19 12:14:24.168678 master-0 kubenswrapper[31830]: I0319 12:14:24.168628 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 19 12:14:24.188144 master-0 kubenswrapper[31830]: I0319 12:14:24.188075 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-lcm2r" Mar 19 12:14:24.208311 master-0 kubenswrapper[31830]: I0319 12:14:24.208262 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-v8nqn" Mar 19 12:14:24.228305 master-0 kubenswrapper[31830]: I0319 12:14:24.228245 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-djzws" Mar 19 12:14:24.250132 master-0 kubenswrapper[31830]: I0319 12:14:24.250069 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-67sx5" Mar 19 12:14:24.269130 master-0 kubenswrapper[31830]: I0319 12:14:24.269009 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-gfs2v" Mar 19 12:14:24.290070 master-0 kubenswrapper[31830]: I0319 12:14:24.290022 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 19 12:14:24.307951 master-0 kubenswrapper[31830]: I0319 12:14:24.307870 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 19 12:14:24.328383 master-0 kubenswrapper[31830]: I0319 12:14:24.328326 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 19 12:14:24.347807 master-0 
kubenswrapper[31830]: I0319 12:14:24.347743 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 19 12:14:24.357639 master-0 kubenswrapper[31830]: I0319 12:14:24.357583 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-trusted-ca-bundle\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 12:14:24.357748 master-0 kubenswrapper[31830]: I0319 12:14:24.357646 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7b2ecb08-a0f9-4127-967c-7087dea4c0f6-images\") pod \"machine-api-operator-6fbb6cf6f9-75w5c\" (UID: \"7b2ecb08-a0f9-4127-967c-7087dea4c0f6\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-75w5c" Mar 19 12:14:24.358021 master-0 kubenswrapper[31830]: I0319 12:14:24.357978 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4264e82c-387f-4aa6-9ef6-b7beb61e098c-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-djdmh\" (UID: \"4264e82c-387f-4aa6-9ef6-b7beb61e098c\") " pod="openshift-insights/insights-operator-68bf6ff9d6-djdmh" Mar 19 12:14:24.358085 master-0 kubenswrapper[31830]: I0319 12:14:24.358060 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bb1000ab-4419-43ce-b1b7-8f43413e017f-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" Mar 19 12:14:24.358085 master-0 kubenswrapper[31830]: I0319 12:14:24.358068 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-trusted-ca-bundle\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 12:14:24.358179 master-0 kubenswrapper[31830]: I0319 12:14:24.358160 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/44469a78-9300-4260-89e9-ea939de1357b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-tql86\" (UID: \"44469a78-9300-4260-89e9-ea939de1357b\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tql86" Mar 19 12:14:24.358213 master-0 kubenswrapper[31830]: I0319 12:14:24.358204 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a9d191d1-631d-4091-af8b-382283c18a5a-metrics-client-ca\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 12:14:24.358255 master-0 kubenswrapper[31830]: I0319 12:14:24.358239 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/be4349fa-5c67-4135-80a7-b8a694553662-webhook-cert\") pod \"packageserver-77d68bd5f8-w9hmb\" (UID: 
\"be4349fa-5c67-4135-80a7-b8a694553662\") " pod="openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb" Mar 19 12:14:24.358303 master-0 kubenswrapper[31830]: I0319 12:14:24.358289 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-config\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 12:14:24.358396 master-0 kubenswrapper[31830]: I0319 12:14:24.358374 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0e25d4ed-4ad0-4706-ad25-7822c9a1d07e-node-bootstrap-token\") pod \"machine-config-server-g7mqg\" (UID: \"0e25d4ed-4ad0-4706-ad25-7822c9a1d07e\") " pod="openshift-machine-config-operator/machine-config-server-g7mqg" Mar 19 12:14:24.358437 master-0 kubenswrapper[31830]: I0319 12:14:24.358416 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/44469a78-9300-4260-89e9-ea939de1357b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-tql86\" (UID: \"44469a78-9300-4260-89e9-ea939de1357b\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tql86" Mar 19 12:14:24.358519 master-0 kubenswrapper[31830]: I0319 12:14:24.358490 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/bb1000ab-4419-43ce-b1b7-8f43413e017f-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" Mar 19 12:14:24.358601 master-0 kubenswrapper[31830]: I0319 12:14:24.358584 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/13503fef-09b2-4dbe-9537-a5b361e7b591-encryption-config\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 12:14:24.358651 master-0 kubenswrapper[31830]: I0319 12:14:24.358633 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-serving-certs-ca-bundle\") pod \"telemeter-client-6975d7769d-nvxfv\" (UID: \"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58\") " pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv" Mar 19 12:14:24.358651 master-0 kubenswrapper[31830]: I0319 12:14:24.358639 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-config\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 12:14:24.358711 master-0 kubenswrapper[31830]: I0319 12:14:24.358660 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cf6b6560-1731-4fb1-b3c2-8257002842d6-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-fblgs\" (UID: \"cf6b6560-1731-4fb1-b3c2-8257002842d6\") " 
pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-fblgs" Mar 19 12:14:24.358711 master-0 kubenswrapper[31830]: I0319 12:14:24.358658 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/be4349fa-5c67-4135-80a7-b8a694553662-webhook-cert\") pod \"packageserver-77d68bd5f8-w9hmb\" (UID: \"be4349fa-5c67-4135-80a7-b8a694553662\") " pod="openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb" Mar 19 12:14:24.358779 master-0 kubenswrapper[31830]: I0319 12:14:24.358758 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bb1000ab-4419-43ce-b1b7-8f43413e017f-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" Mar 19 12:14:24.358851 master-0 kubenswrapper[31830]: I0319 12:14:24.358830 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/13503fef-09b2-4dbe-9537-a5b361e7b591-encryption-config\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 12:14:24.358892 master-0 kubenswrapper[31830]: I0319 12:14:24.358837 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-federate-client-tls\") pod \"telemeter-client-6975d7769d-nvxfv\" (UID: \"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58\") " pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv" Mar 19 12:14:24.358892 master-0 kubenswrapper[31830]: I0319 12:14:24.358866 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0e25d4ed-4ad0-4706-ad25-7822c9a1d07e-node-bootstrap-token\") pod \"machine-config-server-g7mqg\" (UID: \"0e25d4ed-4ad0-4706-ad25-7822c9a1d07e\") " pod="openshift-machine-config-operator/machine-config-server-g7mqg" Mar 19 12:14:24.358996 master-0 kubenswrapper[31830]: I0319 12:14:24.358975 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1c2a33ba-76d0-4b81-a41d-9da16fd46209-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-dqgd9\" (UID: \"1c2a33ba-76d0-4b81-a41d-9da16fd46209\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-dqgd9" Mar 19 12:14:24.359039 master-0 kubenswrapper[31830]: I0319 12:14:24.359012 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0e25d4ed-4ad0-4706-ad25-7822c9a1d07e-certs\") pod \"machine-config-server-g7mqg\" (UID: \"0e25d4ed-4ad0-4706-ad25-7822c9a1d07e\") " pod="openshift-machine-config-operator/machine-config-server-g7mqg" Mar 19 12:14:24.359111 master-0 kubenswrapper[31830]: I0319 12:14:24.359088 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee3529ac-6135-438b-9334-40c63c1fbd3d-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-84gh4\" (UID: \"ee3529ac-6135-438b-9334-40c63c1fbd3d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" Mar 19 12:14:24.359148 
master-0 kubenswrapper[31830]: I0319 12:14:24.359120 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/86884445-e29b-492b-8810-b63b938b9170-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-qsrjj\" (UID: \"86884445-e29b-492b-8810-b63b938b9170\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-qsrjj" Mar 19 12:14:24.359231 master-0 kubenswrapper[31830]: I0319 12:14:24.359212 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b2ecb08-a0f9-4127-967c-7087dea4c0f6-config\") pod \"machine-api-operator-6fbb6cf6f9-75w5c\" (UID: \"7b2ecb08-a0f9-4127-967c-7087dea4c0f6\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-75w5c" Mar 19 12:14:24.359276 master-0 kubenswrapper[31830]: I0319 12:14:24.359230 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0e25d4ed-4ad0-4706-ad25-7822c9a1d07e-certs\") pod \"machine-config-server-g7mqg\" (UID: \"0e25d4ed-4ad0-4706-ad25-7822c9a1d07e\") " pod="openshift-machine-config-operator/machine-config-server-g7mqg" Mar 19 12:14:24.359276 master-0 kubenswrapper[31830]: I0319 12:14:24.359241 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/667757ee-2670-4019-ad93-156521d3c2e7-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-cx8l9\" (UID: \"667757ee-2670-4019-ad93-156521d3c2e7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-cx8l9" Mar 19 12:14:24.359359 master-0 kubenswrapper[31830]: I0319 12:14:24.359314 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/be4349fa-5c67-4135-80a7-b8a694553662-apiservice-cert\") pod \"packageserver-77d68bd5f8-w9hmb\" (UID: \"be4349fa-5c67-4135-80a7-b8a694553662\") " pod="openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb" Mar 19 12:14:24.359397 master-0 kubenswrapper[31830]: I0319 12:14:24.359362 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/86884445-e29b-492b-8810-b63b938b9170-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-qsrjj\" (UID: \"86884445-e29b-492b-8810-b63b938b9170\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-qsrjj" Mar 19 12:14:24.359397 master-0 kubenswrapper[31830]: I0319 12:14:24.359392 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/13503fef-09b2-4dbe-9537-a5b361e7b591-etcd-client\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 12:14:24.359468 master-0 kubenswrapper[31830]: I0319 12:14:24.359413 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-etcd-serving-ca\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 12:14:24.359468 master-0 kubenswrapper[31830]: I0319 12:14:24.359445 31830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13503fef-09b2-4dbe-9537-a5b361e7b591-serving-cert\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 12:14:24.359544 master-0 kubenswrapper[31830]: I0319 12:14:24.359479 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6db3fcbe-0dbf-464f-944b-62427173c8d3-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd" Mar 19 12:14:24.359544 master-0 kubenswrapper[31830]: I0319 12:14:24.359499 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/311b8bab-6cee-406d-8e0e-5b18a743d5fa-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-rdpvm\" (UID: \"311b8bab-6cee-406d-8e0e-5b18a743d5fa\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rdpvm" Mar 19 12:14:24.359544 master-0 kubenswrapper[31830]: I0319 12:14:24.359516 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/667757ee-2670-4019-ad93-156521d3c2e7-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-cx8l9\" (UID: \"667757ee-2670-4019-ad93-156521d3c2e7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-cx8l9" Mar 19 12:14:24.359655 master-0 kubenswrapper[31830]: I0319 12:14:24.359560 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-client-ca-bundle\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd" Mar 19 12:14:24.359694 master-0 kubenswrapper[31830]: I0319 12:14:24.359679 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-tls\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 12:14:24.359732 master-0 kubenswrapper[31830]: I0319 12:14:24.359709 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f236a5ab-b400-46fc-94ee-1fff476d6458-metrics-tls\") pod \"dns-default-zjdkm\" (UID: \"f236a5ab-b400-46fc-94ee-1fff476d6458\") " pod="openshift-dns/dns-default-zjdkm" Mar 19 12:14:24.359732 master-0 kubenswrapper[31830]: I0319 12:14:24.359710 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/be4349fa-5c67-4135-80a7-b8a694553662-apiservice-cert\") pod \"packageserver-77d68bd5f8-w9hmb\" (UID: \"be4349fa-5c67-4135-80a7-b8a694553662\") " pod="openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb" Mar 19 12:14:24.359820 master-0 kubenswrapper[31830]: I0319 12:14:24.359787 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: 
\"kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-secret-telemeter-client\") pod \"telemeter-client-6975d7769d-nvxfv\" (UID: \"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58\") " pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv" Mar 19 12:14:24.359871 master-0 kubenswrapper[31830]: I0319 12:14:24.359835 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6975d7769d-nvxfv\" (UID: \"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58\") " pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv" Mar 19 12:14:24.359871 master-0 kubenswrapper[31830]: I0319 12:14:24.359861 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ee3529ac-6135-438b-9334-40c63c1fbd3d-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-84gh4\" (UID: \"ee3529ac-6135-438b-9334-40c63c1fbd3d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" Mar 19 12:14:24.359950 master-0 kubenswrapper[31830]: I0319 12:14:24.359891 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fd40498c-f50a-408c-9a50-5d85ae666124-machine-approver-tls\") pod \"machine-approver-5c6485487f-qv29l\" (UID: \"fd40498c-f50a-408c-9a50-5d85ae666124\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-qv29l" Mar 19 12:14:24.359950 master-0 kubenswrapper[31830]: I0319 12:14:24.359908 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/979ba8cc-5a7b-4188-bf9e-c22d810888e9-etcd-client\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 12:14:24.359950 master-0 kubenswrapper[31830]: I0319 12:14:24.359934 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fd40498c-f50a-408c-9a50-5d85ae666124-auth-proxy-config\") pod \"machine-approver-5c6485487f-qv29l\" (UID: \"fd40498c-f50a-408c-9a50-5d85ae666124\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-qv29l" Mar 19 12:14:24.360122 master-0 kubenswrapper[31830]: I0319 12:14:24.359951 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f236a5ab-b400-46fc-94ee-1fff476d6458-config-volume\") pod \"dns-default-zjdkm\" (UID: \"f236a5ab-b400-46fc-94ee-1fff476d6458\") " pod="openshift-dns/dns-default-zjdkm" Mar 19 12:14:24.360122 master-0 kubenswrapper[31830]: I0319 12:14:24.359964 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13503fef-09b2-4dbe-9537-a5b361e7b591-serving-cert\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 12:14:24.360122 master-0 kubenswrapper[31830]: I0319 12:14:24.359970 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/de39c80c-acfa-4bc1-a844-95b170169b44-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-k464h\" (UID: \"de39c80c-acfa-4bc1-a844-95b170169b44\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h" Mar 19 12:14:24.360247 master-0 kubenswrapper[31830]: I0319 12:14:24.360172 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-etcd-serving-ca\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 12:14:24.360247 master-0 kubenswrapper[31830]: I0319 12:14:24.360190 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/311b8bab-6cee-406d-8e0e-5b18a743d5fa-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-rdpvm\" (UID: \"311b8bab-6cee-406d-8e0e-5b18a743d5fa\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rdpvm" Mar 19 12:14:24.360371 master-0 kubenswrapper[31830]: I0319 12:14:24.360340 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-telemeter-client-tls\") pod \"telemeter-client-6975d7769d-nvxfv\" (UID: \"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58\") " pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv" Mar 19 12:14:24.360427 master-0 kubenswrapper[31830]: I0319 12:14:24.360383 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f236a5ab-b400-46fc-94ee-1fff476d6458-config-volume\") pod \"dns-default-zjdkm\" (UID: \"f236a5ab-b400-46fc-94ee-1fff476d6458\") " pod="openshift-dns/dns-default-zjdkm" Mar 19 12:14:24.360427 master-0 kubenswrapper[31830]: I0319 12:14:24.360409 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/86884445-e29b-492b-8810-b63b938b9170-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-qsrjj\" (UID: \"86884445-e29b-492b-8810-b63b938b9170\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-qsrjj" Mar 19 12:14:24.360500 master-0 kubenswrapper[31830]: I0319 12:14:24.360439 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad327a59-7879-4215-bb95-3f2be64cb97f-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-nr2k4\" (UID: \"ad327a59-7879-4215-bb95-3f2be64cb97f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-nr2k4" Mar 19 12:14:24.360500 master-0 kubenswrapper[31830]: I0319 12:14:24.360469 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-audit\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 12:14:24.360500 master-0 kubenswrapper[31830]: I0319 12:14:24.360498 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/36e5fec9-7fb5-4460-8bb4-4b9e36fae978-cert\") pod \"ingress-canary-w8jqs\" (UID: \"36e5fec9-7fb5-4460-8bb4-4b9e36fae978\") " 
pod="openshift-ingress-canary/ingress-canary-w8jqs" Mar 19 12:14:24.360627 master-0 kubenswrapper[31830]: I0319 12:14:24.360528 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6975d7769d-nvxfv\" (UID: \"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58\") " pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv" Mar 19 12:14:24.360627 master-0 kubenswrapper[31830]: I0319 12:14:24.360554 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4264e82c-387f-4aa6-9ef6-b7beb61e098c-serving-cert\") pod \"insights-operator-68bf6ff9d6-djdmh\" (UID: \"4264e82c-387f-4aa6-9ef6-b7beb61e098c\") " pod="openshift-insights/insights-operator-68bf6ff9d6-djdmh" Mar 19 12:14:24.360627 master-0 kubenswrapper[31830]: I0319 12:14:24.360574 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd40498c-f50a-408c-9a50-5d85ae666124-config\") pod \"machine-approver-5c6485487f-qv29l\" (UID: \"fd40498c-f50a-408c-9a50-5d85ae666124\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-qv29l" Mar 19 12:14:24.360627 master-0 kubenswrapper[31830]: I0319 12:14:24.360588 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f236a5ab-b400-46fc-94ee-1fff476d6458-metrics-tls\") pod \"dns-default-zjdkm\" (UID: \"f236a5ab-b400-46fc-94ee-1fff476d6458\") " pod="openshift-dns/dns-default-zjdkm" Mar 19 12:14:24.360627 master-0 kubenswrapper[31830]: I0319 12:14:24.360603 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-secret-metrics-client-certs\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd" Mar 19 12:14:24.360840 master-0 kubenswrapper[31830]: I0319 12:14:24.360635 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-secret-metrics-server-tls\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd" Mar 19 12:14:24.360882 master-0 kubenswrapper[31830]: I0319 12:14:24.360837 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/13503fef-09b2-4dbe-9537-a5b361e7b591-etcd-client\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 12:14:24.360920 master-0 kubenswrapper[31830]: I0319 12:14:24.360904 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-audit\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 12:14:24.361038 master-0 kubenswrapper[31830]: I0319 12:14:24.361005 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/de39c80c-acfa-4bc1-a844-95b170169b44-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-k464h\" (UID: \"de39c80c-acfa-4bc1-a844-95b170169b44\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h" Mar 19 12:14:24.361101 master-0 kubenswrapper[31830]: I0319 12:14:24.361058 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/ad327a59-7879-4215-bb95-3f2be64cb97f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-nr2k4\" (UID: \"ad327a59-7879-4215-bb95-3f2be64cb97f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-nr2k4" Mar 19 12:14:24.361101 master-0 kubenswrapper[31830]: I0319 12:14:24.361099 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/979ba8cc-5a7b-4188-bf9e-c22d810888e9-serving-cert\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 12:14:24.361161 master-0 kubenswrapper[31830]: I0319 12:14:24.361119 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4264e82c-387f-4aa6-9ef6-b7beb61e098c-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-djdmh\" (UID: \"4264e82c-387f-4aa6-9ef6-b7beb61e098c\") " pod="openshift-insights/insights-operator-68bf6ff9d6-djdmh" Mar 19 12:14:24.361195 master-0 kubenswrapper[31830]: I0319 12:14:24.361183 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/979ba8cc-5a7b-4188-bf9e-c22d810888e9-etcd-client\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 12:14:24.361227 master-0 kubenswrapper[31830]: I0319 12:14:24.361202 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/de39c80c-acfa-4bc1-a844-95b170169b44-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-k464h\" (UID: \"de39c80c-acfa-4bc1-a844-95b170169b44\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h" Mar 19 12:14:24.361270 master-0 kubenswrapper[31830]: I0319 12:14:24.361223 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-metrics-client-ca\") pod \"telemeter-client-6975d7769d-nvxfv\" (UID: \"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58\") " pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv" Mar 19 12:14:24.361325 master-0 kubenswrapper[31830]: I0319 12:14:24.361302 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-image-import-ca\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 12:14:24.361372 master-0 kubenswrapper[31830]: I0319 12:14:24.361328 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/bb1000ab-4419-43ce-b1b7-8f43413e017f-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" Mar 19 12:14:24.361372 master-0 kubenswrapper[31830]: I0319 12:14:24.361333 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/979ba8cc-5a7b-4188-bf9e-c22d810888e9-serving-cert\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 12:14:24.361372 master-0 kubenswrapper[31830]: I0319 12:14:24.361349 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/6db3fcbe-0dbf-464f-944b-62427173c8d3-metrics-server-audit-profiles\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd" Mar 19 12:14:24.361477 master-0 kubenswrapper[31830]: I0319 12:14:24.361410 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cf6b6560-1731-4fb1-b3c2-8257002842d6-cert\") pod \"cluster-autoscaler-operator-866dc4744-fblgs\" (UID: \"cf6b6560-1731-4fb1-b3c2-8257002842d6\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-fblgs" Mar 19 12:14:24.361477 master-0 kubenswrapper[31830]: I0319 12:14:24.361441 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee3529ac-6135-438b-9334-40c63c1fbd3d-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-84gh4\" (UID: \"ee3529ac-6135-438b-9334-40c63c1fbd3d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" Mar 19 12:14:24.361477 master-0 kubenswrapper[31830]: I0319 12:14:24.361444 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/13503fef-09b2-4dbe-9537-a5b361e7b591-image-import-ca\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 12:14:24.361477 master-0 kubenswrapper[31830]: I0319 12:14:24.361463 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz" Mar 19 12:14:24.361631 master-0 kubenswrapper[31830]: I0319 12:14:24.361492 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b2ecb08-a0f9-4127-967c-7087dea4c0f6-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-75w5c\" (UID: \"7b2ecb08-a0f9-4127-967c-7087dea4c0f6\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-75w5c" Mar 19 12:14:24.367481 master-0 kubenswrapper[31830]: I0319 12:14:24.367445 31830 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-sklzz" Mar 19 12:14:24.387649 master-0 kubenswrapper[31830]: I0319 12:14:24.387603 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 19 12:14:24.391390 master-0 kubenswrapper[31830]: I0319 12:14:24.391358 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fd40498c-f50a-408c-9a50-5d85ae666124-auth-proxy-config\") pod \"machine-approver-5c6485487f-qv29l\" (UID: \"fd40498c-f50a-408c-9a50-5d85ae666124\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-qv29l" Mar 19 12:14:24.407602 master-0 kubenswrapper[31830]: I0319 12:14:24.407557 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 19 12:14:24.411298 master-0 kubenswrapper[31830]: I0319 12:14:24.411269 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fd40498c-f50a-408c-9a50-5d85ae666124-machine-approver-tls\") pod \"machine-approver-5c6485487f-qv29l\" (UID: \"fd40498c-f50a-408c-9a50-5d85ae666124\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-qv29l" Mar 19 12:14:24.427525 master-0 kubenswrapper[31830]: I0319 12:14:24.427461 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 19 12:14:24.431582 master-0 kubenswrapper[31830]: I0319 12:14:24.431548 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd40498c-f50a-408c-9a50-5d85ae666124-config\") pod \"machine-approver-5c6485487f-qv29l\" (UID: \"fd40498c-f50a-408c-9a50-5d85ae666124\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-qv29l" Mar 19 12:14:24.447304 master-0 kubenswrapper[31830]: I0319 12:14:24.447267 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 19 12:14:24.468152 master-0 kubenswrapper[31830]: I0319 12:14:24.468115 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 19 12:14:24.487933 master-0 kubenswrapper[31830]: I0319 12:14:24.487875 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-mc2cj" Mar 19 12:14:24.524593 master-0 kubenswrapper[31830]: I0319 12:14:24.524478 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpdts\" (UniqueName: \"kubernetes.io/projected/9702fc8c-4fe0-413b-b2d4-db23021d42b8-kube-api-access-tpdts\") pod \"etcd-operator-8544cbcf9c-sc4kz\" (UID: \"9702fc8c-4fe0-413b-b2d4-db23021d42b8\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-sc4kz" Mar 19 12:14:24.539369 master-0 kubenswrapper[31830]: I0319 12:14:24.539322 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b80027fd-7b39-477a-a337-ff9bb08e7eeb-bound-sa-token\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" Mar 19 12:14:24.568722 master-0 kubenswrapper[31830]: I0319 12:14:24.568663 31830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06df1b1b-154e-46f9-aee0-79a137c6c928-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-ptqdh\" (UID: \"06df1b1b-154e-46f9-aee0-79a137c6c928\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-ptqdh" Mar 19 12:14:24.583309 master-0 kubenswrapper[31830]: I0319 12:14:24.583265 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6" Mar 19 12:14:24.603445 master-0 kubenswrapper[31830]: I0319 12:14:24.603392 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mr6d\" (UniqueName: \"kubernetes.io/projected/beb562de-402b-4d9f-b5ed-090b60847a95-kube-api-access-9mr6d\") pod \"package-server-manager-7b95f86987-6j2nj\" (UID: \"beb562de-402b-4d9f-b5ed-090b60847a95\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj" Mar 19 12:14:24.607631 master-0 kubenswrapper[31830]: I0319 12:14:24.607588 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 19 12:14:24.612569 master-0 kubenswrapper[31830]: I0319 12:14:24.612534 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b2ecb08-a0f9-4127-967c-7087dea4c0f6-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-75w5c\" (UID: \"7b2ecb08-a0f9-4127-967c-7087dea4c0f6\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-75w5c" Mar 19 12:14:24.627545 master-0 kubenswrapper[31830]: I0319 12:14:24.627508 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-ww9m4" Mar 19 12:14:24.648266 master-0 kubenswrapper[31830]: I0319 12:14:24.648221 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 19 12:14:24.650612 master-0 kubenswrapper[31830]: I0319 12:14:24.650569 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b2ecb08-a0f9-4127-967c-7087dea4c0f6-config\") pod \"machine-api-operator-6fbb6cf6f9-75w5c\" (UID: \"7b2ecb08-a0f9-4127-967c-7087dea4c0f6\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-75w5c" Mar 19 12:14:24.680837 master-0 kubenswrapper[31830]: I0319 12:14:24.680772 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x252z\" (UniqueName: \"kubernetes.io/projected/aef8e03f-0363-4e13-b7ca-4fa871d77c62-kube-api-access-x252z\") pod \"openshift-config-operator-95bf4f4d-nhvl4\" (UID: \"aef8e03f-0363-4e13-b7ca-4fa871d77c62\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" Mar 19 12:14:24.686665 master-0 kubenswrapper[31830]: I0319 12:14:24.686621 31830 request.go:700] Waited for 2.0071386s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Mar 19 
12:14:24.688067 master-0 kubenswrapper[31830]: I0319 12:14:24.688036 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 19 12:14:24.707666 master-0 kubenswrapper[31830]: I0319 12:14:24.707598 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-cmchf" Mar 19 12:14:24.727677 master-0 kubenswrapper[31830]: I0319 12:14:24.727617 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 19 12:14:24.731251 master-0 kubenswrapper[31830]: I0319 12:14:24.731215 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4264e82c-387f-4aa6-9ef6-b7beb61e098c-serving-cert\") pod \"insights-operator-68bf6ff9d6-djdmh\" (UID: \"4264e82c-387f-4aa6-9ef6-b7beb61e098c\") " pod="openshift-insights/insights-operator-68bf6ff9d6-djdmh" Mar 19 12:14:24.753810 master-0 kubenswrapper[31830]: I0319 12:14:24.753742 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 19 12:14:24.761975 master-0 kubenswrapper[31830]: I0319 12:14:24.761928 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4264e82c-387f-4aa6-9ef6-b7beb61e098c-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-djdmh\" (UID: \"4264e82c-387f-4aa6-9ef6-b7beb61e098c\") " pod="openshift-insights/insights-operator-68bf6ff9d6-djdmh" Mar 19 12:14:24.767296 master-0 kubenswrapper[31830]: I0319 12:14:24.767259 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Mar 19 12:14:24.769433 master-0 kubenswrapper[31830]: I0319 12:14:24.768861 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4264e82c-387f-4aa6-9ef6-b7beb61e098c-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-djdmh\" (UID: \"4264e82c-387f-4aa6-9ef6-b7beb61e098c\") " pod="openshift-insights/insights-operator-68bf6ff9d6-djdmh" Mar 19 12:14:24.787616 master-0 kubenswrapper[31830]: I0319 12:14:24.787484 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 19 12:14:24.808066 master-0 kubenswrapper[31830]: I0319 12:14:24.808013 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 19 12:14:24.811510 master-0 kubenswrapper[31830]: I0319 12:14:24.811466 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/ad327a59-7879-4215-bb95-3f2be64cb97f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-nr2k4\" (UID: \"ad327a59-7879-4215-bb95-3f2be64cb97f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-nr2k4" Mar 19 12:14:24.828319 master-0 kubenswrapper[31830]: I0319 12:14:24.828227 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-gz8pl" Mar 19 12:14:24.847593 master-0 kubenswrapper[31830]: I0319 12:14:24.847495 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 19 
12:14:24.848635 master-0 kubenswrapper[31830]: I0319 12:14:24.848576 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7b2ecb08-a0f9-4127-967c-7087dea4c0f6-images\") pod \"machine-api-operator-6fbb6cf6f9-75w5c\" (UID: \"7b2ecb08-a0f9-4127-967c-7087dea4c0f6\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-75w5c" Mar 19 12:14:24.874160 master-0 kubenswrapper[31830]: I0319 12:14:24.874072 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 19 12:14:24.881236 master-0 kubenswrapper[31830]: I0319 12:14:24.881189 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad327a59-7879-4215-bb95-3f2be64cb97f-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-nr2k4\" (UID: \"ad327a59-7879-4215-bb95-3f2be64cb97f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-nr2k4" Mar 19 12:14:24.888032 master-0 kubenswrapper[31830]: I0319 12:14:24.887981 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 19 12:14:24.907911 master-0 kubenswrapper[31830]: I0319 12:14:24.907862 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 19 12:14:24.943306 master-0 kubenswrapper[31830]: I0319 12:14:24.943251 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npc2t\" (UniqueName: \"kubernetes.io/projected/c2dbd8b3-0e02-4747-a166-80aa6a94b060-kube-api-access-npc2t\") pod \"cluster-olm-operator-67dcd4998-cg9pq\" (UID: \"c2dbd8b3-0e02-4747-a166-80aa6a94b060\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cg9pq" Mar 19 12:14:24.968213 master-0 kubenswrapper[31830]: I0319 12:14:24.968118 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvvk8\" (UniqueName: \"kubernetes.io/projected/0316c374-f812-4e0a-8645-727e8372f16e-kube-api-access-tvvk8\") pod \"network-check-source-b4bf74f6-6dmt7\" (UID: \"0316c374-f812-4e0a-8645-727e8372f16e\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-6dmt7" Mar 19 12:14:24.979688 master-0 kubenswrapper[31830]: I0319 12:14:24.979611 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shfs6\" (UniqueName: \"kubernetes.io/projected/7044a7b3-4fac-40af-a31c-054a1a1db26b-kube-api-access-shfs6\") pod \"multus-additional-cni-plugins-2z4h8\" (UID: \"7044a7b3-4fac-40af-a31c-054a1a1db26b\") " pod="openshift-multus/multus-additional-cni-plugins-2z4h8" Mar 19 12:14:24.999385 master-0 kubenswrapper[31830]: I0319 12:14:24.999330 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6wm6\" (UniqueName: \"kubernetes.io/projected/d3017b5e-178e-49de-89d2-817a18398203-kube-api-access-b6wm6\") pod \"authentication-operator-5885bfd7f4-pkgvq\" (UID: \"d3017b5e-178e-49de-89d2-817a18398203\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-pkgvq" Mar 19 12:14:25.007678 master-0 kubenswrapper[31830]: I0319 12:14:25.007641 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 19 12:14:25.010110 master-0 kubenswrapper[31830]: I0319 12:14:25.010078 31830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee3529ac-6135-438b-9334-40c63c1fbd3d-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-84gh4\" (UID: \"ee3529ac-6135-438b-9334-40c63c1fbd3d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" Mar 19 12:14:25.039554 master-0 kubenswrapper[31830]: I0319 12:14:25.039433 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwfg5\" (UniqueName: \"kubernetes.io/projected/87a3f546-e1c1-42a1-b80e-d45b6d5c0a04-kube-api-access-hwfg5\") pod \"olm-operator-5c9796789-8cldl\" (UID: \"87a3f546-e1c1-42a1-b80e-d45b6d5c0a04\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl" Mar 19 12:14:25.049165 master-0 kubenswrapper[31830]: I0319 12:14:25.049131 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 19 12:14:25.085121 master-0 kubenswrapper[31830]: I0319 12:14:25.085062 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5mkm\" (UniqueName: \"kubernetes.io/projected/7241bf11-192e-47db-9d80-2324938ed34c-kube-api-access-s5mkm\") pod \"cluster-monitoring-operator-58845fbb57-92c5d\" (UID: \"7241bf11-192e-47db-9d80-2324938ed34c\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-92c5d" Mar 19 12:14:25.088347 master-0 kubenswrapper[31830]: I0319 12:14:25.088316 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-dr8qt" Mar 19 12:14:25.108605 master-0 kubenswrapper[31830]: I0319 12:14:25.108564 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 19 12:14:25.112339 master-0 kubenswrapper[31830]: I0319 12:14:25.112290 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee3529ac-6135-438b-9334-40c63c1fbd3d-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-84gh4\" (UID: \"ee3529ac-6135-438b-9334-40c63c1fbd3d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" Mar 19 12:14:25.127653 master-0 kubenswrapper[31830]: I0319 12:14:25.127609 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 19 12:14:25.148131 master-0 kubenswrapper[31830]: I0319 12:14:25.148070 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 19 12:14:25.151427 master-0 kubenswrapper[31830]: I0319 12:14:25.151392 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ee3529ac-6135-438b-9334-40c63c1fbd3d-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-84gh4\" (UID: \"ee3529ac-6135-438b-9334-40c63c1fbd3d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4" Mar 19 12:14:25.168086 master-0 kubenswrapper[31830]: I0319 12:14:25.168025 31830 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 19 12:14:25.169946 master-0 kubenswrapper[31830]: I0319 12:14:25.169909 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/86884445-e29b-492b-8810-b63b938b9170-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-qsrjj\" (UID: \"86884445-e29b-492b-8810-b63b938b9170\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-qsrjj" Mar 19 12:14:25.187888 master-0 kubenswrapper[31830]: I0319 12:14:25.187773 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-hjms6" Mar 19 12:14:25.208727 master-0 kubenswrapper[31830]: I0319 12:14:25.208672 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 19 12:14:25.210129 master-0 kubenswrapper[31830]: I0319 12:14:25.210103 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/86884445-e29b-492b-8810-b63b938b9170-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-qsrjj\" (UID: \"86884445-e29b-492b-8810-b63b938b9170\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-qsrjj" Mar 19 12:14:25.228534 master-0 kubenswrapper[31830]: I0319 12:14:25.228486 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 19 12:14:25.239671 master-0 kubenswrapper[31830]: I0319 12:14:25.239569 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bb1000ab-4419-43ce-b1b7-8f43413e017f-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" Mar 19 12:14:25.248258 master-0 kubenswrapper[31830]: I0319 12:14:25.248215 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-48w96" Mar 19 12:14:25.268713 master-0 kubenswrapper[31830]: I0319 12:14:25.268676 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 19 12:14:25.272224 master-0 kubenswrapper[31830]: I0319 12:14:25.272183 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/de39c80c-acfa-4bc1-a844-95b170169b44-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-k464h\" (UID: \"de39c80c-acfa-4bc1-a844-95b170169b44\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h" Mar 19 12:14:25.288150 master-0 kubenswrapper[31830]: I0319 12:14:25.288092 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 19 12:14:25.290556 master-0 kubenswrapper[31830]: I0319 12:14:25.290477 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/de39c80c-acfa-4bc1-a844-95b170169b44-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-k464h\" (UID: 
\"de39c80c-acfa-4bc1-a844-95b170169b44\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h" Mar 19 12:14:25.307996 master-0 kubenswrapper[31830]: I0319 12:14:25.307935 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 19 12:14:25.309241 master-0 kubenswrapper[31830]: I0319 12:14:25.309203 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/bb1000ab-4419-43ce-b1b7-8f43413e017f-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" Mar 19 12:14:25.328716 master-0 kubenswrapper[31830]: I0319 12:14:25.328655 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-shkfs" Mar 19 12:14:25.348402 master-0 kubenswrapper[31830]: I0319 12:14:25.348329 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 19 12:14:25.352319 master-0 kubenswrapper[31830]: I0319 12:14:25.352266 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/bb1000ab-4419-43ce-b1b7-8f43413e017f-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q" Mar 19 12:14:25.358700 master-0 kubenswrapper[31830]: E0319 12:14:25.358660 31830 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 19 12:14:25.358784 master-0 kubenswrapper[31830]: E0319 12:14:25.358758 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a9d191d1-631d-4091-af8b-382283c18a5a-metrics-client-ca podName:a9d191d1-631d-4091-af8b-382283c18a5a nodeName:}" failed. No retries permitted until 2026-03-19 12:14:26.358736399 +0000 UTC m=+4.907697113 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/a9d191d1-631d-4091-af8b-382283c18a5a-metrics-client-ca") pod "node-exporter-lpndz" (UID: "a9d191d1-631d-4091-af8b-382283c18a5a") : failed to sync configmap cache: timed out waiting for the condition
Mar 19 12:14:25.358874 master-0 kubenswrapper[31830]: E0319 12:14:25.358855 31830 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Mar 19 12:14:25.358908 master-0 kubenswrapper[31830]: E0319 12:14:25.358868 31830 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: failed to sync configmap cache: timed out waiting for the condition
Mar 19 12:14:25.358937 master-0 kubenswrapper[31830]: E0319 12:14:25.358915 31830 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: failed to sync secret cache: timed out waiting for the condition
Mar 19 12:14:25.358937 master-0 kubenswrapper[31830]: E0319 12:14:25.358878 31830 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 19 12:14:25.358997 master-0 kubenswrapper[31830]: E0319 12:14:25.358926 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-serving-certs-ca-bundle podName:7c80f8d0-ee9b-4a4d-ba92-e241b2552e58 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:26.358908934 +0000 UTC m=+4.907869638 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-serving-certs-ca-bundle") pod "telemeter-client-6975d7769d-nvxfv" (UID: "7c80f8d0-ee9b-4a4d-ba92-e241b2552e58") : failed to sync configmap cache: timed out waiting for the condition
Mar 19 12:14:25.358997 master-0 kubenswrapper[31830]: E0319 12:14:25.358989 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cf6b6560-1731-4fb1-b3c2-8257002842d6-auth-proxy-config podName:cf6b6560-1731-4fb1-b3c2-8257002842d6 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:26.358970946 +0000 UTC m=+4.907931660 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/cf6b6560-1731-4fb1-b3c2-8257002842d6-auth-proxy-config") pod "cluster-autoscaler-operator-866dc4744-fblgs" (UID: "cf6b6560-1731-4fb1-b3c2-8257002842d6") : failed to sync configmap cache: timed out waiting for the condition
Mar 19 12:14:25.359061 master-0 kubenswrapper[31830]: E0319 12:14:25.359006 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bb1000ab-4419-43ce-b1b7-8f43413e017f-metrics-client-ca podName:bb1000ab-4419-43ce-b1b7-8f43413e017f nodeName:}" failed. No retries permitted until 2026-03-19 12:14:26.358998097 +0000 UTC m=+4.907958811 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/bb1000ab-4419-43ce-b1b7-8f43413e017f-metrics-client-ca") pod "kube-state-metrics-7bbc969446-bnf7q" (UID: "bb1000ab-4419-43ce-b1b7-8f43413e017f") : failed to sync configmap cache: timed out waiting for the condition
Mar 19 12:14:25.359061 master-0 kubenswrapper[31830]: E0319 12:14:25.359021 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-federate-client-tls podName:7c80f8d0-ee9b-4a4d-ba92-e241b2552e58 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:26.359013838 +0000 UTC m=+4.907974552 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-federate-client-tls") pod "telemeter-client-6975d7769d-nvxfv" (UID: "7c80f8d0-ee9b-4a4d-ba92-e241b2552e58") : failed to sync secret cache: timed out waiting for the condition
Mar 19 12:14:25.359303 master-0 kubenswrapper[31830]: E0319 12:14:25.359271 31830 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition
Mar 19 12:14:25.359346 master-0 kubenswrapper[31830]: E0319 12:14:25.359337 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c2a33ba-76d0-4b81-a41d-9da16fd46209-webhook-certs podName:1c2a33ba-76d0-4b81-a41d-9da16fd46209 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:26.359320067 +0000 UTC m=+4.908280781 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1c2a33ba-76d0-4b81-a41d-9da16fd46209-webhook-certs") pod "multus-admission-controller-58c9f8fc64-dqgd9" (UID: "1c2a33ba-76d0-4b81-a41d-9da16fd46209") : failed to sync secret cache: timed out waiting for the condition
Mar 19 12:14:25.361043 master-0 kubenswrapper[31830]: E0319 12:14:25.361016 31830 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 19 12:14:25.361084 master-0 kubenswrapper[31830]: E0319 12:14:25.361062 31830 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 19 12:14:25.361122 master-0 kubenswrapper[31830]: E0319 12:14:25.361085 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36e5fec9-7fb5-4460-8bb4-4b9e36fae978-cert podName:36e5fec9-7fb5-4460-8bb4-4b9e36fae978 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:26.361067681 +0000 UTC m=+4.910028425 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/36e5fec9-7fb5-4460-8bb4-4b9e36fae978-cert") pod "ingress-canary-w8jqs" (UID: "36e5fec9-7fb5-4460-8bb4-4b9e36fae978") : failed to sync secret cache: timed out waiting for the condition
Mar 19 12:14:25.361122 master-0 kubenswrapper[31830]: E0319 12:14:25.361105 31830 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: failed to sync secret cache: timed out waiting for the condition
Mar 19 12:14:25.361122 master-0 kubenswrapper[31830]: E0319 12:14:25.361119 31830 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-ets52rpou52es: failed to sync secret cache: timed out waiting for the condition
Mar 19 12:14:25.361210 master-0 kubenswrapper[31830]: E0319 12:14:25.361139 31830 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 19 12:14:25.361210 master-0 kubenswrapper[31830]: E0319 12:14:25.361112 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-secret-telemeter-client-kube-rbac-proxy-config podName:7c80f8d0-ee9b-4a4d-ba92-e241b2552e58 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:26.361100122 +0000 UTC m=+4.910060836 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-6975d7769d-nvxfv" (UID: "7c80f8d0-ee9b-4a4d-ba92-e241b2552e58") : failed to sync secret cache: timed out waiting for the condition
Mar 19 12:14:25.361210 master-0 kubenswrapper[31830]: E0319 12:14:25.361159 31830 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: failed to sync secret cache: timed out waiting for the condition
Mar 19 12:14:25.361210 master-0 kubenswrapper[31830]: E0319 12:14:25.361180 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-client-ca-bundle podName:6db3fcbe-0dbf-464f-944b-62427173c8d3 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:26.361162654 +0000 UTC m=+4.910123488 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-client-ca-bundle") pod "metrics-server-86889676f6-phlgd" (UID: "6db3fcbe-0dbf-464f-944b-62427173c8d3") : failed to sync secret cache: timed out waiting for the condition
Mar 19 12:14:25.361210 master-0 kubenswrapper[31830]: E0319 12:14:25.361181 31830 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition
Mar 19 12:14:25.361210 master-0 kubenswrapper[31830]: E0319 12:14:25.361181 31830 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Mar 19 12:14:25.361210 master-0 kubenswrapper[31830]: E0319 12:14:25.361199 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-telemeter-client-tls podName:7c80f8d0-ee9b-4a4d-ba92-e241b2552e58 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:26.361191135 +0000 UTC m=+4.910151849 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-telemeter-client-tls") pod "telemeter-client-6975d7769d-nvxfv" (UID: "7c80f8d0-ee9b-4a4d-ba92-e241b2552e58") : failed to sync secret cache: timed out waiting for the condition
Mar 19 12:14:25.361210 master-0 kubenswrapper[31830]: E0319 12:14:25.361206 31830 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition
Mar 19 12:14:25.361210 master-0 kubenswrapper[31830]: E0319 12:14:25.361218 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/86884445-e29b-492b-8810-b63b938b9170-metrics-client-ca podName:86884445-e29b-492b-8810-b63b938b9170 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:26.361211616 +0000 UTC m=+4.910172330 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/86884445-e29b-492b-8810-b63b938b9170-metrics-client-ca") pod "prometheus-operator-6c8df6d4b-qsrjj" (UID: "86884445-e29b-492b-8810-b63b938b9170") : failed to sync configmap cache: timed out waiting for the condition
Mar 19 12:14:25.361471 master-0 kubenswrapper[31830]: E0319 12:14:25.361202 31830 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: failed to sync secret cache: timed out waiting for the condition
Mar 19 12:14:25.361471 master-0 kubenswrapper[31830]: E0319 12:14:25.361235 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-secret-telemeter-client podName:7c80f8d0-ee9b-4a4d-ba92-e241b2552e58 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:26.361227537 +0000 UTC m=+4.910188251 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-secret-telemeter-client") pod "telemeter-client-6975d7769d-nvxfv" (UID: "7c80f8d0-ee9b-4a4d-ba92-e241b2552e58") : failed to sync secret cache: timed out waiting for the condition
Mar 19 12:14:25.361471 master-0 kubenswrapper[31830]: E0319 12:14:25.361243 31830 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: failed to sync configmap cache: timed out waiting for the condition
Mar 19 12:14:25.361471 master-0 kubenswrapper[31830]: E0319 12:14:25.361257 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-secret-metrics-client-certs podName:6db3fcbe-0dbf-464f-944b-62427173c8d3 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:26.361247737 +0000 UTC m=+4.910208451 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-secret-metrics-client-certs") pod "metrics-server-86889676f6-phlgd" (UID: "6db3fcbe-0dbf-464f-944b-62427173c8d3") : failed to sync secret cache: timed out waiting for the condition
Mar 19 12:14:25.361471 master-0 kubenswrapper[31830]: E0319 12:14:25.361274 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-secret-metrics-server-tls podName:6db3fcbe-0dbf-464f-944b-62427173c8d3 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:26.361267008 +0000 UTC m=+4.910227722 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-secret-metrics-server-tls") pod "metrics-server-86889676f6-phlgd" (UID: "6db3fcbe-0dbf-464f-944b-62427173c8d3") : failed to sync secret cache: timed out waiting for the condition
Mar 19 12:14:25.361471 master-0 kubenswrapper[31830]: E0319 12:14:25.361289 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6db3fcbe-0dbf-464f-944b-62427173c8d3-configmap-kubelet-serving-ca-bundle podName:6db3fcbe-0dbf-464f-944b-62427173c8d3 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:26.361282888 +0000 UTC m=+4.910243602 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/6db3fcbe-0dbf-464f-944b-62427173c8d3-configmap-kubelet-serving-ca-bundle") pod "metrics-server-86889676f6-phlgd" (UID: "6db3fcbe-0dbf-464f-944b-62427173c8d3") : failed to sync configmap cache: timed out waiting for the condition
Mar 19 12:14:25.361471 master-0 kubenswrapper[31830]: E0319 12:14:25.361289 31830 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 19 12:14:25.361471 master-0 kubenswrapper[31830]: E0319 12:14:25.361304 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-tls podName:a9d191d1-631d-4091-af8b-382283c18a5a nodeName:}" failed. No retries permitted until 2026-03-19 12:14:26.361296739 +0000 UTC m=+4.910257453 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-tls") pod "node-exporter-lpndz" (UID: "a9d191d1-631d-4091-af8b-382283c18a5a") : failed to sync secret cache: timed out waiting for the condition
Mar 19 12:14:25.361471 master-0 kubenswrapper[31830]: E0319 12:14:25.361323 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-telemeter-trusted-ca-bundle podName:7c80f8d0-ee9b-4a4d-ba92-e241b2552e58 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:26.361312279 +0000 UTC m=+4.910272993 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-telemeter-trusted-ca-bundle") pod "telemeter-client-6975d7769d-nvxfv" (UID: "7c80f8d0-ee9b-4a4d-ba92-e241b2552e58") : failed to sync configmap cache: timed out waiting for the condition
Mar 19 12:14:25.361471 master-0 kubenswrapper[31830]: E0319 12:14:25.361341 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-metrics-client-ca podName:7c80f8d0-ee9b-4a4d-ba92-e241b2552e58 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:26.36133323 +0000 UTC m=+4.910293944 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-metrics-client-ca") pod "telemeter-client-6975d7769d-nvxfv" (UID: "7c80f8d0-ee9b-4a4d-ba92-e241b2552e58") : failed to sync configmap cache: timed out waiting for the condition
Mar 19 12:14:25.361471 master-0 kubenswrapper[31830]: E0319 12:14:25.361342 31830 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 19 12:14:25.361471 master-0 kubenswrapper[31830]: E0319 12:14:25.361385 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/de39c80c-acfa-4bc1-a844-95b170169b44-metrics-client-ca podName:de39c80c-acfa-4bc1-a844-95b170169b44 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:26.361374311 +0000 UTC m=+4.910335055 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/de39c80c-acfa-4bc1-a844-95b170169b44-metrics-client-ca") pod "openshift-state-metrics-5dc6c74576-k464h" (UID: "de39c80c-acfa-4bc1-a844-95b170169b44") : failed to sync configmap cache: timed out waiting for the condition
Mar 19 12:14:25.362458 master-0 kubenswrapper[31830]: E0319 12:14:25.362409 31830 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition
Mar 19 12:14:25.362526 master-0 kubenswrapper[31830]: E0319 12:14:25.362464 31830 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: failed to sync secret cache: timed out waiting for the condition
Mar 19 12:14:25.362526 master-0 kubenswrapper[31830]: E0319 12:14:25.362492 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6db3fcbe-0dbf-464f-944b-62427173c8d3-metrics-server-audit-profiles podName:6db3fcbe-0dbf-464f-944b-62427173c8d3 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:26.362474726 +0000 UTC m=+4.911435600 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/6db3fcbe-0dbf-464f-944b-62427173c8d3-metrics-server-audit-profiles") pod "metrics-server-86889676f6-phlgd" (UID: "6db3fcbe-0dbf-464f-944b-62427173c8d3") : failed to sync configmap cache: timed out waiting for the condition
Mar 19 12:14:25.362613 master-0 kubenswrapper[31830]: E0319 12:14:25.362484 31830 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 19 12:14:25.362652 master-0 kubenswrapper[31830]: E0319 12:14:25.362519 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf6b6560-1731-4fb1-b3c2-8257002842d6-cert podName:cf6b6560-1731-4fb1-b3c2-8257002842d6 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:26.362506127 +0000 UTC m=+4.911466861 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cf6b6560-1731-4fb1-b3c2-8257002842d6-cert") pod "cluster-autoscaler-operator-866dc4744-fblgs" (UID: "cf6b6560-1731-4fb1-b3c2-8257002842d6") : failed to sync secret cache: timed out waiting for the condition
Mar 19 12:14:25.362694 master-0 kubenswrapper[31830]: E0319 12:14:25.362673 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-kube-rbac-proxy-config podName:a9d191d1-631d-4091-af8b-382283c18a5a nodeName:}" failed. No retries permitted until 2026-03-19 12:14:26.362657992 +0000 UTC m=+4.911618706 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-kube-rbac-proxy-config") pod "node-exporter-lpndz" (UID: "a9d191d1-631d-4091-af8b-382283c18a5a") : failed to sync secret cache: timed out waiting for the condition
Mar 19 12:14:25.384775 master-0 kubenswrapper[31830]: I0319 12:14:25.384719 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bst2w\" (UniqueName: \"kubernetes.io/projected/63c12a89-1b49-4eba-8f5a-551b10d2246b-kube-api-access-bst2w\") pod \"cluster-node-tuning-operator-598fbc5f8f-zfsqt\" (UID: \"63c12a89-1b49-4eba-8f5a-551b10d2246b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zfsqt"
Mar 19 12:14:25.388300 master-0 kubenswrapper[31830]: I0319 12:14:25.388252 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-pcp8m"
Mar 19 12:14:25.431750 master-0 kubenswrapper[31830]: I0319 12:14:25.431689 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p6vn\" (UniqueName: \"kubernetes.io/projected/284768b8-9d70-4cf7-bace-8adc6b587186-kube-api-access-8p6vn\") pod \"network-operator-7bd846bfc4-nb8bk\" (UID: \"284768b8-9d70-4cf7-bace-8adc6b587186\") " pod="openshift-network-operator/network-operator-7bd846bfc4-nb8bk"
Mar 19 12:14:25.444906 master-0 kubenswrapper[31830]: I0319 12:14:25.444862 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1089ea24-add9-482e-9276-e6ded12052d7-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-qv4cg\" (UID: \"1089ea24-add9-482e-9276-e6ded12052d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-qv4cg"
Mar 19 12:14:25.459950 master-0 kubenswrapper[31830]: I0319 12:14:25.459905 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsk9d\" (UniqueName: \"kubernetes.io/projected/9ed2dbd1-aec4-4009-917a-933533912ab5-kube-api-access-gsk9d\") pod \"openshift-controller-manager-operator-8c94f4649-gx4w8\" (UID: \"9ed2dbd1-aec4-4009-917a-933533912ab5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-gx4w8"
Mar 19 12:14:25.467326 master-0 kubenswrapper[31830]: I0319 12:14:25.467290 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Mar 19 12:14:25.506652 master-0 kubenswrapper[31830]: I0319 12:14:25.506600 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4hsp\" (UniqueName: \"kubernetes.io/projected/fe245927-c937-4ec7-ab83-4900bade72cf-kube-api-access-s4hsp\") pod \"multus-w82cg\" (UID: \"fe245927-c937-4ec7-ab83-4900bade72cf\") " pod="openshift-multus/multus-w82cg"
Mar 19 12:14:25.517723 master-0 kubenswrapper[31830]: I0319 12:14:25.517677 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xpc2\" (UniqueName: \"kubernetes.io/projected/19de6601-10d4-4112-a21f-0398d2b160d1-kube-api-access-6xpc2\") pod \"cluster-baremetal-operator-6f69995874-ftml6\" (UID: \"19de6601-10d4-4112-a21f-0398d2b160d1\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6"
Mar 19 12:14:25.539023 master-0 kubenswrapper[31830]: I0319 12:14:25.538969 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs4jf\" (UniqueName: \"kubernetes.io/projected/b80027fd-7b39-477a-a337-ff9bb08e7eeb-kube-api-access-hs4jf\") pod \"ingress-operator-66b84d69b-btppx\" (UID: \"b80027fd-7b39-477a-a337-ff9bb08e7eeb\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx"
Mar 19 12:14:25.559759 master-0 kubenswrapper[31830]: I0319 12:14:25.559667 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv6bc\" (UniqueName: \"kubernetes.io/projected/d3541cbe-3be0-40d3-89d2-b5937b6a8f47-kube-api-access-pv6bc\") pod \"machine-config-operator-84d549f6d5-lswqw\" (UID: \"d3541cbe-3be0-40d3-89d2-b5937b6a8f47\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-lswqw"
Mar 19 12:14:25.587202 master-0 kubenswrapper[31830]: I0319 12:14:25.587152 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khv2z\" (UniqueName: \"kubernetes.io/projected/a7747954-a222-4809-8656-818203b55ee8-kube-api-access-khv2z\") pod \"csi-snapshot-controller-operator-5f5d689c6b-2chdm\" (UID: \"a7747954-a222-4809-8656-818203b55ee8\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-2chdm"
Mar 19 12:14:25.599584 master-0 kubenswrapper[31830]: I0319 12:14:25.599113 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tqdb\" (UniqueName: \"kubernetes.io/projected/b0f5939c-48b1-4d6c-9712-9128a78d603b-kube-api-access-6tqdb\") pod \"marketplace-operator-89ccd998f-pr7gk\" (UID: \"b0f5939c-48b1-4d6c-9712-9128a78d603b\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk"
Mar 19 12:14:25.618999 master-0 kubenswrapper[31830]: I0319 12:14:25.618944 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2151eb84-177e-459c-be71-f48465323ac2-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-fjn4b\" (UID: \"2151eb84-177e-459c-be71-f48465323ac2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fjn4b"
Mar 19 12:14:25.627574 master-0 kubenswrapper[31830]: I0319 12:14:25.627533 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Mar 19 12:14:25.639106 master-0 kubenswrapper[31830]: I0319 12:14:25.639063 31830 scope.go:117] "RemoveContainer" containerID="f6cff670ffd3b7d67c924d90e3c87c5305a542b098fae4298d16010aa46c7cd3"
Mar 19 12:14:25.647099 master-0 kubenswrapper[31830]: I0319 12:14:25.647064 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-xbkxv"
Mar 19 12:14:25.667972 master-0 kubenswrapper[31830]: I0319 12:14:25.667909 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Mar 19 12:14:25.700148 master-0 kubenswrapper[31830]: I0319 12:14:25.700092 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h84l9\" (UniqueName: \"kubernetes.io/projected/f08c5930-44f0-48e4-80dd-2563f2733b2f-kube-api-access-h84l9\") pod \"openshift-apiserver-operator-d65958b8-mjs7x\" (UID: \"f08c5930-44f0-48e4-80dd-2563f2733b2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-mjs7x"
Mar 19 12:14:25.705871 master-0 kubenswrapper[31830]: I0319 12:14:25.705836 31830 request.go:700] Waited for 3.015335681s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/cluster-storage-operator/token
Mar 19 12:14:25.719557 master-0 kubenswrapper[31830]: I0319 12:14:25.719513 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zntzt\" (UniqueName: \"kubernetes.io/projected/0f97d998-530c-4d9d-a030-ca1d9d2d4490-kube-api-access-zntzt\") pod \"cluster-storage-operator-7d87854d6-6wzws\" (UID: \"0f97d998-530c-4d9d-a030-ca1d9d2d4490\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-6wzws"
Mar 19 12:14:25.728331 master-0 kubenswrapper[31830]: I0319 12:14:25.728283 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Mar 19 12:14:25.761065 master-0 kubenswrapper[31830]: I0319 12:14:25.760986 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnd9c\" (UniqueName: \"kubernetes.io/projected/bdcdb23d-ef1f-45e2-b9ac-7abf707637b6-kube-api-access-jnd9c\") pod \"catalog-operator-68f85b4d6c-2trz4\" (UID: \"bdcdb23d-ef1f-45e2-b9ac-7abf707637b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4"
Mar 19 12:14:25.779592 master-0 kubenswrapper[31830]: I0319 12:14:25.779538 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl6d7\" (UniqueName: \"kubernetes.io/projected/ab54833d-e57b-479d-b171-68155f6566f1-kube-api-access-gl6d7\") pod \"dns-operator-9c5679d8f-z6kvm\" (UID: \"ab54833d-e57b-479d-b171-68155f6566f1\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-z6kvm"
Mar 19 12:14:25.798316 master-0 kubenswrapper[31830]: I0319 12:14:25.798080 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5bmd\" (UniqueName: \"kubernetes.io/projected/82b98dca-59f9-42be-94ca-4a2a2b6fea0f-kube-api-access-c5bmd\") pod \"cluster-image-registry-operator-5549dc66cb-g6sn6\" (UID: \"82b98dca-59f9-42be-94ca-4a2a2b6fea0f\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-g6sn6"
Mar 19 12:14:25.819014 master-0 kubenswrapper[31830]: I0319 12:14:25.818879 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcxqj\" (UniqueName: \"kubernetes.io/projected/bf226d89-450d-4876-a113-345632b94ee9-kube-api-access-wcxqj\") pod \"ovnkube-control-plane-57f769d897-f6m2t\" (UID: \"bf226d89-450d-4876-a113-345632b94ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-f6m2t"
Mar 19 12:14:25.842679 master-0 kubenswrapper[31830]: I0319 12:14:25.842624 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hq8f\" (UniqueName: \"kubernetes.io/projected/661b8957-a890-4032-9e57-45e2e0b35249-kube-api-access-8hq8f\") pod \"service-ca-operator-b865698dc-sxsxt\" (UID: \"661b8957-a890-4032-9e57-45e2e0b35249\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-sxsxt"
Mar 19 12:14:25.860150 master-0 kubenswrapper[31830]: I0319 12:14:25.860102 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5n89\" (UniqueName: \"kubernetes.io/projected/d9ab6ec4-eec9-4d27-8b43-2aaf954f098f-kube-api-access-h5n89\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk\" (UID: \"d9ab6ec4-eec9-4d27-8b43-2aaf954f098f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s4qsk"
Mar 19 12:14:25.868573 master-0 kubenswrapper[31830]: I0319 12:14:25.868527 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Mar 19 12:14:25.887648 master-0 kubenswrapper[31830]: I0319 12:14:25.887599 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Mar 19 12:14:25.908256 master-0 kubenswrapper[31830]: I0319 12:14:25.908203 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Mar 19 12:14:25.927865 master-0 kubenswrapper[31830]: I0319 12:14:25.927747 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-6flh6"
Mar 19 12:14:25.948338 master-0 kubenswrapper[31830]: I0319 12:14:25.948245 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Mar 19 12:14:25.968230 master-0 kubenswrapper[31830]: I0319 12:14:25.968179 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Mar 19 12:14:25.988359 master-0 kubenswrapper[31830]: I0319 12:14:25.988098 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-ets52rpou52es"
Mar 19 12:14:26.005821 master-0 kubenswrapper[31830]: I0319 12:14:26.005754 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-btppx_b80027fd-7b39-477a-a337-ff9bb08e7eeb/ingress-operator/5.log"
Mar 19 12:14:26.007728 master-0 kubenswrapper[31830]: I0319 12:14:26.007700 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Mar 19 12:14:26.028044 master-0 kubenswrapper[31830]: I0319 12:14:26.027972 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Mar 19 12:14:26.047685 master-0 kubenswrapper[31830]: I0319 12:14:26.047639 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-77kwj"
Mar 19 12:14:26.068240 master-0 kubenswrapper[31830]: I0319 12:14:26.068184 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Mar 19 12:14:26.088222 master-0 kubenswrapper[31830]: I0319 12:14:26.088072 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-hw4t4"
Mar 19 12:14:26.108423 master-0 kubenswrapper[31830]: I0319 12:14:26.108371 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls"
Mar 19 12:14:26.134142 master-0 kubenswrapper[31830]: I0319 12:14:26.133515 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38"
Mar 19 12:14:26.148625 master-0 kubenswrapper[31830]: I0319 12:14:26.148524 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle"
Mar 19 12:14:26.168370 master-0 kubenswrapper[31830]: I0319 12:14:26.168321 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config"
Mar 19 12:14:26.188449 master-0 kubenswrapper[31830]: I0319 12:14:26.188399 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client"
Mar 19 12:14:26.207408 master-0 kubenswrapper[31830]: I0319 12:14:26.207341 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-svqv2"
Mar 19 12:14:26.229154 master-0 kubenswrapper[31830]: I0319 12:14:26.229060 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs"
Mar 19 12:14:26.247767 master-0 kubenswrapper[31830]: I0319 12:14:26.247712 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Mar 19 12:14:26.279206 master-0 kubenswrapper[31830]: I0319 12:14:26.279163 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhqhb\" (UniqueName: \"kubernetes.io/projected/398bcaca-1bea-4633-a78f-717e3d015ddd-kube-api-access-fhqhb\") pod \"network-metrics-daemon-6t6sn\" (UID: \"398bcaca-1bea-4633-a78f-717e3d015ddd\") " pod="openshift-multus/network-metrics-daemon-6t6sn"
Mar 19 12:14:26.300079 master-0 kubenswrapper[31830]: I0319 12:14:26.300005 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4p7s\" (UniqueName: \"kubernetes.io/projected/e559e487-18b0-4622-92fa-d06e7397b312-kube-api-access-c4p7s\") pod \"tuned-dc5br\" (UID: \"e559e487-18b0-4622-92fa-d06e7397b312\") " pod="openshift-cluster-node-tuning-operator/tuned-dc5br"
Mar 19 12:14:26.319124 master-0 kubenswrapper[31830]: I0319 12:14:26.319063 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86r6z\" (UniqueName: \"kubernetes.io/projected/d975e831-7348-41b9-9622-f4a503674c38-kube-api-access-86r6z\") pod \"migrator-8487694857-99fgs\" (UID: \"d975e831-7348-41b9-9622-f4a503674c38\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-99fgs"
Mar 19 12:14:26.339505 master-0 kubenswrapper[31830]: I0319 12:14:26.339397 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lllml\" (UniqueName: \"kubernetes.io/projected/6db3fcbe-0dbf-464f-944b-62427173c8d3-kube-api-access-lllml\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd"
Mar 19 12:14:26.358606 master-0 kubenswrapper[31830]: I0319 12:14:26.358571 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq9p4\" (UniqueName: \"kubernetes.io/projected/a9d191d1-631d-4091-af8b-382283c18a5a-kube-api-access-cq9p4\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz"
Mar 19 12:14:26.378691 master-0 kubenswrapper[31830]: I0319 12:14:26.378628 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8hpg\" (UniqueName: \"kubernetes.io/projected/ee3529ac-6135-438b-9334-40c63c1fbd3d-kube-api-access-c8hpg\") pod \"cluster-cloud-controller-manager-operator-7dff898856-84gh4\" (UID: \"ee3529ac-6135-438b-9334-40c63c1fbd3d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-84gh4"
Mar 19 12:14:26.400502 master-0 kubenswrapper[31830]: I0319 12:14:26.400399 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-secret-metrics-client-certs\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd"
Mar 19 12:14:26.400502 master-0 kubenswrapper[31830]: I0319 12:14:26.400479 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-secret-metrics-server-tls\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd"
Mar 19 12:14:26.400725 master-0 kubenswrapper[31830]: I0319 12:14:26.400573 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/6db3fcbe-0dbf-464f-944b-62427173c8d3-metrics-server-audit-profiles\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd"
Mar 19 12:14:26.400725 master-0 kubenswrapper[31830]: I0319 12:14:26.400599 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/de39c80c-acfa-4bc1-a844-95b170169b44-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-k464h\" (UID: \"de39c80c-acfa-4bc1-a844-95b170169b44\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h"
Mar 19 12:14:26.400725 master-0 kubenswrapper[31830]: I0319 12:14:26.400627 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-metrics-client-ca\") pod \"telemeter-client-6975d7769d-nvxfv\" (UID: \"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58\") " pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv"
Mar 19 12:14:26.400725 master-0 kubenswrapper[31830]: I0319 12:14:26.400670 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cf6b6560-1731-4fb1-b3c2-8257002842d6-cert\") pod \"cluster-autoscaler-operator-866dc4744-fblgs\" (UID: \"cf6b6560-1731-4fb1-b3c2-8257002842d6\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-fblgs"
Mar 19 12:14:26.400725 master-0 kubenswrapper[31830]: I0319 12:14:26.400702 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz"
Mar 19 12:14:26.400941 master-0 kubenswrapper[31830]: I0319 12:14:26.400758 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a9d191d1-631d-4091-af8b-382283c18a5a-metrics-client-ca\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz"
Mar 19 12:14:26.400941 master-0 kubenswrapper[31830]: I0319 12:14:26.400842 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-serving-certs-ca-bundle\") pod \"telemeter-client-6975d7769d-nvxfv\" (UID: \"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58\") " pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv"
Mar 19 12:14:26.400941 master-0 kubenswrapper[31830]: I0319 12:14:26.400892 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cf6b6560-1731-4fb1-b3c2-8257002842d6-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-fblgs\" (UID: \"cf6b6560-1731-4fb1-b3c2-8257002842d6\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-fblgs"
Mar 19 12:14:26.400941 master-0 kubenswrapper[31830]: I0319 12:14:26.400941 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bb1000ab-4419-43ce-b1b7-8f43413e017f-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q"
Mar 19 12:14:26.401108 master-0 kubenswrapper[31830]: I0319 12:14:26.400976 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-federate-client-tls\") pod \"telemeter-client-6975d7769d-nvxfv\" (UID: \"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58\") " pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv"
Mar 19 12:14:26.401108 master-0 kubenswrapper[31830]: I0319 12:14:26.401010 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1c2a33ba-76d0-4b81-a41d-9da16fd46209-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-dqgd9\" (UID: \"1c2a33ba-76d0-4b81-a41d-9da16fd46209\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-dqgd9"
Mar 19 12:14:26.401108 master-0 kubenswrapper[31830]: I0319 12:14:26.401102 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6db3fcbe-0dbf-464f-944b-62427173c8d3-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd"
Mar 19 12:14:26.401225 master-0 kubenswrapper[31830]: I0319 12:14:26.401131 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-client-ca-bundle\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd"
Mar 19 12:14:26.401225 master-0 kubenswrapper[31830]: I0319 12:14:26.401169 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-tls\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz"
Mar 19 12:14:26.401225 master-0 kubenswrapper[31830]: I0319 12:14:26.401201 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-secret-telemeter-client\") pod \"telemeter-client-6975d7769d-nvxfv\" (UID: \"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58\") " pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv"
Mar 19 12:14:26.401303 master-0 kubenswrapper[31830]: I0319 12:14:26.401226 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6975d7769d-nvxfv\" (UID: \"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58\") " pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv"
Mar 19 12:14:26.401336 master-0 kubenswrapper[31830]: I0319 12:14:26.401305 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-telemeter-client-tls\") pod \"telemeter-client-6975d7769d-nvxfv\" (UID: \"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58\") " pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv"
Mar 19 12:14:26.401566 master-0 kubenswrapper[31830]: I0319 12:14:26.401360 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/86884445-e29b-492b-8810-b63b938b9170-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-qsrjj\" (UID: \"86884445-e29b-492b-8810-b63b938b9170\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-qsrjj"
Mar 19 12:14:26.401566 master-0 kubenswrapper[31830]: I0319 12:14:26.401390 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/36e5fec9-7fb5-4460-8bb4-4b9e36fae978-cert\") pod \"ingress-canary-w8jqs\" (UID: \"36e5fec9-7fb5-4460-8bb4-4b9e36fae978\") " pod="openshift-ingress-canary/ingress-canary-w8jqs"
Mar 19 12:14:26.401566 master-0 kubenswrapper[31830]: I0319 12:14:26.401423 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6975d7769d-nvxfv\" (UID: \"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58\") " pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv"
Mar 19 12:14:26.401912 master-0 kubenswrapper[31830]: I0319 12:14:26.401868 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6db3fcbe-0dbf-464f-944b-62427173c8d3-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd"
Mar 19 12:14:26.402293 master-0 kubenswrapper[31830]: I0319 12:14:26.402064 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1c2a33ba-76d0-4b81-a41d-9da16fd46209-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-dqgd9\" (UID: \"1c2a33ba-76d0-4b81-a41d-9da16fd46209\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-dqgd9"
Mar 19 12:14:26.402293 master-0 kubenswrapper[31830]: I0319 12:14:26.402065 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6975d7769d-nvxfv\" (UID: \"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58\") " pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv"
Mar 19 12:14:26.402293 master-0 kubenswrapper[31830]: I0319 12:14:26.402182 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-secret-telemeter-client\") pod \"telemeter-client-6975d7769d-nvxfv\" (UID: \"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58\") " pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv"
Mar 19 12:14:26.402293 master-0 kubenswrapper[31830]: I0319 12:14:26.402243 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cf6b6560-1731-4fb1-b3c2-8257002842d6-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-fblgs\" (UID: \"cf6b6560-1731-4fb1-b3c2-8257002842d6\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-fblgs"
Mar 19 12:14:26.402293 master-0 kubenswrapper[31830]: I0319 12:14:26.402261 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/6db3fcbe-0dbf-464f-944b-62427173c8d3-metrics-server-audit-profiles\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd"
Mar 19 12:14:26.402521 master-0 kubenswrapper[31830]: I0319 12:14:26.402370 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cf6b6560-1731-4fb1-b3c2-8257002842d6-cert\") pod \"cluster-autoscaler-operator-866dc4744-fblgs\" (UID: \"cf6b6560-1731-4fb1-b3c2-8257002842d6\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-fblgs"
Mar 19 12:14:26.402521 master-0 kubenswrapper[31830]: I0319 12:14:26.402394 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz"
Mar 19 12:14:26.402521 master-0 kubenswrapper[31830]: I0319 12:14:26.402495 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a9d191d1-631d-4091-af8b-382283c18a5a-metrics-client-ca\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz"
Mar 19 12:14:26.402521 master-0 kubenswrapper[31830]: I0319 12:14:26.402508 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-telemeter-client-tls\") pod \"telemeter-client-6975d7769d-nvxfv\" (UID: \"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58\") " pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv"
Mar 19 12:14:26.402674 master-0 kubenswrapper[31830]: I0319 12:14:26.402622 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/a9d191d1-631d-4091-af8b-382283c18a5a-node-exporter-tls\") pod \"node-exporter-lpndz\" (UID: \"a9d191d1-631d-4091-af8b-382283c18a5a\") " pod="openshift-monitoring/node-exporter-lpndz"
Mar 19 12:14:26.403164 master-0 kubenswrapper[31830]: I0319 12:14:26.402712 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bb1000ab-4419-43ce-b1b7-8f43413e017f-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q"
Mar 19 12:14:26.403164 master-0 kubenswrapper[31830]: I0319 12:14:26.402826 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6975d7769d-nvxfv\" (UID: \"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58\") " pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv"
Mar 19 12:14:26.403164 master-0 kubenswrapper[31830]: I0319 12:14:26.402872 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-federate-client-tls\") pod \"telemeter-client-6975d7769d-nvxfv\" (UID: \"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58\") " pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv"
Mar 19 12:14:26.403164 master-0 kubenswrapper[31830]: I0319 12:14:26.402921 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-metrics-client-ca\") pod \"telemeter-client-6975d7769d-nvxfv\" (UID: \"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58\") " pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv"
Mar 19 12:14:26.403164 master-0 kubenswrapper[31830]: I0319 12:14:26.402998 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-secret-metrics-server-tls\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd"
Mar 19 12:14:26.403164 master-0 kubenswrapper[31830]: I0319 12:14:26.403031 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-serving-certs-ca-bundle\") pod \"telemeter-client-6975d7769d-nvxfv\" (UID: \"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58\") " pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv"
Mar 19 12:14:26.403164 master-0 kubenswrapper[31830]: I0319 12:14:26.403134 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/36e5fec9-7fb5-4460-8bb4-4b9e36fae978-cert\") pod \"ingress-canary-w8jqs\" (UID: \"36e5fec9-7fb5-4460-8bb4-4b9e36fae978\") " pod="openshift-ingress-canary/ingress-canary-w8jqs"
Mar 19 12:14:26.403374 master-0 kubenswrapper[31830]: I0319 12:14:26.403185 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-secret-metrics-client-certs\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd"
Mar 19 12:14:26.403374 master-0 kubenswrapper[31830]: I0319 12:14:26.403186 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/de39c80c-acfa-4bc1-a844-95b170169b44-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-k464h\" (UID: \"de39c80c-acfa-4bc1-a844-95b170169b44\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h"
Mar 19 12:14:26.403374 master-0 kubenswrapper[31830]: I0319 12:14:26.403302 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/86884445-e29b-492b-8810-b63b938b9170-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-qsrjj\" (UID: \"86884445-e29b-492b-8810-b63b938b9170\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-qsrjj"
Mar 19 12:14:26.403374 master-0 kubenswrapper[31830]: I0319 12:14:26.403352 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-client-ca-bundle\") pod \"metrics-server-86889676f6-phlgd\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " pod="openshift-monitoring/metrics-server-86889676f6-phlgd"
Mar 19 12:14:26.403770 master-0 kubenswrapper[31830]: I0319 12:14:26.403606 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hk8l\" (UniqueName: \"kubernetes.io/projected/bb1000ab-4419-43ce-b1b7-8f43413e017f-kube-api-access-6hk8l\") pod \"kube-state-metrics-7bbc969446-bnf7q\" (UID: \"bb1000ab-4419-43ce-b1b7-8f43413e017f\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-bnf7q"
Mar 19 12:14:26.423713 master-0 kubenswrapper[31830]: I0319 12:14:26.423624 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdpj4\" (UniqueName: \"kubernetes.io/projected/06f67c28-34fd-4356-92f0-edd0986ad34e-kube-api-access-bdpj4\") pod \"iptables-alerter-276t5\" (UID: \"06f67c28-34fd-4356-92f0-edd0986ad34e\") " pod="openshift-network-operator/iptables-alerter-276t5"
Mar 19 12:14:26.440610 master-0 kubenswrapper[31830]: I0319 12:14:26.440534 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g6zz\" (UniqueName: \"kubernetes.io/projected/616dbb32-6b65-4e44-a217-6b1be2844cc9-kube-api-access-7g6zz\") pod \"network-check-target-v66z4\" (UID: \"616dbb32-6b65-4e44-a217-6b1be2844cc9\") " pod="openshift-network-diagnostics/network-check-target-v66z4"
Mar 19 12:14:26.458072 master-0 kubenswrapper[31830]: I0319 12:14:26.458009 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgdlc\" (UniqueName: \"kubernetes.io/projected/13503fef-09b2-4dbe-9537-a5b361e7b591-kube-api-access-mgdlc\") pod \"apiserver-897cc986b-vpg2l\" (UID: \"13503fef-09b2-4dbe-9537-a5b361e7b591\") " pod="openshift-apiserver/apiserver-897cc986b-vpg2l"
Mar 19 12:14:26.486215 master-0 kubenswrapper[31830]: I0319 12:14:26.486120 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxw6t\" (UniqueName: \"kubernetes.io/projected/7b2ecb08-a0f9-4127-967c-7087dea4c0f6-kube-api-access-dxw6t\") pod \"machine-api-operator-6fbb6cf6f9-75w5c\" (UID: \"7b2ecb08-a0f9-4127-967c-7087dea4c0f6\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-75w5c"
Mar 19 12:14:26.506521 master-0 kubenswrapper[31830]: I0319 12:14:26.506453 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hrkb\" (UniqueName: \"kubernetes.io/projected/91112ce6-4f9d-44c1-a4e7-fea126554bcf-kube-api-access-8hrkb\") pod \"router-default-7dcf5569b5-lkpgl\" (UID: \"91112ce6-4f9d-44c1-a4e7-fea126554bcf\") " pod="openshift-ingress/router-default-7dcf5569b5-lkpgl"
Mar 19 12:14:26.523584 master-0 kubenswrapper[31830]: I0319 12:14:26.523526 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wfsr\" (UniqueName: \"kubernetes.io/projected/4264e82c-387f-4aa6-9ef6-b7beb61e098c-kube-api-access-8wfsr\") pod \"insights-operator-68bf6ff9d6-djdmh\" (UID: \"4264e82c-387f-4aa6-9ef6-b7beb61e098c\") " pod="openshift-insights/insights-operator-68bf6ff9d6-djdmh"
Mar 19 12:14:26.539667 master-0 kubenswrapper[31830]: I0319 12:14:26.539608 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kcbw\" (UniqueName: \"kubernetes.io/projected/86884445-e29b-492b-8810-b63b938b9170-kube-api-access-5kcbw\") pod \"prometheus-operator-6c8df6d4b-qsrjj\" (UID: \"86884445-e29b-492b-8810-b63b938b9170\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-qsrjj"
Mar 19 12:14:26.557483 master-0 kubenswrapper[31830]: I0319 12:14:26.557419 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9k5t\" (UniqueName: \"kubernetes.io/projected/0e25d4ed-4ad0-4706-ad25-7822c9a1d07e-kube-api-access-r9k5t\") pod \"machine-config-server-g7mqg\" (UID: \"0e25d4ed-4ad0-4706-ad25-7822c9a1d07e\") " pod="openshift-machine-config-operator/machine-config-server-g7mqg"
Mar 19 12:14:26.565342 master-0 kubenswrapper[31830]: I0319 12:14:26.565300 31830 scope.go:117] "RemoveContainer" containerID="02580d8818d0f202a13ac68e82f20d4293f3530799a86f4d7e26b5116036380f"
Mar 19 12:14:26.582793 master-0 kubenswrapper[31830]: I0319 12:14:26.582735 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64twc\" (UniqueName: \"kubernetes.io/projected/cf6b6560-1731-4fb1-b3c2-8257002842d6-kube-api-access-64twc\") pod \"cluster-autoscaler-operator-866dc4744-fblgs\" (UID: \"cf6b6560-1731-4fb1-b3c2-8257002842d6\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-fblgs"
Mar 19 12:14:26.600255 master-0 kubenswrapper[31830]: I0319 12:14:26.600216 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wshb2\" (UniqueName: \"kubernetes.io/projected/9d2db220-4d5b-4819-a910-b186e1e9fb3e-kube-api-access-wshb2\") pod \"ovnkube-node-lk9x9\" (UID: \"9d2db220-4d5b-4819-a910-b186e1e9fb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 12:14:26.618602 master-0 kubenswrapper[31830]: I0319 12:14:26.618537 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28ljd\" (UniqueName: \"kubernetes.io/projected/979ba8cc-5a7b-4188-bf9e-c22d810888e9-kube-api-access-28ljd\") pod \"apiserver-fdc5db968-8zh6r\" (UID: \"979ba8cc-5a7b-4188-bf9e-c22d810888e9\") " pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r"
Mar 19 12:14:26.641268 master-0 kubenswrapper[31830]: I0319 12:14:26.641221 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srbt4\" (UniqueName: \"kubernetes.io/projected/3a6b082a-649b-43f6-8e24-cf222873fe39-kube-api-access-srbt4\") pod \"controller-manager-7cdddc6cb-q222c\" (UID: \"3a6b082a-649b-43f6-8e24-cf222873fe39\") " pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c"
Mar 19 12:14:26.664604 master-0 kubenswrapper[31830]: I0319 12:14:26.664561 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xz8h\" (UniqueName: \"kubernetes.io/projected/7383e647-63b0-452d-a39b-02ad27a9b053-kube-api-access-2xz8h\") pod \"community-operators-s22fd\" (UID: \"7383e647-63b0-452d-a39b-02ad27a9b053\") " pod="openshift-marketplace/community-operators-s22fd"
Mar 19 12:14:26.679493 master-0 kubenswrapper[31830]: I0319 12:14:26.679456 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3661faaa-2c9d-4fcd-a41f-71aa71a2e464-kube-api-access\") pod \"cluster-version-operator-7d58488df-czxxt\" (UID: \"3661faaa-2c9d-4fcd-a41f-71aa71a2e464\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-czxxt"
Mar 19 12:14:26.701437 master-0 kubenswrapper[31830]: I0319 12:14:26.701406 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fgj5\" (UniqueName: \"kubernetes.io/projected/ad327a59-7879-4215-bb95-3f2be64cb97f-kube-api-access-9fgj5\") pod \"cloud-credential-operator-744f9dbf77-nr2k4\" (UID: \"ad327a59-7879-4215-bb95-3f2be64cb97f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-nr2k4"
Mar 19 12:14:26.705957 master-0 kubenswrapper[31830]: I0319 12:14:26.705910 31830 request.go:700] Waited for 3.902396357s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token
Mar 19 12:14:26.724365 master-0 kubenswrapper[31830]: I0319 12:14:26.724327 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbzj2\" (UniqueName: \"kubernetes.io/projected/be4349fa-5c67-4135-80a7-b8a694553662-kube-api-access-jbzj2\") pod \"packageserver-77d68bd5f8-w9hmb\" (UID: \"be4349fa-5c67-4135-80a7-b8a694553662\") " pod="openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb"
Mar 19 12:14:26.740389 master-0 kubenswrapper[31830]: I0319 12:14:26.740357 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps4k8\" (UniqueName: \"kubernetes.io/projected/f236a5ab-b400-46fc-94ee-1fff476d6458-kube-api-access-ps4k8\") pod \"dns-default-zjdkm\" (UID: \"f236a5ab-b400-46fc-94ee-1fff476d6458\") " pod="openshift-dns/dns-default-zjdkm"
Mar 19 12:14:26.766124 master-0 kubenswrapper[31830]: I0319 12:14:26.766081 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc94p\" (UniqueName: \"kubernetes.io/projected/667757ee-2670-4019-ad93-156521d3c2e7-kube-api-access-rc94p\") pod \"cluster-samples-operator-85f7577d78-cx8l9\" (UID: \"667757ee-2670-4019-ad93-156521d3c2e7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-cx8l9"
Mar 19 12:14:26.779384 master-0 kubenswrapper[31830]: I0319 12:14:26.779337 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x2v6\" (UniqueName: \"kubernetes.io/projected/de39c80c-acfa-4bc1-a844-95b170169b44-kube-api-access-6x2v6\") pod \"openshift-state-metrics-5dc6c74576-k464h\" (UID: \"de39c80c-acfa-4bc1-a844-95b170169b44\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-k464h"
Mar 19 12:14:26.803514 master-0 kubenswrapper[31830]: I0319 12:14:26.803470 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lkzv\" (UniqueName: \"kubernetes.io/projected/4800b72f-7e54-4069-b771-87fb459eeb78-kube-api-access-4lkzv\") pod \"node-resolver-jqzxt\" (UID: \"4800b72f-7e54-4069-b771-87fb459eeb78\") " pod="openshift-dns/node-resolver-jqzxt"
Mar 19 12:14:26.819868 master-0 kubenswrapper[31830]: I0319 12:14:26.819832 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnp9l\" (UniqueName: \"kubernetes.io/projected/0ed7eded-1e67-49ad-9777-c2ed1e006ce3-kube-api-access-jnp9l\") pod \"redhat-marketplace-cjgpg\" (UID: \"0ed7eded-1e67-49ad-9777-c2ed1e006ce3\") " pod="openshift-marketplace/redhat-marketplace-cjgpg"
Mar 19 12:14:26.837753 master-0 kubenswrapper[31830]: I0319 12:14:26.837709 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8brwr\" (UniqueName: \"kubernetes.io/projected/919daf8d-763a-44bc-8916-86b425a27cbd-kube-api-access-8brwr\") pod \"catalogd-controller-manager-6864dc98f7-j2w8z\" (UID: \"919daf8d-763a-44bc-8916-86b425a27cbd\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z"
Mar 19 12:14:26.859846 master-0 kubenswrapper[31830]: I0319 12:14:26.859701 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-864rg\" (UniqueName: \"kubernetes.io/projected/8414b6b0-ee16-47a5-982b-ee58b136cfcf-kube-api-access-864rg\") pod \"network-node-identity-wd4nx\" (UID: \"8414b6b0-ee16-47a5-982b-ee58b136cfcf\") " pod="openshift-network-node-identity/network-node-identity-wd4nx"
Mar 19 12:14:26.881086 master-0 kubenswrapper[31830]: I0319 12:14:26.881034 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxdts\" (UniqueName: \"kubernetes.io/projected/5238840f-3bef-43ad-ae68-ac187f073019-kube-api-access-vxdts\") pod \"operator-controller-controller-manager-57777556ff-9mpxd\" (UID: \"5238840f-3bef-43ad-ae68-ac187f073019\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd"
Mar 19 12:14:26.899581 master-0 kubenswrapper[31830]: I0319 12:14:26.899523 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvzcn\" (UniqueName: \"kubernetes.io/projected/da9becfb-a504-4ef7-92ed-cd2db439d5db-kube-api-access-lvzcn\") pod \"route-controller-manager-fdb67f9cf-vkmd9\" (UID: \"da9becfb-a504-4ef7-92ed-cd2db439d5db\") " pod="openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9"
Mar 19 12:14:26.918503 master-0 kubenswrapper[31830]: I0319 12:14:26.918455 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ztf7\" (UniqueName: \"kubernetes.io/projected/c52bbbe7-bc16-432f-a471-bc561083a853-kube-api-access-4ztf7\") pod \"certified-operators-tdnkp\" (UID: \"c52bbbe7-bc16-432f-a471-bc561083a853\") " pod="openshift-marketplace/certified-operators-tdnkp"
Mar 19 12:14:26.943213 master-0 kubenswrapper[31830]: I0319 12:14:26.943179 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9hck\" (UniqueName: \"kubernetes.io/projected/36e5fec9-7fb5-4460-8bb4-4b9e36fae978-kube-api-access-z9hck\") pod \"ingress-canary-w8jqs\" (UID: \"36e5fec9-7fb5-4460-8bb4-4b9e36fae978\") " pod="openshift-ingress-canary/ingress-canary-w8jqs"
Mar 19 12:14:26.962633 master-0 kubenswrapper[31830]: I0319 12:14:26.962609 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddl8k\" (UniqueName: \"kubernetes.io/projected/6863b35c-44ac-4333-97b5-e8e38b440a20-kube-api-access-ddl8k\") pod \"service-ca-79bc6b8d76-5rbp5\" (UID: \"6863b35c-44ac-4333-97b5-e8e38b440a20\") " pod="openshift-service-ca/service-ca-79bc6b8d76-5rbp5"
Mar 19 12:14:26.979752 master-0 kubenswrapper[31830]: I0319 12:14:26.979729 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbddm\" (UniqueName: \"kubernetes.io/projected/2b87f8c3-1898-46dd-bcac-e8f22f31e812-kube-api-access-kbddm\") pod \"machine-config-daemon-ms2wn\" (UID: \"2b87f8c3-1898-46dd-bcac-e8f22f31e812\") " pod="openshift-machine-config-operator/machine-config-daemon-ms2wn"
Mar 19 12:14:26.999436 master-0 kubenswrapper[31830]: I0319 12:14:26.999404 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjfpq\" (UniqueName: \"kubernetes.io/projected/311b8bab-6cee-406d-8e0e-5b18a743d5fa-kube-api-access-hjfpq\") pod \"machine-config-controller-b4f87c5b9-rdpvm\" (UID: \"311b8bab-6cee-406d-8e0e-5b18a743d5fa\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rdpvm"
Mar 19 12:14:27.031897 master-0 kubenswrapper[31830]: I0319 12:14:27.031845 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm9zf\" (UniqueName: \"kubernetes.io/projected/7c80f8d0-ee9b-4a4d-ba92-e241b2552e58-kube-api-access-vm9zf\") pod \"telemeter-client-6975d7769d-nvxfv\" (UID: \"7c80f8d0-ee9b-4a4d-ba92-e241b2552e58\") " pod="openshift-monitoring/telemeter-client-6975d7769d-nvxfv"
Mar 19 12:14:27.039432 master-0 kubenswrapper[31830]: I0319 12:14:27.039407 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2mdn\" (UniqueName: \"kubernetes.io/projected/944eac68-e72b-4aed-b5dc-d7d9703178a3-kube-api-access-m2mdn\") pod \"csi-snapshot-controller-64854d9cff-6m654\" (UID: \"944eac68-e72b-4aed-b5dc-d7d9703178a3\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-6m654"
Mar 19 12:14:27.058640 master-0 kubenswrapper[31830]: I0319 12:14:27.058598 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rmw5\" (UniqueName: \"kubernetes.io/projected/fd40498c-f50a-408c-9a50-5d85ae666124-kube-api-access-2rmw5\") pod \"machine-approver-5c6485487f-qv29l\" (UID: \"fd40498c-f50a-408c-9a50-5d85ae666124\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-qv29l"
Mar 19 12:14:27.079469 master-0 kubenswrapper[31830]: I0319 12:14:27.079408 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7zpw\" (UniqueName: \"kubernetes.io/projected/44469a78-9300-4260-89e9-ea939de1357b-kube-api-access-t7zpw\") pod \"control-plane-machine-set-operator-6f97756bc8-tql86\" (UID: \"44469a78-9300-4260-89e9-ea939de1357b\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tql86"
Mar 19 12:14:27.105508 master-0 kubenswrapper[31830]: I0319 12:14:27.105460 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8n22\" (UniqueName:
\"kubernetes.io/projected/1c2a33ba-76d0-4b81-a41d-9da16fd46209-kube-api-access-k8n22\") pod \"multus-admission-controller-58c9f8fc64-dqgd9\" (UID: \"1c2a33ba-76d0-4b81-a41d-9da16fd46209\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-dqgd9" Mar 19 12:14:27.118772 master-0 kubenswrapper[31830]: I0319 12:14:27.118642 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fz85\" (UniqueName: \"kubernetes.io/projected/f05dca6c-7626-4970-a869-4208ff5605a2-kube-api-access-5fz85\") pod \"redhat-operators-fbd5s\" (UID: \"f05dca6c-7626-4970-a869-4208ff5605a2\") " pod="openshift-marketplace/redhat-operators-fbd5s" Mar 19 12:14:27.143182 master-0 kubenswrapper[31830]: E0319 12:14:27.143096 31830 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 12:14:27.143182 master-0 kubenswrapper[31830]: E0319 12:14:27.143145 31830 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 12:14:27.143559 master-0 kubenswrapper[31830]: E0319 12:14:27.143232 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89890698-dd48-486b-bd64-dc909aecd9e8-kube-api-access podName:89890698-dd48-486b-bd64-dc909aecd9e8 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:27.643203115 +0000 UTC m=+6.192163839 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/89890698-dd48-486b-bd64-dc909aecd9e8-kube-api-access") pod "installer-3-master-0" (UID: "89890698-dd48-486b-bd64-dc909aecd9e8") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 12:14:27.162105 master-0 kubenswrapper[31830]: E0319 12:14:27.162036 31830 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.485s" Mar 19 12:14:27.162105 master-0 kubenswrapper[31830]: I0319 12:14:27.162115 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Mar 19 12:14:27.162433 master-0 kubenswrapper[31830]: I0319 12:14:27.162162 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-btppx" event={"ID":"b80027fd-7b39-477a-a337-ff9bb08e7eeb","Type":"ContainerStarted","Data":"d7e4f8160975545e212395cfce68cd940892d70e341a5ef0b4a1e16ee121e45f"} Mar 19 12:14:27.162433 master-0 kubenswrapper[31830]: I0319 12:14:27.162256 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" Mar 19 12:14:27.162547 master-0 kubenswrapper[31830]: I0319 12:14:27.162467 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 19 12:14:27.162547 master-0 kubenswrapper[31830]: I0319 12:14:27.162517 31830 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="b560fc59-8f41-478e-a914-b16b6c35032a" Mar 19 12:14:27.163659 master-0 kubenswrapper[31830]: I0319 12:14:27.162762 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-nhvl4" Mar 19 12:14:27.163659 
master-0 kubenswrapper[31830]: I0319 12:14:27.162817 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 19 12:14:27.163659 master-0 kubenswrapper[31830]: I0319 12:14:27.162832 31830 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="b560fc59-8f41-478e-a914-b16b6c35032a" Mar 19 12:14:27.163659 master-0 kubenswrapper[31830]: I0319 12:14:27.162849 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" event={"ID":"91112ce6-4f9d-44c1-a4e7-fea126554bcf","Type":"ContainerStarted","Data":"acf944a7882195d0d5f8474c8b38013c2f2f8417792017703b01519215038295"} Mar 19 12:14:27.163659 master-0 kubenswrapper[31830]: I0319 12:14:27.162973 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Mar 19 12:14:27.163659 master-0 kubenswrapper[31830]: I0319 12:14:27.162995 31830 status_manager.go:379] "Container startup changed for unknown container" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" containerID="cri-o://02580d8818d0f202a13ac68e82f20d4293f3530799a86f4d7e26b5116036380f" Mar 19 12:14:27.163659 master-0 kubenswrapper[31830]: I0319 12:14:27.163003 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 12:14:27.163659 master-0 kubenswrapper[31830]: I0319 12:14:27.163031 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj" Mar 19 12:14:27.163659 master-0 kubenswrapper[31830]: I0319 12:14:27.163044 31830 status_manager.go:379] "Container startup changed for unknown container" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" containerID="cri-o://02580d8818d0f202a13ac68e82f20d4293f3530799a86f4d7e26b5116036380f" Mar 19 12:14:27.163659 master-0 kubenswrapper[31830]: I0319 12:14:27.163051 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 12:14:27.163659 master-0 kubenswrapper[31830]: I0319 12:14:27.163105 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 12:14:27.163659 master-0 kubenswrapper[31830]: I0319 12:14:27.163123 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6j2nj" Mar 19 12:14:27.171440 master-0 kubenswrapper[31830]: I0319 12:14:27.171382 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:14:27.171440 master-0 kubenswrapper[31830]: I0319 12:14:27.171430 31830 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" containerID="cri-o://02580d8818d0f202a13ac68e82f20d4293f3530799a86f4d7e26b5116036380f" Mar 19 12:14:27.171440 master-0 kubenswrapper[31830]: I0319 12:14:27.171441 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 12:14:27.171723 master-0 kubenswrapper[31830]: I0319 12:14:27.171476 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-z8xf6" Mar 19 12:14:27.171723 master-0 kubenswrapper[31830]: I0319 12:14:27.171501 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-z8xf6" Mar 19 12:14:27.171723 master-0 kubenswrapper[31830]: I0319 12:14:27.171518 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 12:14:27.171723 master-0 kubenswrapper[31830]: I0319 12:14:27.171557 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-zjdkm" Mar 19 12:14:27.171723 master-0 kubenswrapper[31830]: I0319 12:14:27.171590 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-zjdkm" Mar 19 12:14:27.171723 master-0 kubenswrapper[31830]: I0319 12:14:27.171619 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 12:14:27.171723 master-0 kubenswrapper[31830]: I0319 12:14:27.171632 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 19 12:14:27.173333 master-0 kubenswrapper[31830]: I0319 12:14:27.172400 31830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 12:14:27.179409 master-0 kubenswrapper[31830]: I0319 12:14:27.179363 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 19 12:14:27.188157 master-0 kubenswrapper[31830]: I0319 12:14:27.188119 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:27.223611 master-0 kubenswrapper[31830]: I0319 12:14:27.223520 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:27.277789 master-0 kubenswrapper[31830]: I0319 12:14:27.277645 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:14:27.284584 master-0 kubenswrapper[31830]: I0319 12:14:27.284528 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:14:27.565853 master-0 kubenswrapper[31830]: I0319 12:14:27.565725 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 12:14:27.572547 master-0 kubenswrapper[31830]: I0319 12:14:27.572489 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 12:14:27.694669 master-0 kubenswrapper[31830]: I0319 12:14:27.694573 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl" Mar 19 12:14:27.694669 master-0 kubenswrapper[31830]: I0319 12:14:27.694658 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8cldl" Mar 19 12:14:27.717305 master-0 kubenswrapper[31830]: I0319 12:14:27.717263 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:14:27.728669 master-0 
kubenswrapper[31830]: I0319 12:14:27.728599 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89890698-dd48-486b-bd64-dc909aecd9e8-kube-api-access\") pod \"installer-3-master-0\" (UID: \"89890698-dd48-486b-bd64-dc909aecd9e8\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 19 12:14:27.728911 master-0 kubenswrapper[31830]: E0319 12:14:27.728847 31830 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 12:14:27.728911 master-0 kubenswrapper[31830]: E0319 12:14:27.728883 31830 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 12:14:27.729076 master-0 kubenswrapper[31830]: E0319 12:14:27.728942 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89890698-dd48-486b-bd64-dc909aecd9e8-kube-api-access podName:89890698-dd48-486b-bd64-dc909aecd9e8 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:28.728918788 +0000 UTC m=+7.277879532 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/89890698-dd48-486b-bd64-dc909aecd9e8-kube-api-access") pod "installer-3-master-0" (UID: "89890698-dd48-486b-bd64-dc909aecd9e8") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 12:14:28.027979 master-0 kubenswrapper[31830]: I0319 12:14:28.027838 31830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 12:14:28.027979 master-0 kubenswrapper[31830]: I0319 12:14:28.027862 31830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 12:14:28.028521 master-0 kubenswrapper[31830]: I0319 12:14:28.028473 31830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 12:14:28.028638 master-0 kubenswrapper[31830]: I0319 12:14:28.028553 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 12:14:28.035016 master-0 kubenswrapper[31830]: I0319 12:14:28.034962 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-7dcf5569b5-lkpgl" Mar 19 12:14:28.036964 master-0 kubenswrapper[31830]: I0319 12:14:28.036870 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:14:28.051849 master-0 kubenswrapper[31830]: I0319 12:14:28.051715 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:14:28.283248 master-0 kubenswrapper[31830]: I0319 12:14:28.283076 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=6.283050216 podStartE2EDuration="6.283050216s" podCreationTimestamp="2026-03-19 12:14:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:14:28.282593272 +0000 UTC m=+6.831554016" watchObservedRunningTime="2026-03-19 12:14:28.283050216 +0000 UTC m=+6.832010960" Mar 19 12:14:28.434841 master-0 kubenswrapper[31830]: I0319 
12:14:28.434725 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 12:14:28.440883 master-0 kubenswrapper[31830]: I0319 12:14:28.440816 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 12:14:28.759588 master-0 kubenswrapper[31830]: I0319 12:14:28.759510 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89890698-dd48-486b-bd64-dc909aecd9e8-kube-api-access\") pod \"installer-3-master-0\" (UID: \"89890698-dd48-486b-bd64-dc909aecd9e8\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 19 12:14:28.760444 master-0 kubenswrapper[31830]: E0319 12:14:28.759646 31830 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 12:14:28.760444 master-0 kubenswrapper[31830]: E0319 12:14:28.759669 31830 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 12:14:28.760444 master-0 kubenswrapper[31830]: E0319 12:14:28.759726 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89890698-dd48-486b-bd64-dc909aecd9e8-kube-api-access podName:89890698-dd48-486b-bd64-dc909aecd9e8 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:30.759708994 +0000 UTC m=+9.308669698 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/89890698-dd48-486b-bd64-dc909aecd9e8-kube-api-access") pod "installer-3-master-0" (UID: "89890698-dd48-486b-bd64-dc909aecd9e8") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 12:14:28.847212 master-0 kubenswrapper[31830]: I0319 12:14:28.847127 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=6.847100204 podStartE2EDuration="6.847100204s" podCreationTimestamp="2026-03-19 12:14:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:14:28.845035669 +0000 UTC m=+7.393996373" watchObservedRunningTime="2026-03-19 12:14:28.847100204 +0000 UTC m=+7.396060918" Mar 19 12:14:29.035934 master-0 kubenswrapper[31830]: I0319 12:14:29.034163 31830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 12:14:29.147643 master-0 kubenswrapper[31830]: I0319 12:14:29.147128 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb" Mar 19 12:14:29.152670 master-0 kubenswrapper[31830]: I0319 12:14:29.152630 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-77d68bd5f8-w9hmb" Mar 19 12:14:29.380781 master-0 kubenswrapper[31830]: I0319 12:14:29.380639 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 12:14:29.387193 master-0 kubenswrapper[31830]: I0319 12:14:29.387161 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-897cc986b-vpg2l" Mar 19 
12:14:30.043390 master-0 kubenswrapper[31830]: I0319 12:14:30.043334 31830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 12:14:30.053103 master-0 kubenswrapper[31830]: I0319 12:14:30.053049 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 12:14:30.061365 master-0 kubenswrapper[31830]: I0319 12:14:30.061314 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 12:14:30.805029 master-0 kubenswrapper[31830]: I0319 12:14:30.804967 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89890698-dd48-486b-bd64-dc909aecd9e8-kube-api-access\") pod \"installer-3-master-0\" (UID: \"89890698-dd48-486b-bd64-dc909aecd9e8\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 19 12:14:30.805250 master-0 kubenswrapper[31830]: E0319 12:14:30.805155 31830 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 12:14:30.805250 master-0 kubenswrapper[31830]: E0319 12:14:30.805176 31830 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 12:14:30.805250 master-0 kubenswrapper[31830]: E0319 12:14:30.805233 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89890698-dd48-486b-bd64-dc909aecd9e8-kube-api-access podName:89890698-dd48-486b-bd64-dc909aecd9e8 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:34.805217733 +0000 UTC m=+13.354178437 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/89890698-dd48-486b-bd64-dc909aecd9e8-kube-api-access") pod "installer-3-master-0" (UID: "89890698-dd48-486b-bd64-dc909aecd9e8") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 12:14:30.887396 master-0 kubenswrapper[31830]: I0319 12:14:30.887349 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:14:30.887607 master-0 kubenswrapper[31830]: I0319 12:14:30.887515 31830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 12:14:30.890591 master-0 kubenswrapper[31830]: I0319 12:14:30.890554 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 12:14:31.096786 master-0 kubenswrapper[31830]: I0319 12:14:31.096642 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fbd5s" Mar 19 12:14:31.165360 master-0 kubenswrapper[31830]: I0319 12:14:31.165317 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fbd5s" Mar 19 12:14:31.291938 master-0 kubenswrapper[31830]: I0319 12:14:31.291858 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 12:14:31.293283 master-0 kubenswrapper[31830]: I0319 12:14:31.293240 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-j2w8z" Mar 19 12:14:31.642065 master-0 kubenswrapper[31830]: I0319 12:14:31.642002 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-86889676f6-phlgd" Mar 19 12:14:31.687782 master-0 kubenswrapper[31830]: I0319 12:14:31.687722 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 12:14:31.688032 master-0 kubenswrapper[31830]: I0319 12:14:31.687829 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-v66z4" Mar 19 12:14:31.832310 master-0 kubenswrapper[31830]: I0319 12:14:31.832262 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:14:31.835716 master-0 kubenswrapper[31830]: I0319 12:14:31.835679 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:14:31.871775 master-0 kubenswrapper[31830]: I0319 12:14:31.871714 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-fdc5db968-8zh6r" Mar 19 12:14:31.947180 master-0 kubenswrapper[31830]: I0319 12:14:31.946951 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 12:14:31.956133 master-0 kubenswrapper[31830]: I0319 12:14:31.956105 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 12:14:32.072228 master-0 kubenswrapper[31830]: I0319 12:14:32.072002 31830 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:14:32.381211 master-0 kubenswrapper[31830]: I0319 12:14:32.381161 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:14:32.386117 master-0 kubenswrapper[31830]: I0319 12:14:32.386074 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:14:32.525744 master-0 kubenswrapper[31830]: I0319 12:14:32.525687 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:14:32.526000 master-0 kubenswrapper[31830]: I0319 12:14:32.525893 31830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 12:14:32.529587 master-0 kubenswrapper[31830]: I0319 12:14:32.529564 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:14:32.568499 master-0 kubenswrapper[31830]: I0319 12:14:32.568451 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9" Mar 19 12:14:32.572617 master-0 kubenswrapper[31830]: I0319 12:14:32.572595 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9" Mar 19 12:14:33.074403 master-0 kubenswrapper[31830]: I0319 12:14:33.074349 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:14:33.163408 master-0 kubenswrapper[31830]: I0319 12:14:33.163357 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s22fd" Mar 19 12:14:33.201683 master-0 kubenswrapper[31830]: I0319 12:14:33.201630 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s22fd" Mar 19 12:14:33.512510 master-0 kubenswrapper[31830]: I0319 12:14:33.512453 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tdnkp" Mar 19 12:14:33.549488 master-0 kubenswrapper[31830]: I0319 12:14:33.549449 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4" Mar 19 12:14:33.562519 master-0 kubenswrapper[31830]: I0319 12:14:33.562476 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-2trz4" Mar 19 12:14:33.589312 master-0 kubenswrapper[31830]: I0319 12:14:33.589046 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tdnkp" Mar 19 12:14:33.648040 master-0 kubenswrapper[31830]: I0319 12:14:33.647984 31830 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 19 12:14:33.648262 master-0 kubenswrapper[31830]: I0319 12:14:33.648227 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="8e7a82869988463543d3d8dd1f0b5fe3" 
containerName="startup-monitor" containerID="cri-o://7783994ea3804af3822e1e8ef880d160160be30c6cc27242405255670e8fc218" gracePeriod=5 Mar 19 12:14:33.815381 master-0 kubenswrapper[31830]: I0319 12:14:33.815263 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:33.815569 master-0 kubenswrapper[31830]: I0319 12:14:33.815485 31830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 12:14:33.815569 master-0 kubenswrapper[31830]: I0319 12:14:33.815497 31830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 12:14:33.857255 master-0 kubenswrapper[31830]: I0319 12:14:33.857205 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9" Mar 19 12:14:34.071907 master-0 kubenswrapper[31830]: I0319 12:14:34.071773 31830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 12:14:34.264811 master-0 kubenswrapper[31830]: I0319 12:14:34.264746 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cjgpg" Mar 19 12:14:34.316829 master-0 kubenswrapper[31830]: I0319 12:14:34.311408 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cjgpg" Mar 19 12:14:34.437885 master-0 kubenswrapper[31830]: I0319 12:14:34.437452 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 12:14:34.442410 master-0 kubenswrapper[31830]: I0319 12:14:34.442369 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9mpxd" Mar 19 12:14:34.899806 master-0 kubenswrapper[31830]: I0319 12:14:34.899743 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89890698-dd48-486b-bd64-dc909aecd9e8-kube-api-access\") pod \"installer-3-master-0\" (UID: \"89890698-dd48-486b-bd64-dc909aecd9e8\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 19 12:14:34.900338 master-0 kubenswrapper[31830]: E0319 12:14:34.899981 31830 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 12:14:34.900338 master-0 kubenswrapper[31830]: E0319 12:14:34.900001 31830 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 12:14:34.900338 master-0 kubenswrapper[31830]: E0319 12:14:34.900067 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89890698-dd48-486b-bd64-dc909aecd9e8-kube-api-access podName:89890698-dd48-486b-bd64-dc909aecd9e8 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:42.90003185 +0000 UTC m=+21.448992554 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/89890698-dd48-486b-bd64-dc909aecd9e8-kube-api-access") pod "installer-3-master-0" (UID: "89890698-dd48-486b-bd64-dc909aecd9e8") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 12:14:35.362658 master-0 kubenswrapper[31830]: I0319 12:14:35.362610 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tdnkp" Mar 19 12:14:35.405747 master-0 kubenswrapper[31830]: I0319 12:14:35.405161 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tdnkp" Mar 19 12:14:35.633693 master-0 kubenswrapper[31830]: I0319 12:14:35.633539 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s22fd" Mar 19 12:14:35.685791 master-0 kubenswrapper[31830]: I0319 12:14:35.685756 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s22fd" Mar 19 12:14:35.828033 master-0 kubenswrapper[31830]: I0319 12:14:35.827974 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cjgpg" Mar 19 12:14:35.864892 master-0 kubenswrapper[31830]: I0319 12:14:35.864827 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cjgpg" Mar 19 12:14:35.986207 master-0 kubenswrapper[31830]: I0319 12:14:35.986155 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fbd5s" Mar 19 12:14:36.020544 master-0 kubenswrapper[31830]: I0319 12:14:36.020466 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fbd5s" Mar 19 12:14:39.105549 master-0 kubenswrapper[31830]: I0319 12:14:39.105505 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_8e7a82869988463543d3d8dd1f0b5fe3/startup-monitor/0.log" Mar 19 12:14:39.106401 master-0 kubenswrapper[31830]: I0319 12:14:39.105574 31830 generic.go:334] "Generic (PLEG): container finished" podID="8e7a82869988463543d3d8dd1f0b5fe3" containerID="7783994ea3804af3822e1e8ef880d160160be30c6cc27242405255670e8fc218" exitCode=137 Mar 19 12:14:39.242102 master-0 kubenswrapper[31830]: I0319 12:14:39.242033 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_8e7a82869988463543d3d8dd1f0b5fe3/startup-monitor/0.log" Mar 19 12:14:39.242400 master-0 kubenswrapper[31830]: I0319 12:14:39.242172 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:39.361895 master-0 kubenswrapper[31830]: I0319 12:14:39.361747 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"8e7a82869988463543d3d8dd1f0b5fe3\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " Mar 19 12:14:39.361895 master-0 kubenswrapper[31830]: I0319 12:14:39.361882 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"8e7a82869988463543d3d8dd1f0b5fe3\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " Mar 19 12:14:39.362174 master-0 kubenswrapper[31830]: I0319 12:14:39.361892 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests" (OuterVolumeSpecName: "manifests") pod "8e7a82869988463543d3d8dd1f0b5fe3" (UID: "8e7a82869988463543d3d8dd1f0b5fe3"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:14:39.362174 master-0 kubenswrapper[31830]: I0319 12:14:39.361956 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"8e7a82869988463543d3d8dd1f0b5fe3\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " Mar 19 12:14:39.362174 master-0 kubenswrapper[31830]: I0319 12:14:39.362058 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"8e7a82869988463543d3d8dd1f0b5fe3\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " Mar 19 12:14:39.362330 master-0 kubenswrapper[31830]: I0319 12:14:39.362188 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"8e7a82869988463543d3d8dd1f0b5fe3\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " Mar 19 12:14:39.362330 master-0 kubenswrapper[31830]: I0319 12:14:39.362053 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log" (OuterVolumeSpecName: "var-log") pod "8e7a82869988463543d3d8dd1f0b5fe3" (UID: "8e7a82869988463543d3d8dd1f0b5fe3"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:14:39.362330 master-0 kubenswrapper[31830]: I0319 12:14:39.362217 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock" (OuterVolumeSpecName: "var-lock") pod "8e7a82869988463543d3d8dd1f0b5fe3" (UID: "8e7a82869988463543d3d8dd1f0b5fe3"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:14:39.362476 master-0 kubenswrapper[31830]: I0319 12:14:39.362332 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "8e7a82869988463543d3d8dd1f0b5fe3" (UID: "8e7a82869988463543d3d8dd1f0b5fe3"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:14:39.362739 master-0 kubenswrapper[31830]: I0319 12:14:39.362701 31830 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") on node \"master-0\" DevicePath \"\"" Mar 19 12:14:39.362739 master-0 kubenswrapper[31830]: I0319 12:14:39.362734 31830 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 19 12:14:39.362880 master-0 kubenswrapper[31830]: I0319 12:14:39.362748 31830 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 12:14:39.362880 master-0 kubenswrapper[31830]: I0319 12:14:39.362761 31830 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") on node \"master-0\" DevicePath \"\"" Mar 19 12:14:39.370657 master-0 kubenswrapper[31830]: I0319 12:14:39.370241 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "8e7a82869988463543d3d8dd1f0b5fe3" (UID: "8e7a82869988463543d3d8dd1f0b5fe3"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:14:39.464059 master-0 kubenswrapper[31830]: I0319 12:14:39.463978 31830 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 12:14:39.695487 master-0 kubenswrapper[31830]: I0319 12:14:39.695306 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e7a82869988463543d3d8dd1f0b5fe3" path="/var/lib/kubelet/pods/8e7a82869988463543d3d8dd1f0b5fe3/volumes" Mar 19 12:14:39.695786 master-0 kubenswrapper[31830]: I0319 12:14:39.695617 31830 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="" Mar 19 12:14:39.722328 master-0 kubenswrapper[31830]: I0319 12:14:39.722251 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 19 12:14:39.722328 master-0 kubenswrapper[31830]: I0319 12:14:39.722301 31830 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="042bcff8-a006-4033-a386-09a0466709da" Mar 19 12:14:39.724918 master-0 kubenswrapper[31830]: I0319 12:14:39.724873 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 19 12:14:39.724918 master-0 kubenswrapper[31830]: I0319 12:14:39.724904 31830 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="042bcff8-a006-4033-a386-09a0466709da" Mar 19 12:14:40.113593 master-0 kubenswrapper[31830]: I0319 12:14:40.113554 31830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_8e7a82869988463543d3d8dd1f0b5fe3/startup-monitor/0.log" Mar 19 12:14:40.114162 master-0 kubenswrapper[31830]: I0319 12:14:40.113630 31830 scope.go:117] "RemoveContainer" containerID="7783994ea3804af3822e1e8ef880d160160be30c6cc27242405255670e8fc218" Mar 19 12:14:40.114162 master-0 kubenswrapper[31830]: I0319 12:14:40.113694 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:14:42.274539 master-0 kubenswrapper[31830]: I0319 12:14:42.274425 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-86889676f6-phlgd" Mar 19 12:14:42.910516 master-0 kubenswrapper[31830]: I0319 12:14:42.910472 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89890698-dd48-486b-bd64-dc909aecd9e8-kube-api-access\") pod \"installer-3-master-0\" (UID: \"89890698-dd48-486b-bd64-dc909aecd9e8\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 19 12:14:42.910782 master-0 kubenswrapper[31830]: E0319 12:14:42.910723 31830 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 12:14:42.911035 master-0 kubenswrapper[31830]: E0319 12:14:42.911003 31830 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 12:14:42.911125 master-0 kubenswrapper[31830]: E0319 12:14:42.911099 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89890698-dd48-486b-bd64-dc909aecd9e8-kube-api-access podName:89890698-dd48-486b-bd64-dc909aecd9e8 nodeName:}" failed. No retries permitted until 2026-03-19 12:14:58.911070134 +0000 UTC m=+37.460030878 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/89890698-dd48-486b-bd64-dc909aecd9e8-kube-api-access") pod "installer-3-master-0" (UID: "89890698-dd48-486b-bd64-dc909aecd9e8") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: I0319 12:14:51.407106 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-76b6568d85-5dzwk"] Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: E0319 12:14:51.407506 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9819a56-abb1-485c-b424-5c62e30d5afc" containerName="assisted-installer-controller" Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: I0319 12:14:51.407527 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9819a56-abb1-485c-b424-5c62e30d5afc" containerName="assisted-installer-controller" Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: E0319 12:14:51.407559 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="632bdf3b-0ba0-4874-a2ec-8396683c35c5" containerName="installer" Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: I0319 12:14:51.407566 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="632bdf3b-0ba0-4874-a2ec-8396683c35c5" containerName="installer" Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: E0319 12:14:51.407604 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89890698-dd48-486b-bd64-dc909aecd9e8" containerName="installer" Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: I0319 12:14:51.407613 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="89890698-dd48-486b-bd64-dc909aecd9e8" containerName="installer" Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: E0319 12:14:51.407632 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b425669d-6f80-4a2b-b2f2-5c6766654c6c" containerName="installer" Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: I0319 12:14:51.407640 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b425669d-6f80-4a2b-b2f2-5c6766654c6c" containerName="installer" Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: E0319 12:14:51.407647 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: I0319 12:14:51.407656 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: E0319 12:14:51.407665 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12d71593-ee54-4321-bc0f-a24261873bd1" containerName="installer" Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: I0319 12:14:51.407671 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="12d71593-ee54-4321-bc0f-a24261873bd1" containerName="installer" Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: E0319 12:14:51.407683 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac20c616-753e-461a-9c39-2129239f47de" containerName="installer" Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: I0319 12:14:51.407691 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac20c616-753e-461a-9c39-2129239f47de" containerName="installer" Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: E0319 12:14:51.407702 31830 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="4b49f09f-2efa-4657-9f5a-fbddd42bee0d" containerName="installer"
Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: I0319 12:14:51.407713 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b49f09f-2efa-4657-9f5a-fbddd42bee0d" containerName="installer"
Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: E0319 12:14:51.407726 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz"
Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: I0319 12:14:51.407734 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz"
Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: E0319 12:14:51.407745 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup"
Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: I0319 12:14:51.407751 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup"
Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: E0319 12:14:51.407761 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11f83dfb-da04-483f-b281-ebdb39f3ab27" containerName="installer"
Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: I0319 12:14:51.407767 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="11f83dfb-da04-483f-b281-ebdb39f3ab27" containerName="installer"
Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: E0319 12:14:51.407777 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e7a82869988463543d3d8dd1f0b5fe3" containerName="startup-monitor"
Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: I0319 12:14:51.407827 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e7a82869988463543d3d8dd1f0b5fe3" containerName="startup-monitor"
Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: E0319 12:14:51.407845 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e4442dc-19e2-42a3-b5d9-7af7765b1939" containerName="installer"
Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: I0319 12:14:51.407854 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e4442dc-19e2-42a3-b5d9-7af7765b1939" containerName="installer"
Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: E0319 12:14:51.407867 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c" containerName="installer"
Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: I0319 12:14:51.407875 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c" containerName="installer"
Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: E0319 12:14:51.407890 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfb8e49c-30e6-4939-9ef9-1323883a8d6a" containerName="installer"
Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: I0319 12:14:51.407898 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfb8e49c-30e6-4939-9ef9-1323883a8d6a" containerName="installer"
Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: E0319 12:14:51.407907 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b48817c-05cd-430b-9b1f-9cc037f1ca77" containerName="installer"
Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: I0319 12:14:51.407915 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b48817c-05cd-430b-9b1f-9cc037f1ca77" containerName="installer"
Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: I0319 12:14:51.408088 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfb8e49c-30e6-4939-9ef9-1323883a8d6a" containerName="installer"
Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: I0319 12:14:51.408117 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b48817c-05cd-430b-9b1f-9cc037f1ca77" containerName="installer"
Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: I0319 12:14:51.408127 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="632bdf3b-0ba0-4874-a2ec-8396683c35c5" containerName="installer"
Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: I0319 12:14:51.408138 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b425669d-6f80-4a2b-b2f2-5c6766654c6c" containerName="installer"
Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: I0319 12:14:51.408158 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="12d71593-ee54-4321-bc0f-a24261873bd1" containerName="installer"
Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: I0319 12:14:51.408180 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b49f09f-2efa-4657-9f5a-fbddd42bee0d" containerName="installer"
Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: I0319 12:14:51.408198 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="11f83dfb-da04-483f-b281-ebdb39f3ab27" containerName="installer"
Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: I0319 12:14:51.408215 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver"
Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: I0319 12:14:51.408228 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="89890698-dd48-486b-bd64-dc909aecd9e8" containerName="installer"
Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: I0319 12:14:51.408240 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e7a82869988463543d3d8dd1f0b5fe3" containerName="startup-monitor"
Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: I0319 12:14:51.408248 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e4442dc-19e2-42a3-b5d9-7af7765b1939" containerName="installer"
Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: I0319 12:14:51.408259 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9819a56-abb1-485c-b424-5c62e30d5afc" containerName="assisted-installer-controller"
Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: I0319 12:14:51.408268 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac20c616-753e-461a-9c39-2129239f47de" containerName="installer"
Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: I0319 12:14:51.408279 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz"
Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: I0319 12:14:51.408296 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup"
Mar 19 12:14:51.408477 master-0 kubenswrapper[31830]: I0319 12:14:51.408308 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6d6f656-2d3e-4bb7-a1a6-98cf223ad25c" containerName="installer"
Mar 19 12:14:51.410515 master-0 kubenswrapper[31830]: I0319 12:14:51.409026 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-76b6568d85-5dzwk"
Mar 19 12:14:51.422869 master-0 kubenswrapper[31830]: I0319 12:14:51.422823 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Mar 19 12:14:51.423077 master-0 kubenswrapper[31830]: I0319 12:14:51.422915 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4kjzz"
Mar 19 12:14:51.428950 master-0 kubenswrapper[31830]: I0319 12:14:51.428287 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d70b7a8-5cd6-4fdf-a9a5-c15cc137b2d5-serving-cert\") pod \"console-operator-76b6568d85-5dzwk\" (UID: \"2d70b7a8-5cd6-4fdf-a9a5-c15cc137b2d5\") " pod="openshift-console-operator/console-operator-76b6568d85-5dzwk"
Mar 19 12:14:51.428950 master-0 kubenswrapper[31830]: I0319 12:14:51.428322 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nngnf\" (UniqueName: \"kubernetes.io/projected/2d70b7a8-5cd6-4fdf-a9a5-c15cc137b2d5-kube-api-access-nngnf\") pod \"console-operator-76b6568d85-5dzwk\" (UID: \"2d70b7a8-5cd6-4fdf-a9a5-c15cc137b2d5\") " pod="openshift-console-operator/console-operator-76b6568d85-5dzwk"
Mar 19 12:14:51.428950 master-0 kubenswrapper[31830]: I0319 12:14:51.428360 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d70b7a8-5cd6-4fdf-a9a5-c15cc137b2d5-trusted-ca\") pod \"console-operator-76b6568d85-5dzwk\" (UID: \"2d70b7a8-5cd6-4fdf-a9a5-c15cc137b2d5\") " pod="openshift-console-operator/console-operator-76b6568d85-5dzwk"
Mar 19 12:14:51.428950 master-0 kubenswrapper[31830]: I0319 12:14:51.428407 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d70b7a8-5cd6-4fdf-a9a5-c15cc137b2d5-config\") pod \"console-operator-76b6568d85-5dzwk\" (UID: \"2d70b7a8-5cd6-4fdf-a9a5-c15cc137b2d5\") " pod="openshift-console-operator/console-operator-76b6568d85-5dzwk"
Mar 19 12:14:51.430967 master-0 kubenswrapper[31830]: I0319 12:14:51.430683 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Mar 19 12:14:51.434822 master-0 kubenswrapper[31830]: I0319 12:14:51.431362 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Mar 19 12:14:51.434822 master-0 kubenswrapper[31830]: I0319 12:14:51.434174 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-76b6568d85-5dzwk"]
Mar 19 12:14:51.434822 master-0 kubenswrapper[31830]: I0319 12:14:51.434203 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Mar 19 12:14:51.436719 master-0 kubenswrapper[31830]: I0319 12:14:51.435463 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Mar 19 12:14:51.530281 master-0 kubenswrapper[31830]: I0319 12:14:51.530208 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d70b7a8-5cd6-4fdf-a9a5-c15cc137b2d5-serving-cert\") pod \"console-operator-76b6568d85-5dzwk\" (UID: \"2d70b7a8-5cd6-4fdf-a9a5-c15cc137b2d5\") " pod="openshift-console-operator/console-operator-76b6568d85-5dzwk"
Mar 19 12:14:51.530281 master-0 kubenswrapper[31830]: I0319 12:14:51.530278 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nngnf\" (UniqueName: \"kubernetes.io/projected/2d70b7a8-5cd6-4fdf-a9a5-c15cc137b2d5-kube-api-access-nngnf\") pod \"console-operator-76b6568d85-5dzwk\" (UID: \"2d70b7a8-5cd6-4fdf-a9a5-c15cc137b2d5\") " pod="openshift-console-operator/console-operator-76b6568d85-5dzwk"
Mar 19 12:14:51.530538 master-0 kubenswrapper[31830]: I0319 12:14:51.530331 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d70b7a8-5cd6-4fdf-a9a5-c15cc137b2d5-trusted-ca\") pod \"console-operator-76b6568d85-5dzwk\" (UID: \"2d70b7a8-5cd6-4fdf-a9a5-c15cc137b2d5\") " pod="openshift-console-operator/console-operator-76b6568d85-5dzwk"
Mar 19 12:14:51.530538 master-0 kubenswrapper[31830]: I0319 12:14:51.530383 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d70b7a8-5cd6-4fdf-a9a5-c15cc137b2d5-config\") pod \"console-operator-76b6568d85-5dzwk\" (UID: \"2d70b7a8-5cd6-4fdf-a9a5-c15cc137b2d5\") " pod="openshift-console-operator/console-operator-76b6568d85-5dzwk"
Mar 19 12:14:51.530787 master-0 kubenswrapper[31830]: I0319 12:14:51.530748 31830 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Mar 19 12:14:51.531462 master-0 kubenswrapper[31830]: I0319 12:14:51.531438 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d70b7a8-5cd6-4fdf-a9a5-c15cc137b2d5-config\") pod \"console-operator-76b6568d85-5dzwk\" (UID: \"2d70b7a8-5cd6-4fdf-a9a5-c15cc137b2d5\") " pod="openshift-console-operator/console-operator-76b6568d85-5dzwk"
Mar 19 12:14:51.531943 master-0 kubenswrapper[31830]: I0319 12:14:51.531910 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d70b7a8-5cd6-4fdf-a9a5-c15cc137b2d5-trusted-ca\") pod \"console-operator-76b6568d85-5dzwk\" (UID: \"2d70b7a8-5cd6-4fdf-a9a5-c15cc137b2d5\") " pod="openshift-console-operator/console-operator-76b6568d85-5dzwk"
Mar 19 12:14:51.534391 master-0 kubenswrapper[31830]: I0319 12:14:51.534353 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d70b7a8-5cd6-4fdf-a9a5-c15cc137b2d5-serving-cert\") pod \"console-operator-76b6568d85-5dzwk\" (UID: \"2d70b7a8-5cd6-4fdf-a9a5-c15cc137b2d5\") " pod="openshift-console-operator/console-operator-76b6568d85-5dzwk"
Mar 19 12:14:51.548148 master-0 kubenswrapper[31830]: I0319 12:14:51.548098 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nngnf\" (UniqueName: \"kubernetes.io/projected/2d70b7a8-5cd6-4fdf-a9a5-c15cc137b2d5-kube-api-access-nngnf\") pod \"console-operator-76b6568d85-5dzwk\" (UID: \"2d70b7a8-5cd6-4fdf-a9a5-c15cc137b2d5\") " pod="openshift-console-operator/console-operator-76b6568d85-5dzwk"
Mar 19 12:14:51.647156 master-0 kubenswrapper[31830]: I0319 12:14:51.647097 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-86889676f6-phlgd"
Mar 19 12:14:51.650817 master-0 kubenswrapper[31830]: I0319 12:14:51.650720 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-86889676f6-phlgd"
Mar 19 12:14:51.740427 master-0 kubenswrapper[31830]: I0319 12:14:51.740074 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-76b6568d85-5dzwk"
Mar 19 12:14:52.141588 master-0 kubenswrapper[31830]: I0319 12:14:52.141539 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-76b6568d85-5dzwk"]
Mar 19 12:14:52.143565 master-0 kubenswrapper[31830]: W0319 12:14:52.143532 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d70b7a8_5cd6_4fdf_a9a5_c15cc137b2d5.slice/crio-8f887559f0a06cfd096163a1c70f10e8022c9913aa4bdcbb0116fea8a2c9fffc WatchSource:0}: Error finding container 8f887559f0a06cfd096163a1c70f10e8022c9913aa4bdcbb0116fea8a2c9fffc: Status 404 returned error can't find the container with id 8f887559f0a06cfd096163a1c70f10e8022c9913aa4bdcbb0116fea8a2c9fffc
Mar 19 12:14:52.149300 master-0 kubenswrapper[31830]: I0319 12:14:52.149259 31830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 19 12:14:52.198732 master-0 kubenswrapper[31830]: I0319 12:14:52.198684 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-76b6568d85-5dzwk" event={"ID":"2d70b7a8-5cd6-4fdf-a9a5-c15cc137b2d5","Type":"ContainerStarted","Data":"8f887559f0a06cfd096163a1c70f10e8022c9913aa4bdcbb0116fea8a2c9fffc"}
Mar 19 12:14:55.217284 master-0 kubenswrapper[31830]: I0319 12:14:55.217123 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-76b6568d85-5dzwk_2d70b7a8-5cd6-4fdf-a9a5-c15cc137b2d5/console-operator/0.log"
Mar 19 12:14:55.217284 master-0 kubenswrapper[31830]: I0319 12:14:55.217202 31830 generic.go:334] "Generic (PLEG): container finished" podID="2d70b7a8-5cd6-4fdf-a9a5-c15cc137b2d5" containerID="be106c13693fd64d218cb0893ae59d57beef24c9934e589476785ee7ec0c37f3" exitCode=255
Mar 19 12:14:55.217284 master-0 kubenswrapper[31830]: I0319 12:14:55.217247 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-76b6568d85-5dzwk" event={"ID":"2d70b7a8-5cd6-4fdf-a9a5-c15cc137b2d5","Type":"ContainerDied","Data":"be106c13693fd64d218cb0893ae59d57beef24c9934e589476785ee7ec0c37f3"}
Mar 19 12:14:55.218475 master-0 kubenswrapper[31830]: I0319 12:14:55.217823 31830 scope.go:117] "RemoveContainer" containerID="be106c13693fd64d218cb0893ae59d57beef24c9934e589476785ee7ec0c37f3"
Mar 19 12:14:56.186122 master-0 kubenswrapper[31830]: I0319 12:14:56.186071 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-66b8ffb895-264cc"]
Mar 19 12:14:56.187148 master-0 kubenswrapper[31830]: I0319 12:14:56.187123 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-66b8ffb895-264cc"
Mar 19 12:14:56.188970 master-0 kubenswrapper[31830]: I0319 12:14:56.188930 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-xfzn8"
Mar 19 12:14:56.189506 master-0 kubenswrapper[31830]: I0319 12:14:56.189364 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Mar 19 12:14:56.191221 master-0 kubenswrapper[31830]: I0319 12:14:56.191108 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 19 12:14:56.197291 master-0 kubenswrapper[31830]: I0319 12:14:56.197255 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-66b8ffb895-264cc"]
Mar 19 12:14:56.227171 master-0 kubenswrapper[31830]: I0319 12:14:56.227138 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-76b6568d85-5dzwk_2d70b7a8-5cd6-4fdf-a9a5-c15cc137b2d5/console-operator/0.log"
Mar 19 12:14:56.228062 master-0 kubenswrapper[31830]: I0319 12:14:56.227196 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-76b6568d85-5dzwk" event={"ID":"2d70b7a8-5cd6-4fdf-a9a5-c15cc137b2d5","Type":"ContainerStarted","Data":"b87567e00b00025d8e17fa71270c3ec4b4bd0324f912467968bd697feb71d8a2"}
Mar 19 12:14:56.228062 master-0 kubenswrapper[31830]: I0319 12:14:56.227711 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-76b6568d85-5dzwk"
Mar 19 12:14:56.260352 master-0 kubenswrapper[31830]: I0319 12:14:56.260273 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-76b6568d85-5dzwk"
Mar 19 12:14:56.271153 master-0 kubenswrapper[31830]: I0319 12:14:56.271058 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-76b6568d85-5dzwk" podStartSLOduration=2.624458084 podStartE2EDuration="5.271037047s" podCreationTimestamp="2026-03-19 12:14:51 +0000 UTC" firstStartedPulling="2026-03-19 12:14:52.149169376 +0000 UTC m=+30.698130090" lastFinishedPulling="2026-03-19 12:14:54.795748349 +0000 UTC m=+33.344709053" observedRunningTime="2026-03-19 12:14:56.269478039 +0000 UTC m=+34.818438743" watchObservedRunningTime="2026-03-19 12:14:56.271037047 +0000 UTC m=+34.819997761"
Mar 19 12:14:56.304820 master-0 kubenswrapper[31830]: I0319 12:14:56.300724 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9rq2\" (UniqueName: \"kubernetes.io/projected/32ddfe6f-9155-424c-979c-5b4cf426680c-kube-api-access-t9rq2\") pod \"downloads-66b8ffb895-264cc\" (UID: \"32ddfe6f-9155-424c-979c-5b4cf426680c\") " pod="openshift-console/downloads-66b8ffb895-264cc"
Mar 19 12:14:56.402747 master-0 kubenswrapper[31830]: I0319 12:14:56.402702 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9rq2\" (UniqueName: \"kubernetes.io/projected/32ddfe6f-9155-424c-979c-5b4cf426680c-kube-api-access-t9rq2\") pod \"downloads-66b8ffb895-264cc\" (UID: \"32ddfe6f-9155-424c-979c-5b4cf426680c\") " pod="openshift-console/downloads-66b8ffb895-264cc"
Mar 19 12:14:56.422416 master-0 kubenswrapper[31830]: I0319 12:14:56.422342 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9rq2\" (UniqueName: \"kubernetes.io/projected/32ddfe6f-9155-424c-979c-5b4cf426680c-kube-api-access-t9rq2\") pod \"downloads-66b8ffb895-264cc\" (UID: \"32ddfe6f-9155-424c-979c-5b4cf426680c\") " pod="openshift-console/downloads-66b8ffb895-264cc"
Mar 19 12:14:56.473959 master-0 kubenswrapper[31830]: I0319 12:14:56.473905 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-6675579648-kj9b2"]
Mar 19 12:14:56.474861 master-0 kubenswrapper[31830]: I0319 12:14:56.474833 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-6675579648-kj9b2"
Mar 19 12:14:56.477150 master-0 kubenswrapper[31830]: I0319 12:14:56.477118 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-mzg7v"
Mar 19 12:14:56.477318 master-0 kubenswrapper[31830]: I0319 12:14:56.477298 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Mar 19 12:14:56.490605 master-0 kubenswrapper[31830]: I0319 12:14:56.486193 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-6675579648-kj9b2"]
Mar 19 12:14:56.511925 master-0 kubenswrapper[31830]: I0319 12:14:56.511089 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-66b8ffb895-264cc"
Mar 19 12:14:56.606146 master-0 kubenswrapper[31830]: I0319 12:14:56.606088 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/0f7377b4-649e-496a-af31-69e2ebfccb36-monitoring-plugin-cert\") pod \"monitoring-plugin-6675579648-kj9b2\" (UID: \"0f7377b4-649e-496a-af31-69e2ebfccb36\") " pod="openshift-monitoring/monitoring-plugin-6675579648-kj9b2"
Mar 19 12:14:56.707888 master-0 kubenswrapper[31830]: I0319 12:14:56.707830 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/0f7377b4-649e-496a-af31-69e2ebfccb36-monitoring-plugin-cert\") pod \"monitoring-plugin-6675579648-kj9b2\" (UID: \"0f7377b4-649e-496a-af31-69e2ebfccb36\") " pod="openshift-monitoring/monitoring-plugin-6675579648-kj9b2"
Mar 19 12:14:56.711325 master-0 kubenswrapper[31830]: I0319 12:14:56.711139 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/0f7377b4-649e-496a-af31-69e2ebfccb36-monitoring-plugin-cert\") pod \"monitoring-plugin-6675579648-kj9b2\" (UID: \"0f7377b4-649e-496a-af31-69e2ebfccb36\") " pod="openshift-monitoring/monitoring-plugin-6675579648-kj9b2"
Mar 19 12:14:56.771622 master-0 kubenswrapper[31830]: I0319 12:14:56.770870 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 12:14:56.771622 master-0 kubenswrapper[31830]: I0319 12:14:56.771069 31830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 19 12:14:56.795654 master-0 kubenswrapper[31830]: I0319 12:14:56.795597 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lk9x9"
Mar 19 12:14:56.797378 master-0 kubenswrapper[31830]: I0319 12:14:56.797071 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-6675579648-kj9b2"
Mar 19 12:14:56.917575 master-0 kubenswrapper[31830]: I0319 12:14:56.917530 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-66b8ffb895-264cc"]
Mar 19 12:14:56.933263 master-0 kubenswrapper[31830]: W0319 12:14:56.933212 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32ddfe6f_9155_424c_979c_5b4cf426680c.slice/crio-676e08b12aa7e4e8efe2e85572bb81f0dbe1f6486baa69fc8a32291226ff8b4e WatchSource:0}: Error finding container 676e08b12aa7e4e8efe2e85572bb81f0dbe1f6486baa69fc8a32291226ff8b4e: Status 404 returned error can't find the container with id 676e08b12aa7e4e8efe2e85572bb81f0dbe1f6486baa69fc8a32291226ff8b4e
Mar 19 12:14:57.223518 master-0 kubenswrapper[31830]: I0319 12:14:57.223477 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-6675579648-kj9b2"]
Mar 19 12:14:57.235821 master-0 kubenswrapper[31830]: I0319 12:14:57.235743 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-66b8ffb895-264cc" event={"ID":"32ddfe6f-9155-424c-979c-5b4cf426680c","Type":"ContainerStarted","Data":"676e08b12aa7e4e8efe2e85572bb81f0dbe1f6486baa69fc8a32291226ff8b4e"}
Mar 19 12:14:57.237412 master-0 kubenswrapper[31830]: I0319 12:14:57.237369 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-6675579648-kj9b2" event={"ID":"0f7377b4-649e-496a-af31-69e2ebfccb36","Type":"ContainerStarted","Data":"a9be9820c2c632bfdb84a8a95d16cbcaafecd6b3310db0963aa399161f67f567"}
Mar 19 12:14:58.947454 master-0 kubenswrapper[31830]: I0319 12:14:58.947401 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89890698-dd48-486b-bd64-dc909aecd9e8-kube-api-access\") pod \"installer-3-master-0\" (UID: \"89890698-dd48-486b-bd64-dc909aecd9e8\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 19 12:14:58.948201 master-0 kubenswrapper[31830]: E0319 12:14:58.948051 31830 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 19 12:14:58.948201 master-0 kubenswrapper[31830]: E0319 12:14:58.948094 31830 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 19 12:14:58.948336 master-0 kubenswrapper[31830]: E0319 12:14:58.948205 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89890698-dd48-486b-bd64-dc909aecd9e8-kube-api-access podName:89890698-dd48-486b-bd64-dc909aecd9e8 nodeName:}" failed. No retries permitted until 2026-03-19 12:15:30.948136724 +0000 UTC m=+69.497097428 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/89890698-dd48-486b-bd64-dc909aecd9e8-kube-api-access") pod "installer-3-master-0" (UID: "89890698-dd48-486b-bd64-dc909aecd9e8") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 19 12:14:59.250586 master-0 kubenswrapper[31830]: I0319 12:14:59.250525 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-6675579648-kj9b2" event={"ID":"0f7377b4-649e-496a-af31-69e2ebfccb36","Type":"ContainerStarted","Data":"c2c7b9521cd5581df04f8710f59e97868965bc54825c407d4a4b6fb852cc6f0f"}
Mar 19 12:14:59.250851 master-0 kubenswrapper[31830]: I0319 12:14:59.250819 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-6675579648-kj9b2"
Mar 19 12:14:59.266235 master-0 kubenswrapper[31830]: I0319 12:14:59.266170 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-6675579648-kj9b2"
Mar 19 12:14:59.305698 master-0 kubenswrapper[31830]: I0319 12:14:59.303619 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-6675579648-kj9b2" podStartSLOduration=1.482744264 podStartE2EDuration="3.303599366s" podCreationTimestamp="2026-03-19 12:14:56 +0000 UTC" firstStartedPulling="2026-03-19 12:14:57.230028591 +0000 UTC m=+35.778989295" lastFinishedPulling="2026-03-19 12:14:59.050883693 +0000 UTC m=+37.599844397" observedRunningTime="2026-03-19 12:14:59.299912521 +0000 UTC m=+37.848873245" watchObservedRunningTime="2026-03-19 12:14:59.303599366 +0000 UTC m=+37.852560070"
Mar 19 12:15:03.649342 master-0 kubenswrapper[31830]: I0319 12:15:03.649290 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-69f4fb98cb-qvvqh"]
Mar 19 12:15:03.650415 master-0 kubenswrapper[31830]: I0319 12:15:03.650362 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-69f4fb98cb-qvvqh"
Mar 19 12:15:03.653015 master-0 kubenswrapper[31830]: I0319 12:15:03.652912 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-r8qg7"
Mar 19 12:15:03.653251 master-0 kubenswrapper[31830]: I0319 12:15:03.653188 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Mar 19 12:15:03.657318 master-0 kubenswrapper[31830]: I0319 12:15:03.657262 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Mar 19 12:15:03.657616 master-0 kubenswrapper[31830]: I0319 12:15:03.657527 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Mar 19 12:15:03.657690 master-0 kubenswrapper[31830]: I0319 12:15:03.657680 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Mar 19 12:15:03.658846 master-0 kubenswrapper[31830]: I0319 12:15:03.657833 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Mar 19 12:15:03.690821 master-0 kubenswrapper[31830]: I0319 12:15:03.690059 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69f4fb98cb-qvvqh"]
Mar 19 12:15:03.730682 master-0 kubenswrapper[31830]: I0319 12:15:03.729008 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-console-config\") pod \"console-69f4fb98cb-qvvqh\" (UID: \"be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26\") " pod="openshift-console/console-69f4fb98cb-qvvqh"
Mar 19 12:15:03.730682 master-0 kubenswrapper[31830]: I0319 12:15:03.729058 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwhrt\" (UniqueName: \"kubernetes.io/projected/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-kube-api-access-vwhrt\") pod \"console-69f4fb98cb-qvvqh\" (UID: \"be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26\") " pod="openshift-console/console-69f4fb98cb-qvvqh"
Mar 19 12:15:03.730682 master-0 kubenswrapper[31830]: I0319 12:15:03.729081 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-oauth-serving-cert\") pod \"console-69f4fb98cb-qvvqh\" (UID: \"be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26\") " pod="openshift-console/console-69f4fb98cb-qvvqh"
Mar 19 12:15:03.730682 master-0 kubenswrapper[31830]: I0319 12:15:03.729098 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-service-ca\") pod \"console-69f4fb98cb-qvvqh\" (UID: \"be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26\") " pod="openshift-console/console-69f4fb98cb-qvvqh"
Mar 19 12:15:03.730682 master-0 kubenswrapper[31830]: I0319 12:15:03.729121 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-console-oauth-config\") pod \"console-69f4fb98cb-qvvqh\" (UID: \"be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26\") " pod="openshift-console/console-69f4fb98cb-qvvqh"
Mar 19 12:15:03.730682 master-0 kubenswrapper[31830]: I0319 12:15:03.729151 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-console-serving-cert\") pod \"console-69f4fb98cb-qvvqh\" (UID: \"be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26\") " pod="openshift-console/console-69f4fb98cb-qvvqh"
Mar 19 12:15:03.831879 master-0 kubenswrapper[31830]: I0319 12:15:03.831818 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-console-config\") pod \"console-69f4fb98cb-qvvqh\" (UID: \"be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26\") " pod="openshift-console/console-69f4fb98cb-qvvqh"
Mar 19 12:15:03.832107 master-0 kubenswrapper[31830]: I0319 12:15:03.831916 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwhrt\" (UniqueName: \"kubernetes.io/projected/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-kube-api-access-vwhrt\") pod \"console-69f4fb98cb-qvvqh\" (UID: \"be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26\") " pod="openshift-console/console-69f4fb98cb-qvvqh"
Mar 19 12:15:03.832107 master-0 kubenswrapper[31830]: I0319 12:15:03.831968 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-oauth-serving-cert\") pod \"console-69f4fb98cb-qvvqh\" (UID: \"be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26\") " pod="openshift-console/console-69f4fb98cb-qvvqh"
Mar 19 12:15:03.832107 master-0 kubenswrapper[31830]: I0319 12:15:03.831998 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-service-ca\") pod \"console-69f4fb98cb-qvvqh\" (UID: \"be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26\") " pod="openshift-console/console-69f4fb98cb-qvvqh"
Mar 19 12:15:03.832107 master-0 kubenswrapper[31830]: I0319 12:15:03.832039 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-console-oauth-config\") pod \"console-69f4fb98cb-qvvqh\" (UID: \"be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26\") " pod="openshift-console/console-69f4fb98cb-qvvqh"
Mar 19 12:15:03.832232 master-0 kubenswrapper[31830]: I0319 12:15:03.832106 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-console-serving-cert\") pod \"console-69f4fb98cb-qvvqh\" (UID: \"be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26\") " pod="openshift-console/console-69f4fb98cb-qvvqh"
Mar 19 12:15:03.833163 master-0 kubenswrapper[31830]: I0319 12:15:03.833090 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-console-config\") pod \"console-69f4fb98cb-qvvqh\" (UID: \"be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26\") " pod="openshift-console/console-69f4fb98cb-qvvqh"
Mar 19 12:15:03.833775 master-0 kubenswrapper[31830]: I0319 12:15:03.833746 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-oauth-serving-cert\") pod \"console-69f4fb98cb-qvvqh\" (UID: \"be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26\") " pod="openshift-console/console-69f4fb98cb-qvvqh"
Mar 19 12:15:03.833878 master-0 kubenswrapper[31830]: I0319 12:15:03.833833 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-service-ca\") pod \"console-69f4fb98cb-qvvqh\" (UID: \"be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26\") " pod="openshift-console/console-69f4fb98cb-qvvqh"
Mar 19 12:15:03.836761 master-0 kubenswrapper[31830]: I0319 12:15:03.836711 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-console-oauth-config\") pod \"console-69f4fb98cb-qvvqh\" (UID: \"be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26\") " pod="openshift-console/console-69f4fb98cb-qvvqh"
Mar 19 12:15:03.845717 master-0 kubenswrapper[31830]: I0319 12:15:03.845671 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-console-serving-cert\") pod \"console-69f4fb98cb-qvvqh\" (UID: \"be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26\") " pod="openshift-console/console-69f4fb98cb-qvvqh"
Mar 19 12:15:03.867239 master-0 kubenswrapper[31830]: I0319 12:15:03.867179 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwhrt\" (UniqueName: \"kubernetes.io/projected/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-kube-api-access-vwhrt\") pod \"console-69f4fb98cb-qvvqh\" (UID: \"be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26\") " pod="openshift-console/console-69f4fb98cb-qvvqh"
Mar 19 12:15:03.976243 master-0 kubenswrapper[31830]: I0319 12:15:03.976187 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-69f4fb98cb-qvvqh"
Mar 19 12:15:04.386888 master-0 kubenswrapper[31830]: I0319 12:15:04.386770 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69f4fb98cb-qvvqh"]
Mar 19 12:15:04.400403 master-0 kubenswrapper[31830]: W0319 12:15:04.400368 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe06cb2e_ccfa_47fe_aaa9_5dbc83a40a26.slice/crio-f7b663ffb2bb48e4ecf06f9105fe20f74da8a02ae5301fc423a27a455c6d9d33 WatchSource:0}: Error finding container f7b663ffb2bb48e4ecf06f9105fe20f74da8a02ae5301fc423a27a455c6d9d33: Status 404 returned error can't find the container with id f7b663ffb2bb48e4ecf06f9105fe20f74da8a02ae5301fc423a27a455c6d9d33
Mar 19 12:15:05.307651 master-0 kubenswrapper[31830]: I0319 12:15:05.307578 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69f4fb98cb-qvvqh" event={"ID":"be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26","Type":"ContainerStarted","Data":"f7b663ffb2bb48e4ecf06f9105fe20f74da8a02ae5301fc423a27a455c6d9d33"}
Mar 19 12:15:09.376913 master-0 kubenswrapper[31830]: I0319 12:15:09.375637 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Mar 19 12:15:09.376913 master-0 kubenswrapper[31830]: I0319 12:15:09.376458 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 19 12:15:09.384378 master-0 kubenswrapper[31830]: I0319 12:15:09.384262 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 19 12:15:09.384630 master-0 kubenswrapper[31830]: I0319 12:15:09.384531 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-hhsz7"
Mar 19 12:15:09.425461 master-0 kubenswrapper[31830]: I0319 12:15:09.425289 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Mar 19 12:15:09.456699 master-0 kubenswrapper[31830]: I0319 12:15:09.456635 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0f93d242-a135-4284-8ace-704d0ae01afe-var-lock\") pod \"installer-4-master-0\" (UID: \"0f93d242-a135-4284-8ace-704d0ae01afe\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 19 12:15:09.456944 master-0 kubenswrapper[31830]: I0319 12:15:09.456733 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0f93d242-a135-4284-8ace-704d0ae01afe-kube-api-access\") pod \"installer-4-master-0\" (UID: \"0f93d242-a135-4284-8ace-704d0ae01afe\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 19 12:15:09.456944 master-0 kubenswrapper[31830]: I0319 12:15:09.456832 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0f93d242-a135-4284-8ace-704d0ae01afe-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"0f93d242-a135-4284-8ace-704d0ae01afe\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 19 12:15:09.557666 master-0 kubenswrapper[31830]: I0319 12:15:09.557593 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0f93d242-a135-4284-8ace-704d0ae01afe-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"0f93d242-a135-4284-8ace-704d0ae01afe\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 19 12:15:09.557901 master-0 kubenswrapper[31830]: I0319 12:15:09.557691 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0f93d242-a135-4284-8ace-704d0ae01afe-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"0f93d242-a135-4284-8ace-704d0ae01afe\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 19 12:15:09.557901 master-0 kubenswrapper[31830]: I0319 12:15:09.557737 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0f93d242-a135-4284-8ace-704d0ae01afe-var-lock\") pod \"installer-4-master-0\" (UID: \"0f93d242-a135-4284-8ace-704d0ae01afe\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 19 12:15:09.557901 master-0 kubenswrapper[31830]: I0319 12:15:09.557776 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0f93d242-a135-4284-8ace-704d0ae01afe-var-lock\") pod \"installer-4-master-0\" (UID: \"0f93d242-a135-4284-8ace-704d0ae01afe\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 19 12:15:09.557996 master-0 kubenswrapper[31830]: I0319 12:15:09.557900 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0f93d242-a135-4284-8ace-704d0ae01afe-kube-api-access\") pod \"installer-4-master-0\" (UID: \"0f93d242-a135-4284-8ace-704d0ae01afe\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 19 12:15:09.606502 master-0 kubenswrapper[31830]: I0319 12:15:09.606463 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0f93d242-a135-4284-8ace-704d0ae01afe-kube-api-access\") pod \"installer-4-master-0\" (UID: \"0f93d242-a135-4284-8ace-704d0ae01afe\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 19 12:15:09.756077 master-0 kubenswrapper[31830]: I0319 12:15:09.756014 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 19 12:15:10.895472 master-0 kubenswrapper[31830]: I0319 12:15:10.894708 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Mar 19 12:15:11.385638 master-0 kubenswrapper[31830]: I0319 12:15:11.385584 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"0f93d242-a135-4284-8ace-704d0ae01afe","Type":"ContainerStarted","Data":"3a3a60cd2b7a396d6592c417e998e5dfca5e79a2128530b38a38b211df4ef6b5"}
Mar 19 12:15:11.394144 master-0 kubenswrapper[31830]: I0319 12:15:11.394083 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69f4fb98cb-qvvqh" event={"ID":"be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26","Type":"ContainerStarted","Data":"f2706c58316b52da324f26d06ba5df2d4a83d1d14fe0c595714914865890e8db"}
Mar 19 12:15:11.412369 master-0 kubenswrapper[31830]: I0319 12:15:11.412238 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-4-master-0" podStartSLOduration=2.412217955 podStartE2EDuration="2.412217955s" podCreationTimestamp="2026-03-19 12:15:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:15:11.409388486 +0000 UTC m=+49.958349190" watchObservedRunningTime="2026-03-19 12:15:11.412217955 +0000 UTC m=+49.961178659"
Mar 19 12:15:11.447881 master-0 kubenswrapper[31830]: I0319 12:15:11.447772 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-69f4fb98cb-qvvqh" podStartSLOduration=1.86275372 podStartE2EDuration="8.447753934s" podCreationTimestamp="2026-03-19 12:15:03 +0000 UTC" firstStartedPulling="2026-03-19 12:15:04.402708891 +0000 UTC m=+42.951669595" lastFinishedPulling="2026-03-19 12:15:10.987709105 +0000 UTC m=+49.536669809" observedRunningTime="2026-03-19 12:15:11.444043058 +0000 UTC m=+49.993003762" watchObservedRunningTime="2026-03-19 12:15:11.447753934 +0000 UTC m=+49.996714638"
Mar 19 12:15:11.969177 master-0 kubenswrapper[31830]: I0319 12:15:11.969106 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cdddc6cb-q222c"]
Mar 19 12:15:11.970040 master-0 kubenswrapper[31830]: I0319 12:15:11.969387 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" podUID="3a6b082a-649b-43f6-8e24-cf222873fe39" containerName="controller-manager" containerID="cri-o://dc774cb792a9ef5e2c8edc274dec5d1dc05b08edfdb8c435ffa6ab475b3fa134" gracePeriod=30
Mar 19 12:15:11.985616 master-0 kubenswrapper[31830]: I0319 12:15:11.985551 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9"]
Mar 19 12:15:11.986055 master-0 kubenswrapper[31830]: I0319 12:15:11.985884 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9" podUID="da9becfb-a504-4ef7-92ed-cd2db439d5db" containerName="route-controller-manager" containerID="cri-o://2d813a15fdfae4a519455f4052abe2653657dc79015833917eccfbaa2776f015" gracePeriod=30
Mar 19 12:15:12.407581 master-0 kubenswrapper[31830]: I0319 12:15:12.407507 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"0f93d242-a135-4284-8ace-704d0ae01afe","Type":"ContainerStarted","Data":"9a7716b319816771ffd4fe58e7b69169d2bde0dfe3d923c188bbe075b1946984"}
Mar 19 12:15:12.409671 master-0 kubenswrapper[31830]: I0319 12:15:12.409584 31830 generic.go:334] "Generic (PLEG): container finished" podID="da9becfb-a504-4ef7-92ed-cd2db439d5db" containerID="2d813a15fdfae4a519455f4052abe2653657dc79015833917eccfbaa2776f015" exitCode=0
Mar 19 12:15:12.409779 master-0 kubenswrapper[31830]: I0319 12:15:12.409679 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9" event={"ID":"da9becfb-a504-4ef7-92ed-cd2db439d5db","Type":"ContainerDied","Data":"2d813a15fdfae4a519455f4052abe2653657dc79015833917eccfbaa2776f015"}
Mar 19 12:15:12.412223 master-0 kubenswrapper[31830]: I0319 12:15:12.412169 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-7cdddc6cb-q222c_3a6b082a-649b-43f6-8e24-cf222873fe39/controller-manager/2.log"
Mar 19 12:15:12.412223 master-0 kubenswrapper[31830]: I0319 12:15:12.412221 31830 generic.go:334] "Generic (PLEG): container finished" podID="3a6b082a-649b-43f6-8e24-cf222873fe39" containerID="dc774cb792a9ef5e2c8edc274dec5d1dc05b08edfdb8c435ffa6ab475b3fa134" exitCode=0
Mar 19 12:15:12.412511 master-0 kubenswrapper[31830]: I0319 12:15:12.412257 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" event={"ID":"3a6b082a-649b-43f6-8e24-cf222873fe39","Type":"ContainerDied","Data":"dc774cb792a9ef5e2c8edc274dec5d1dc05b08edfdb8c435ffa6ab475b3fa134"}
Mar 19 12:15:12.412511 master-0 kubenswrapper[31830]: I0319 12:15:12.412304 31830 scope.go:117] "RemoveContainer" containerID="09044fa3844b85995a99e406cd860f5e46e3c822f972902a1ed997f5be96ef8c"
Mar 19 12:15:12.470907 master-0 kubenswrapper[31830]: E0319 12:15:12.470782 31830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09044fa3844b85995a99e406cd860f5e46e3c822f972902a1ed997f5be96ef8c\": container with ID starting with 09044fa3844b85995a99e406cd860f5e46e3c822f972902a1ed997f5be96ef8c not found: ID does not exist" containerID="09044fa3844b85995a99e406cd860f5e46e3c822f972902a1ed997f5be96ef8c"
Mar 19 12:15:12.470907 master-0 kubenswrapper[31830]: I0319 12:15:12.470892 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c"
Mar 19 12:15:12.522952 master-0 kubenswrapper[31830]: I0319 12:15:12.522916 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9"
Mar 19 12:15:12.616677 master-0 kubenswrapper[31830]: I0319 12:15:12.616546 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da9becfb-a504-4ef7-92ed-cd2db439d5db-config\") pod \"da9becfb-a504-4ef7-92ed-cd2db439d5db\" (UID: \"da9becfb-a504-4ef7-92ed-cd2db439d5db\") "
Mar 19 12:15:12.616677 master-0 kubenswrapper[31830]: I0319 12:15:12.616672 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srbt4\" (UniqueName: \"kubernetes.io/projected/3a6b082a-649b-43f6-8e24-cf222873fe39-kube-api-access-srbt4\") pod \"3a6b082a-649b-43f6-8e24-cf222873fe39\" (UID: \"3a6b082a-649b-43f6-8e24-cf222873fe39\") "
Mar 19 12:15:12.617169 master-0 kubenswrapper[31830]: I0319 12:15:12.616712 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvzcn\" (UniqueName: \"kubernetes.io/projected/da9becfb-a504-4ef7-92ed-cd2db439d5db-kube-api-access-lvzcn\") pod \"da9becfb-a504-4ef7-92ed-cd2db439d5db\" (UID: \"da9becfb-a504-4ef7-92ed-cd2db439d5db\") "
Mar 19 12:15:12.617169 master-0 kubenswrapper[31830]: I0319 12:15:12.616874 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a6b082a-649b-43f6-8e24-cf222873fe39-serving-cert\") pod \"3a6b082a-649b-43f6-8e24-cf222873fe39\" (UID: \"3a6b082a-649b-43f6-8e24-cf222873fe39\") "
Mar 19 12:15:12.617169 master-0 kubenswrapper[31830]: I0319 12:15:12.616952 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da9becfb-a504-4ef7-92ed-cd2db439d5db-serving-cert\") pod \"da9becfb-a504-4ef7-92ed-cd2db439d5db\" (UID: \"da9becfb-a504-4ef7-92ed-cd2db439d5db\") "
Mar 19 12:15:12.617169 master-0 kubenswrapper[31830]: I0319 12:15:12.617010 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a6b082a-649b-43f6-8e24-cf222873fe39-client-ca\") pod \"3a6b082a-649b-43f6-8e24-cf222873fe39\" (UID: \"3a6b082a-649b-43f6-8e24-cf222873fe39\") "
Mar 19 12:15:12.617169 master-0 kubenswrapper[31830]: I0319 12:15:12.617087 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a6b082a-649b-43f6-8e24-cf222873fe39-config\") pod \"3a6b082a-649b-43f6-8e24-cf222873fe39\" (UID: \"3a6b082a-649b-43f6-8e24-cf222873fe39\") "
Mar 19 12:15:12.617169 master-0 kubenswrapper[31830]: I0319 12:15:12.617135 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3a6b082a-649b-43f6-8e24-cf222873fe39-proxy-ca-bundles\") pod \"3a6b082a-649b-43f6-8e24-cf222873fe39\" (UID: \"3a6b082a-649b-43f6-8e24-cf222873fe39\") "
Mar 19 12:15:12.618332 master-0 kubenswrapper[31830]: I0319 12:15:12.617196 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da9becfb-a504-4ef7-92ed-cd2db439d5db-client-ca\") pod \"da9becfb-a504-4ef7-92ed-cd2db439d5db\" (UID: \"da9becfb-a504-4ef7-92ed-cd2db439d5db\") "
Mar 19 12:15:12.618332 master-0 kubenswrapper[31830]: I0319 12:15:12.617575 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da9becfb-a504-4ef7-92ed-cd2db439d5db-config" (OuterVolumeSpecName: "config") pod "da9becfb-a504-4ef7-92ed-cd2db439d5db" (UID: "da9becfb-a504-4ef7-92ed-cd2db439d5db"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 12:15:12.618332 master-0 kubenswrapper[31830]: I0319 12:15:12.618085 31830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da9becfb-a504-4ef7-92ed-cd2db439d5db-config\") on node \"master-0\" DevicePath \"\""
Mar 19 12:15:12.618332 master-0 kubenswrapper[31830]: I0319 12:15:12.618189 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a6b082a-649b-43f6-8e24-cf222873fe39-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3a6b082a-649b-43f6-8e24-cf222873fe39" (UID: "3a6b082a-649b-43f6-8e24-cf222873fe39"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 12:15:12.619093 master-0 kubenswrapper[31830]: I0319 12:15:12.618583 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a6b082a-649b-43f6-8e24-cf222873fe39-config" (OuterVolumeSpecName: "config") pod "3a6b082a-649b-43f6-8e24-cf222873fe39" (UID: "3a6b082a-649b-43f6-8e24-cf222873fe39"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 12:15:12.619093 master-0 kubenswrapper[31830]: I0319 12:15:12.619010 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da9becfb-a504-4ef7-92ed-cd2db439d5db-client-ca" (OuterVolumeSpecName: "client-ca") pod "da9becfb-a504-4ef7-92ed-cd2db439d5db" (UID: "da9becfb-a504-4ef7-92ed-cd2db439d5db"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 12:15:12.619856 master-0 kubenswrapper[31830]: I0319 12:15:12.619811 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a6b082a-649b-43f6-8e24-cf222873fe39-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3a6b082a-649b-43f6-8e24-cf222873fe39" (UID: "3a6b082a-649b-43f6-8e24-cf222873fe39"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 12:15:12.620969 master-0 kubenswrapper[31830]: I0319 12:15:12.620928 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da9becfb-a504-4ef7-92ed-cd2db439d5db-kube-api-access-lvzcn" (OuterVolumeSpecName: "kube-api-access-lvzcn") pod "da9becfb-a504-4ef7-92ed-cd2db439d5db" (UID: "da9becfb-a504-4ef7-92ed-cd2db439d5db"). InnerVolumeSpecName "kube-api-access-lvzcn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 12:15:12.621371 master-0 kubenswrapper[31830]: I0319 12:15:12.621310 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a6b082a-649b-43f6-8e24-cf222873fe39-kube-api-access-srbt4" (OuterVolumeSpecName: "kube-api-access-srbt4") pod "3a6b082a-649b-43f6-8e24-cf222873fe39" (UID: "3a6b082a-649b-43f6-8e24-cf222873fe39"). InnerVolumeSpecName "kube-api-access-srbt4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 12:15:12.621780 master-0 kubenswrapper[31830]: I0319 12:15:12.621645 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da9becfb-a504-4ef7-92ed-cd2db439d5db-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "da9becfb-a504-4ef7-92ed-cd2db439d5db" (UID: "da9becfb-a504-4ef7-92ed-cd2db439d5db"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 12:15:12.622018 master-0 kubenswrapper[31830]: I0319 12:15:12.621974 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a6b082a-649b-43f6-8e24-cf222873fe39-client-ca" (OuterVolumeSpecName: "client-ca") pod "3a6b082a-649b-43f6-8e24-cf222873fe39" (UID: "3a6b082a-649b-43f6-8e24-cf222873fe39"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 12:15:12.720641 master-0 kubenswrapper[31830]: I0319 12:15:12.720584 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srbt4\" (UniqueName: \"kubernetes.io/projected/3a6b082a-649b-43f6-8e24-cf222873fe39-kube-api-access-srbt4\") on node \"master-0\" DevicePath \"\""
Mar 19 12:15:12.720641 master-0 kubenswrapper[31830]: I0319 12:15:12.720632 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvzcn\" (UniqueName: \"kubernetes.io/projected/da9becfb-a504-4ef7-92ed-cd2db439d5db-kube-api-access-lvzcn\") on node \"master-0\" DevicePath \"\""
Mar 19 12:15:12.720641 master-0 kubenswrapper[31830]: I0319 12:15:12.720647 31830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a6b082a-649b-43f6-8e24-cf222873fe39-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 19 12:15:12.720998 master-0 kubenswrapper[31830]: I0319 12:15:12.720663 31830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da9becfb-a504-4ef7-92ed-cd2db439d5db-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 19 12:15:12.720998 master-0 kubenswrapper[31830]: I0319 12:15:12.720678 31830 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a6b082a-649b-43f6-8e24-cf222873fe39-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 19 12:15:12.720998 master-0 kubenswrapper[31830]: I0319 12:15:12.720690 31830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a6b082a-649b-43f6-8e24-cf222873fe39-config\") on node \"master-0\" DevicePath \"\""
Mar 19 12:15:12.720998 master-0 kubenswrapper[31830]: I0319 12:15:12.720701 31830 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3a6b082a-649b-43f6-8e24-cf222873fe39-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 19 12:15:12.720998 master-0 kubenswrapper[31830]: I0319 12:15:12.720714 31830 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da9becfb-a504-4ef7-92ed-cd2db439d5db-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 19 12:15:12.768368 master-0 kubenswrapper[31830]: I0319 12:15:12.768289 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-695474f69-bz8b7"]
Mar 19 12:15:12.768856 master-0 kubenswrapper[31830]: E0319 12:15:12.768787 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a6b082a-649b-43f6-8e24-cf222873fe39" containerName="controller-manager"
Mar 19 12:15:12.768856 master-0 kubenswrapper[31830]: I0319 12:15:12.768838 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a6b082a-649b-43f6-8e24-cf222873fe39" containerName="controller-manager"
Mar 19 12:15:12.768962 master-0 kubenswrapper[31830]: E0319 12:15:12.768871 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da9becfb-a504-4ef7-92ed-cd2db439d5db" containerName="route-controller-manager"
Mar 19 12:15:12.768962 master-0 kubenswrapper[31830]: I0319 12:15:12.768901 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="da9becfb-a504-4ef7-92ed-cd2db439d5db" containerName="route-controller-manager"
Mar 19 12:15:12.768962 master-0 kubenswrapper[31830]: E0319 12:15:12.768920 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a6b082a-649b-43f6-8e24-cf222873fe39" containerName="controller-manager"
Mar 19 12:15:12.768962 master-0 kubenswrapper[31830]: I0319 12:15:12.768930 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a6b082a-649b-43f6-8e24-cf222873fe39" containerName="controller-manager"
Mar 19 12:15:12.769872 master-0 kubenswrapper[31830]: I0319 12:15:12.769767 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a6b082a-649b-43f6-8e24-cf222873fe39" containerName="controller-manager"
Mar 19 12:15:12.769872 master-0 kubenswrapper[31830]: I0319 12:15:12.769844 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="da9becfb-a504-4ef7-92ed-cd2db439d5db" containerName="route-controller-manager"
Mar 19 12:15:12.770824 master-0 kubenswrapper[31830]: I0319 12:15:12.770720 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-695474f69-bz8b7"
Mar 19 12:15:12.785009 master-0 kubenswrapper[31830]: I0319 12:15:12.784957 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Mar 19 12:15:12.808615 master-0 kubenswrapper[31830]: I0319 12:15:12.807998 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-695474f69-bz8b7"]
Mar 19 12:15:12.923313 master-0 kubenswrapper[31830]: I0319 12:15:12.923140 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6db87e99-89b9-4f97-b6ca-b236cc27b901-service-ca\") pod \"console-695474f69-bz8b7\" (UID: \"6db87e99-89b9-4f97-b6ca-b236cc27b901\") " pod="openshift-console/console-695474f69-bz8b7"
Mar 19 12:15:12.923313 master-0 kubenswrapper[31830]: I0319 12:15:12.923235 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkhsr\" (UniqueName: \"kubernetes.io/projected/6db87e99-89b9-4f97-b6ca-b236cc27b901-kube-api-access-zkhsr\") pod \"console-695474f69-bz8b7\" (UID: \"6db87e99-89b9-4f97-b6ca-b236cc27b901\") " pod="openshift-console/console-695474f69-bz8b7"
Mar 19 12:15:12.923313 master-0 kubenswrapper[31830]: I0319 12:15:12.923282 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6db87e99-89b9-4f97-b6ca-b236cc27b901-console-config\") pod \"console-695474f69-bz8b7\" (UID: \"6db87e99-89b9-4f97-b6ca-b236cc27b901\") " pod="openshift-console/console-695474f69-bz8b7"
Mar 19 12:15:12.923606 master-0 kubenswrapper[31830]: I0319 12:15:12.923323 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6db87e99-89b9-4f97-b6ca-b236cc27b901-console-serving-cert\") pod \"console-695474f69-bz8b7\" (UID: \"6db87e99-89b9-4f97-b6ca-b236cc27b901\") " pod="openshift-console/console-695474f69-bz8b7"
Mar 19 12:15:12.923606 master-0 kubenswrapper[31830]: I0319 12:15:12.923355 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6db87e99-89b9-4f97-b6ca-b236cc27b901-oauth-serving-cert\") pod \"console-695474f69-bz8b7\" (UID: \"6db87e99-89b9-4f97-b6ca-b236cc27b901\") " pod="openshift-console/console-695474f69-bz8b7"
Mar 19 12:15:12.923606 master-0 kubenswrapper[31830]: I0319 12:15:12.923542 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6db87e99-89b9-4f97-b6ca-b236cc27b901-trusted-ca-bundle\") pod \"console-695474f69-bz8b7\" (UID: \"6db87e99-89b9-4f97-b6ca-b236cc27b901\") " pod="openshift-console/console-695474f69-bz8b7"
Mar 19 12:15:12.923606 master-0 kubenswrapper[31830]: I0319 12:15:12.923592 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6db87e99-89b9-4f97-b6ca-b236cc27b901-console-oauth-config\") pod \"console-695474f69-bz8b7\" (UID: \"6db87e99-89b9-4f97-b6ca-b236cc27b901\") " pod="openshift-console/console-695474f69-bz8b7"
Mar 19 12:15:13.025232 master-0 kubenswrapper[31830]: I0319 12:15:13.025156 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6db87e99-89b9-4f97-b6ca-b236cc27b901-console-serving-cert\") pod \"console-695474f69-bz8b7\" (UID: \"6db87e99-89b9-4f97-b6ca-b236cc27b901\") " pod="openshift-console/console-695474f69-bz8b7"
Mar 19 12:15:13.025232 master-0 kubenswrapper[31830]: I0319 12:15:13.025227 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6db87e99-89b9-4f97-b6ca-b236cc27b901-oauth-serving-cert\") pod \"console-695474f69-bz8b7\" (UID: \"6db87e99-89b9-4f97-b6ca-b236cc27b901\") " pod="openshift-console/console-695474f69-bz8b7"
Mar 19 12:15:13.026042 master-0 kubenswrapper[31830]: I0319 12:15:13.025277 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6db87e99-89b9-4f97-b6ca-b236cc27b901-trusted-ca-bundle\") pod \"console-695474f69-bz8b7\" (UID: \"6db87e99-89b9-4f97-b6ca-b236cc27b901\") " pod="openshift-console/console-695474f69-bz8b7"
Mar 19 12:15:13.026042 master-0 kubenswrapper[31830]: I0319 12:15:13.025305 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6db87e99-89b9-4f97-b6ca-b236cc27b901-console-oauth-config\") pod \"console-695474f69-bz8b7\" (UID: \"6db87e99-89b9-4f97-b6ca-b236cc27b901\") " pod="openshift-console/console-695474f69-bz8b7"
Mar 19 12:15:13.026042 master-0 kubenswrapper[31830]: I0319 12:15:13.025384 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6db87e99-89b9-4f97-b6ca-b236cc27b901-service-ca\") pod \"console-695474f69-bz8b7\" (UID: \"6db87e99-89b9-4f97-b6ca-b236cc27b901\") " pod="openshift-console/console-695474f69-bz8b7"
Mar 19 12:15:13.026042 master-0 kubenswrapper[31830]: I0319 12:15:13.025413 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkhsr\" (UniqueName: \"kubernetes.io/projected/6db87e99-89b9-4f97-b6ca-b236cc27b901-kube-api-access-zkhsr\") pod \"console-695474f69-bz8b7\" (UID: \"6db87e99-89b9-4f97-b6ca-b236cc27b901\") " pod="openshift-console/console-695474f69-bz8b7"
Mar 19 12:15:13.026042 master-0 kubenswrapper[31830]: I0319 12:15:13.025443 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6db87e99-89b9-4f97-b6ca-b236cc27b901-console-config\") pod \"console-695474f69-bz8b7\" (UID: \"6db87e99-89b9-4f97-b6ca-b236cc27b901\") " pod="openshift-console/console-695474f69-bz8b7"
Mar 19 12:15:13.026359 master-0 kubenswrapper[31830]: I0319 12:15:13.026325 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6db87e99-89b9-4f97-b6ca-b236cc27b901-console-config\") pod \"console-695474f69-bz8b7\" (UID: \"6db87e99-89b9-4f97-b6ca-b236cc27b901\") " pod="openshift-console/console-695474f69-bz8b7"
Mar 19 12:15:13.027335 master-0 kubenswrapper[31830]: I0319 12:15:13.027261 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6db87e99-89b9-4f97-b6ca-b236cc27b901-service-ca\") pod \"console-695474f69-bz8b7\" (UID: \"6db87e99-89b9-4f97-b6ca-b236cc27b901\") " pod="openshift-console/console-695474f69-bz8b7"
Mar 19 12:15:13.027335 master-0 kubenswrapper[31830]: I0319 12:15:13.027280 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6db87e99-89b9-4f97-b6ca-b236cc27b901-oauth-serving-cert\") pod \"console-695474f69-bz8b7\" (UID: \"6db87e99-89b9-4f97-b6ca-b236cc27b901\") " pod="openshift-console/console-695474f69-bz8b7"
Mar 19 12:15:13.028033 master-0 kubenswrapper[31830]: I0319 12:15:13.028003 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6db87e99-89b9-4f97-b6ca-b236cc27b901-trusted-ca-bundle\") pod \"console-695474f69-bz8b7\" (UID: \"6db87e99-89b9-4f97-b6ca-b236cc27b901\") " pod="openshift-console/console-695474f69-bz8b7"
Mar 19 12:15:13.034027 master-0 kubenswrapper[31830]: I0319 12:15:13.033967 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6db87e99-89b9-4f97-b6ca-b236cc27b901-console-oauth-config\") pod \"console-695474f69-bz8b7\" (UID: \"6db87e99-89b9-4f97-b6ca-b236cc27b901\") " pod="openshift-console/console-695474f69-bz8b7"
Mar 19 12:15:13.034213 master-0 kubenswrapper[31830]: I0319 12:15:13.034036 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6db87e99-89b9-4f97-b6ca-b236cc27b901-console-serving-cert\") pod \"console-695474f69-bz8b7\" (UID: \"6db87e99-89b9-4f97-b6ca-b236cc27b901\") " pod="openshift-console/console-695474f69-bz8b7"
Mar 19 12:15:13.049086 master-0 kubenswrapper[31830]: I0319 12:15:13.049024 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkhsr\" (UniqueName:
\"kubernetes.io/projected/6db87e99-89b9-4f97-b6ca-b236cc27b901-kube-api-access-zkhsr\") pod \"console-695474f69-bz8b7\" (UID: \"6db87e99-89b9-4f97-b6ca-b236cc27b901\") " pod="openshift-console/console-695474f69-bz8b7" Mar 19 12:15:13.088326 master-0 kubenswrapper[31830]: I0319 12:15:13.088265 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-695474f69-bz8b7" Mar 19 12:15:13.421095 master-0 kubenswrapper[31830]: I0319 12:15:13.421060 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9" Mar 19 12:15:13.421095 master-0 kubenswrapper[31830]: I0319 12:15:13.421060 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9" event={"ID":"da9becfb-a504-4ef7-92ed-cd2db439d5db","Type":"ContainerDied","Data":"fa112877e7809f3added7e93999d2d52089456dfb6885e6498c6e53ce0c53ded"} Mar 19 12:15:13.421421 master-0 kubenswrapper[31830]: I0319 12:15:13.421140 31830 scope.go:117] "RemoveContainer" containerID="2d813a15fdfae4a519455f4052abe2653657dc79015833917eccfbaa2776f015" Mar 19 12:15:13.425623 master-0 kubenswrapper[31830]: I0319 12:15:13.424751 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" event={"ID":"3a6b082a-649b-43f6-8e24-cf222873fe39","Type":"ContainerDied","Data":"b31a84101a7e9f8571fe0abea4a9c0ac92d862991255d66df670219d8949bf71"} Mar 19 12:15:13.425623 master-0 kubenswrapper[31830]: I0319 12:15:13.424861 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cdddc6cb-q222c" Mar 19 12:15:13.450367 master-0 kubenswrapper[31830]: I0319 12:15:13.450291 31830 scope.go:117] "RemoveContainer" containerID="dc774cb792a9ef5e2c8edc274dec5d1dc05b08edfdb8c435ffa6ab475b3fa134" Mar 19 12:15:13.477984 master-0 kubenswrapper[31830]: I0319 12:15:13.477256 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cdddc6cb-q222c"] Mar 19 12:15:13.482990 master-0 kubenswrapper[31830]: I0319 12:15:13.481130 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7cdddc6cb-q222c"] Mar 19 12:15:13.500511 master-0 kubenswrapper[31830]: I0319 12:15:13.500428 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9"] Mar 19 12:15:13.506680 master-0 kubenswrapper[31830]: I0319 12:15:13.506631 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-695474f69-bz8b7"] Mar 19 12:15:13.508074 master-0 kubenswrapper[31830]: I0319 12:15:13.508032 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fdb67f9cf-vkmd9"] Mar 19 12:15:13.509217 master-0 kubenswrapper[31830]: W0319 12:15:13.509163 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6db87e99_89b9_4f97_b6ca_b236cc27b901.slice/crio-b87d7c8814265a3da987480e78bc686cde71c16189607387a5f22d78ca5c4660 WatchSource:0}: Error finding container b87d7c8814265a3da987480e78bc686cde71c16189607387a5f22d78ca5c4660: Status 404 returned error can't find the container with id 
b87d7c8814265a3da987480e78bc686cde71c16189607387a5f22d78ca5c4660 Mar 19 12:15:13.688752 master-0 kubenswrapper[31830]: I0319 12:15:13.688695 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a6b082a-649b-43f6-8e24-cf222873fe39" path="/var/lib/kubelet/pods/3a6b082a-649b-43f6-8e24-cf222873fe39/volumes" Mar 19 12:15:13.689398 master-0 kubenswrapper[31830]: I0319 12:15:13.689374 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da9becfb-a504-4ef7-92ed-cd2db439d5db" path="/var/lib/kubelet/pods/da9becfb-a504-4ef7-92ed-cd2db439d5db/volumes" Mar 19 12:15:13.977598 master-0 kubenswrapper[31830]: I0319 12:15:13.977495 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-69f4fb98cb-qvvqh" Mar 19 12:15:13.977598 master-0 kubenswrapper[31830]: I0319 12:15:13.977589 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-69f4fb98cb-qvvqh" Mar 19 12:15:13.980599 master-0 kubenswrapper[31830]: I0319 12:15:13.980497 31830 patch_prober.go:28] interesting pod/console-69f4fb98cb-qvvqh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Mar 19 12:15:13.980709 master-0 kubenswrapper[31830]: I0319 12:15:13.980605 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-69f4fb98cb-qvvqh" podUID="be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Mar 19 12:15:14.017032 master-0 kubenswrapper[31830]: I0319 12:15:14.013616 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-77879497c7-rjxcg"] Mar 19 12:15:14.018422 master-0 kubenswrapper[31830]: I0319 12:15:14.018375 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a6b082a-649b-43f6-8e24-cf222873fe39" containerName="controller-manager" Mar 19 12:15:14.019049 master-0 kubenswrapper[31830]: I0319 12:15:14.019020 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77879497c7-rjxcg" Mar 19 12:15:14.026660 master-0 kubenswrapper[31830]: I0319 12:15:14.021586 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56c55c8dd5-lcwld"] Mar 19 12:15:14.026660 master-0 kubenswrapper[31830]: I0319 12:15:14.024322 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-79l7s" Mar 19 12:15:14.026660 master-0 kubenswrapper[31830]: I0319 12:15:14.026539 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56c55c8dd5-lcwld" Mar 19 12:15:14.038461 master-0 kubenswrapper[31830]: I0319 12:15:14.038406 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 19 12:15:14.038709 master-0 kubenswrapper[31830]: I0319 12:15:14.038477 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 19 12:15:14.041410 master-0 kubenswrapper[31830]: I0319 12:15:14.038940 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 19 12:15:14.041410 master-0 kubenswrapper[31830]: I0319 12:15:14.038992 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 19 12:15:14.041410 master-0 kubenswrapper[31830]: I0319 12:15:14.039762 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 19 12:15:14.041691 master-0 kubenswrapper[31830]: I0319 12:15:14.041504 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 19 12:15:14.054351 master-0 kubenswrapper[31830]: I0319 12:15:14.048900 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77879497c7-rjxcg"] Mar 19 12:15:14.054351 master-0 kubenswrapper[31830]: I0319 12:15:14.049025 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56c55c8dd5-lcwld"] Mar 19 12:15:14.054351 master-0 kubenswrapper[31830]: I0319 12:15:14.051182 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-lcwzg" Mar 19 12:15:14.054351 master-0 kubenswrapper[31830]: I0319 12:15:14.051567 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 19 12:15:14.054351 master-0 kubenswrapper[31830]: I0319 12:15:14.052963 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 19 12:15:14.054351 master-0 kubenswrapper[31830]: I0319 12:15:14.053193 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 19 12:15:14.054351 master-0 kubenswrapper[31830]: I0319 12:15:14.053282 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 19 12:15:14.068816 master-0 kubenswrapper[31830]: I0319 12:15:14.068768 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 19 12:15:14.157157 master-0 kubenswrapper[31830]: I0319 12:15:14.157067 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/91b08ce3-b7ea-45d1-bc8a-970ff6713d9c-client-ca\") pod \"route-controller-manager-56c55c8dd5-lcwld\" (UID: \"91b08ce3-b7ea-45d1-bc8a-970ff6713d9c\") " pod="openshift-route-controller-manager/route-controller-manager-56c55c8dd5-lcwld" Mar 19 12:15:14.157157 master-0 kubenswrapper[31830]: I0319 12:15:14.157143 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-qw8n7\" (UniqueName: \"kubernetes.io/projected/9f280e58-4744-4c9b-88f3-bc7b844ec34e-kube-api-access-qw8n7\") pod \"controller-manager-77879497c7-rjxcg\" (UID: \"9f280e58-4744-4c9b-88f3-bc7b844ec34e\") " pod="openshift-controller-manager/controller-manager-77879497c7-rjxcg" Mar 19 12:15:14.157460 master-0 kubenswrapper[31830]: I0319 12:15:14.157182 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91b08ce3-b7ea-45d1-bc8a-970ff6713d9c-config\") pod \"route-controller-manager-56c55c8dd5-lcwld\" (UID: \"91b08ce3-b7ea-45d1-bc8a-970ff6713d9c\") " pod="openshift-route-controller-manager/route-controller-manager-56c55c8dd5-lcwld" Mar 19 12:15:14.157460 master-0 kubenswrapper[31830]: I0319 12:15:14.157203 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6v4b\" (UniqueName: \"kubernetes.io/projected/91b08ce3-b7ea-45d1-bc8a-970ff6713d9c-kube-api-access-j6v4b\") pod \"route-controller-manager-56c55c8dd5-lcwld\" (UID: \"91b08ce3-b7ea-45d1-bc8a-970ff6713d9c\") " pod="openshift-route-controller-manager/route-controller-manager-56c55c8dd5-lcwld" Mar 19 12:15:14.157460 master-0 kubenswrapper[31830]: I0319 12:15:14.157220 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f280e58-4744-4c9b-88f3-bc7b844ec34e-config\") pod \"controller-manager-77879497c7-rjxcg\" (UID: \"9f280e58-4744-4c9b-88f3-bc7b844ec34e\") " pod="openshift-controller-manager/controller-manager-77879497c7-rjxcg" Mar 19 12:15:14.157460 master-0 kubenswrapper[31830]: I0319 12:15:14.157268 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f280e58-4744-4c9b-88f3-bc7b844ec34e-client-ca\") pod \"controller-manager-77879497c7-rjxcg\" (UID: \"9f280e58-4744-4c9b-88f3-bc7b844ec34e\") " pod="openshift-controller-manager/controller-manager-77879497c7-rjxcg" Mar 19 12:15:14.157460 master-0 kubenswrapper[31830]: I0319 12:15:14.157288 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9f280e58-4744-4c9b-88f3-bc7b844ec34e-proxy-ca-bundles\") pod \"controller-manager-77879497c7-rjxcg\" (UID: \"9f280e58-4744-4c9b-88f3-bc7b844ec34e\") " pod="openshift-controller-manager/controller-manager-77879497c7-rjxcg" Mar 19 12:15:14.157663 master-0 kubenswrapper[31830]: I0319 12:15:14.157460 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f280e58-4744-4c9b-88f3-bc7b844ec34e-serving-cert\") pod \"controller-manager-77879497c7-rjxcg\" (UID: \"9f280e58-4744-4c9b-88f3-bc7b844ec34e\") " pod="openshift-controller-manager/controller-manager-77879497c7-rjxcg" Mar 19 12:15:14.157663 master-0 kubenswrapper[31830]: I0319 12:15:14.157647 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91b08ce3-b7ea-45d1-bc8a-970ff6713d9c-serving-cert\") pod \"route-controller-manager-56c55c8dd5-lcwld\" (UID: \"91b08ce3-b7ea-45d1-bc8a-970ff6713d9c\") " pod="openshift-route-controller-manager/route-controller-manager-56c55c8dd5-lcwld" Mar 19 12:15:14.259656 master-0 
kubenswrapper[31830]: I0319 12:15:14.259493 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw8n7\" (UniqueName: \"kubernetes.io/projected/9f280e58-4744-4c9b-88f3-bc7b844ec34e-kube-api-access-qw8n7\") pod \"controller-manager-77879497c7-rjxcg\" (UID: \"9f280e58-4744-4c9b-88f3-bc7b844ec34e\") " pod="openshift-controller-manager/controller-manager-77879497c7-rjxcg" Mar 19 12:15:14.260061 master-0 kubenswrapper[31830]: I0319 12:15:14.259979 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91b08ce3-b7ea-45d1-bc8a-970ff6713d9c-config\") pod \"route-controller-manager-56c55c8dd5-lcwld\" (UID: \"91b08ce3-b7ea-45d1-bc8a-970ff6713d9c\") " pod="openshift-route-controller-manager/route-controller-manager-56c55c8dd5-lcwld" Mar 19 12:15:14.260061 master-0 kubenswrapper[31830]: I0319 12:15:14.260023 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6v4b\" (UniqueName: \"kubernetes.io/projected/91b08ce3-b7ea-45d1-bc8a-970ff6713d9c-kube-api-access-j6v4b\") pod \"route-controller-manager-56c55c8dd5-lcwld\" (UID: \"91b08ce3-b7ea-45d1-bc8a-970ff6713d9c\") " pod="openshift-route-controller-manager/route-controller-manager-56c55c8dd5-lcwld" Mar 19 12:15:14.260740 master-0 kubenswrapper[31830]: I0319 12:15:14.260214 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f280e58-4744-4c9b-88f3-bc7b844ec34e-config\") pod \"controller-manager-77879497c7-rjxcg\" (UID: \"9f280e58-4744-4c9b-88f3-bc7b844ec34e\") " pod="openshift-controller-manager/controller-manager-77879497c7-rjxcg" Mar 19 12:15:14.260740 master-0 kubenswrapper[31830]: I0319 12:15:14.260312 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f280e58-4744-4c9b-88f3-bc7b844ec34e-client-ca\") pod \"controller-manager-77879497c7-rjxcg\" (UID: \"9f280e58-4744-4c9b-88f3-bc7b844ec34e\") " pod="openshift-controller-manager/controller-manager-77879497c7-rjxcg" Mar 19 12:15:14.260740 master-0 kubenswrapper[31830]: I0319 12:15:14.260330 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9f280e58-4744-4c9b-88f3-bc7b844ec34e-proxy-ca-bundles\") pod \"controller-manager-77879497c7-rjxcg\" (UID: \"9f280e58-4744-4c9b-88f3-bc7b844ec34e\") " pod="openshift-controller-manager/controller-manager-77879497c7-rjxcg" Mar 19 12:15:14.260740 master-0 kubenswrapper[31830]: I0319 12:15:14.260371 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f280e58-4744-4c9b-88f3-bc7b844ec34e-serving-cert\") pod \"controller-manager-77879497c7-rjxcg\" (UID: \"9f280e58-4744-4c9b-88f3-bc7b844ec34e\") " pod="openshift-controller-manager/controller-manager-77879497c7-rjxcg" Mar 19 12:15:14.260740 master-0 kubenswrapper[31830]: I0319 12:15:14.260465 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91b08ce3-b7ea-45d1-bc8a-970ff6713d9c-serving-cert\") pod \"route-controller-manager-56c55c8dd5-lcwld\" (UID: \"91b08ce3-b7ea-45d1-bc8a-970ff6713d9c\") " pod="openshift-route-controller-manager/route-controller-manager-56c55c8dd5-lcwld" Mar 19 12:15:14.262917 master-0 kubenswrapper[31830]: I0319 
12:15:14.260623 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/91b08ce3-b7ea-45d1-bc8a-970ff6713d9c-client-ca\") pod \"route-controller-manager-56c55c8dd5-lcwld\" (UID: \"91b08ce3-b7ea-45d1-bc8a-970ff6713d9c\") " pod="openshift-route-controller-manager/route-controller-manager-56c55c8dd5-lcwld" Mar 19 12:15:14.262917 master-0 kubenswrapper[31830]: I0319 12:15:14.261312 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91b08ce3-b7ea-45d1-bc8a-970ff6713d9c-config\") pod \"route-controller-manager-56c55c8dd5-lcwld\" (UID: \"91b08ce3-b7ea-45d1-bc8a-970ff6713d9c\") " pod="openshift-route-controller-manager/route-controller-manager-56c55c8dd5-lcwld" Mar 19 12:15:14.262917 master-0 kubenswrapper[31830]: I0319 12:15:14.262329 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f280e58-4744-4c9b-88f3-bc7b844ec34e-client-ca\") pod \"controller-manager-77879497c7-rjxcg\" (UID: \"9f280e58-4744-4c9b-88f3-bc7b844ec34e\") " pod="openshift-controller-manager/controller-manager-77879497c7-rjxcg" Mar 19 12:15:14.264386 master-0 kubenswrapper[31830]: I0319 12:15:14.264131 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f280e58-4744-4c9b-88f3-bc7b844ec34e-config\") pod \"controller-manager-77879497c7-rjxcg\" (UID: \"9f280e58-4744-4c9b-88f3-bc7b844ec34e\") " pod="openshift-controller-manager/controller-manager-77879497c7-rjxcg" Mar 19 12:15:14.264386 master-0 kubenswrapper[31830]: I0319 12:15:14.264341 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9f280e58-4744-4c9b-88f3-bc7b844ec34e-proxy-ca-bundles\") pod \"controller-manager-77879497c7-rjxcg\" (UID: \"9f280e58-4744-4c9b-88f3-bc7b844ec34e\") " pod="openshift-controller-manager/controller-manager-77879497c7-rjxcg" Mar 19 12:15:14.264881 master-0 kubenswrapper[31830]: I0319 12:15:14.264779 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f280e58-4744-4c9b-88f3-bc7b844ec34e-serving-cert\") pod \"controller-manager-77879497c7-rjxcg\" (UID: \"9f280e58-4744-4c9b-88f3-bc7b844ec34e\") " pod="openshift-controller-manager/controller-manager-77879497c7-rjxcg" Mar 19 12:15:14.265766 master-0 kubenswrapper[31830]: I0319 12:15:14.265727 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91b08ce3-b7ea-45d1-bc8a-970ff6713d9c-serving-cert\") pod \"route-controller-manager-56c55c8dd5-lcwld\" (UID: \"91b08ce3-b7ea-45d1-bc8a-970ff6713d9c\") " pod="openshift-route-controller-manager/route-controller-manager-56c55c8dd5-lcwld" Mar 19 12:15:14.273472 master-0 kubenswrapper[31830]: I0319 12:15:14.273431 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/91b08ce3-b7ea-45d1-bc8a-970ff6713d9c-client-ca\") pod \"route-controller-manager-56c55c8dd5-lcwld\" (UID: \"91b08ce3-b7ea-45d1-bc8a-970ff6713d9c\") " pod="openshift-route-controller-manager/route-controller-manager-56c55c8dd5-lcwld" Mar 19 12:15:14.276413 master-0 kubenswrapper[31830]: I0319 12:15:14.276375 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw8n7\" 
(UniqueName: \"kubernetes.io/projected/9f280e58-4744-4c9b-88f3-bc7b844ec34e-kube-api-access-qw8n7\") pod \"controller-manager-77879497c7-rjxcg\" (UID: \"9f280e58-4744-4c9b-88f3-bc7b844ec34e\") " pod="openshift-controller-manager/controller-manager-77879497c7-rjxcg" Mar 19 12:15:14.281463 master-0 kubenswrapper[31830]: I0319 12:15:14.281426 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6v4b\" (UniqueName: \"kubernetes.io/projected/91b08ce3-b7ea-45d1-bc8a-970ff6713d9c-kube-api-access-j6v4b\") pod \"route-controller-manager-56c55c8dd5-lcwld\" (UID: \"91b08ce3-b7ea-45d1-bc8a-970ff6713d9c\") " pod="openshift-route-controller-manager/route-controller-manager-56c55c8dd5-lcwld" Mar 19 12:15:14.365559 master-0 kubenswrapper[31830]: I0319 12:15:14.365393 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77879497c7-rjxcg" Mar 19 12:15:14.380357 master-0 kubenswrapper[31830]: I0319 12:15:14.380242 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56c55c8dd5-lcwld" Mar 19 12:15:14.447549 master-0 kubenswrapper[31830]: I0319 12:15:14.446065 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-695474f69-bz8b7" event={"ID":"6db87e99-89b9-4f97-b6ca-b236cc27b901","Type":"ContainerStarted","Data":"f8a329d8e4cd3c8277440511f6592f1a504a1d602403e775fa35cb53dafb9bf0"} Mar 19 12:15:14.447549 master-0 kubenswrapper[31830]: I0319 12:15:14.446155 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-695474f69-bz8b7" event={"ID":"6db87e99-89b9-4f97-b6ca-b236cc27b901","Type":"ContainerStarted","Data":"b87d7c8814265a3da987480e78bc686cde71c16189607387a5f22d78ca5c4660"} Mar 19 12:15:14.476254 master-0 kubenswrapper[31830]: I0319 12:15:14.475024 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-695474f69-bz8b7" podStartSLOduration=2.474990447 podStartE2EDuration="2.474990447s" podCreationTimestamp="2026-03-19 12:15:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:15:14.472309613 +0000 UTC m=+53.021270327" watchObservedRunningTime="2026-03-19 12:15:14.474990447 +0000 UTC m=+53.023951171" Mar 19 12:15:14.785034 master-0 kubenswrapper[31830]: I0319 12:15:14.785005 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56c55c8dd5-lcwld"] Mar 19 12:15:14.789671 master-0 kubenswrapper[31830]: W0319 12:15:14.789643 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91b08ce3_b7ea_45d1_bc8a_970ff6713d9c.slice/crio-d5b7597727683461dfb4ef4b58031aaddf5fc92e365c7ae85201a7b4fce6ec6a WatchSource:0}: Error finding container d5b7597727683461dfb4ef4b58031aaddf5fc92e365c7ae85201a7b4fce6ec6a: Status 404 returned error can't find the container with id d5b7597727683461dfb4ef4b58031aaddf5fc92e365c7ae85201a7b4fce6ec6a Mar 19 12:15:14.843037 master-0 kubenswrapper[31830]: I0319 12:15:14.842972 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77879497c7-rjxcg"] Mar 19 12:15:14.853995 master-0 kubenswrapper[31830]: W0319 12:15:14.853930 31830 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f280e58_4744_4c9b_88f3_bc7b844ec34e.slice/crio-9790f1c626e08a8d6f9acb5c2af31caffc1f3e59ff09f998ddb434bcb0805498 WatchSource:0}: Error finding container 9790f1c626e08a8d6f9acb5c2af31caffc1f3e59ff09f998ddb434bcb0805498: Status 404 returned error can't find the container with id 9790f1c626e08a8d6f9acb5c2af31caffc1f3e59ff09f998ddb434bcb0805498 Mar 19 12:15:15.464442 master-0 kubenswrapper[31830]: I0319 12:15:15.464383 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56c55c8dd5-lcwld" event={"ID":"91b08ce3-b7ea-45d1-bc8a-970ff6713d9c","Type":"ContainerStarted","Data":"a526c559fbbf41a66e98f2e378c06184d5dc82fd1e3e945d4833e0380ac1060f"} Mar 19 12:15:15.464442 master-0 kubenswrapper[31830]: I0319 12:15:15.464433 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56c55c8dd5-lcwld" event={"ID":"91b08ce3-b7ea-45d1-bc8a-970ff6713d9c","Type":"ContainerStarted","Data":"d5b7597727683461dfb4ef4b58031aaddf5fc92e365c7ae85201a7b4fce6ec6a"} Mar 19 12:15:15.465050 master-0 kubenswrapper[31830]: I0319 12:15:15.464963 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-56c55c8dd5-lcwld" Mar 19 12:15:15.466572 master-0 kubenswrapper[31830]: I0319 12:15:15.466524 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77879497c7-rjxcg" event={"ID":"9f280e58-4744-4c9b-88f3-bc7b844ec34e","Type":"ContainerStarted","Data":"c445bc11af2d7a480dda2bb6741fdbc6e05d8c7ed60594b7df1205d31f197738"} Mar 19 12:15:15.466572 master-0 kubenswrapper[31830]: I0319 12:15:15.466560 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77879497c7-rjxcg" event={"ID":"9f280e58-4744-4c9b-88f3-bc7b844ec34e","Type":"ContainerStarted","Data":"9790f1c626e08a8d6f9acb5c2af31caffc1f3e59ff09f998ddb434bcb0805498"} Mar 19 12:15:15.466853 master-0 kubenswrapper[31830]: I0319 12:15:15.466833 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-77879497c7-rjxcg" Mar 19 12:15:15.471296 master-0 kubenswrapper[31830]: I0319 12:15:15.471245 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-77879497c7-rjxcg" Mar 19 12:15:15.471706 master-0 kubenswrapper[31830]: I0319 12:15:15.471680 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-56c55c8dd5-lcwld" Mar 19 12:15:15.488874 master-0 kubenswrapper[31830]: I0319 12:15:15.487916 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-56c55c8dd5-lcwld" podStartSLOduration=3.487893343 podStartE2EDuration="3.487893343s" podCreationTimestamp="2026-03-19 12:15:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:15:15.485382285 +0000 UTC m=+54.034343019" watchObservedRunningTime="2026-03-19 12:15:15.487893343 +0000 UTC m=+54.036854047" Mar 19 12:15:15.509823 master-0 kubenswrapper[31830]: I0319 12:15:15.507088 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-77879497c7-rjxcg" podStartSLOduration=4.507071602 podStartE2EDuration="4.507071602s" podCreationTimestamp="2026-03-19 12:15:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:15:15.502846171 +0000 UTC m=+54.051806875" watchObservedRunningTime="2026-03-19 12:15:15.507071602 +0000 UTC m=+54.056032306" Mar 19 12:15:21.668569 master-0 kubenswrapper[31830]: I0319 12:15:21.668520 31830 scope.go:117] "RemoveContainer" containerID="4eb7482c86a1b5f9e745f031e830bded6c37fd855abcbff4d6d73294bfadb247" Mar 19 12:15:23.088903 master-0 kubenswrapper[31830]: I0319 12:15:23.088823 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-695474f69-bz8b7" Mar 19 12:15:23.088903 master-0 kubenswrapper[31830]: I0319 12:15:23.088899 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-695474f69-bz8b7" Mar 19 12:15:23.090684 master-0 kubenswrapper[31830]: I0319 12:15:23.090635 31830 patch_prober.go:28] interesting pod/console-695474f69-bz8b7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body= Mar 19 12:15:23.090899 master-0 kubenswrapper[31830]: I0319 12:15:23.090846 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-695474f69-bz8b7" podUID="6db87e99-89b9-4f97-b6ca-b236cc27b901" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" Mar 19 12:15:23.977604 master-0 kubenswrapper[31830]: I0319 12:15:23.977553 31830 patch_prober.go:28] interesting pod/console-69f4fb98cb-qvvqh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Mar 19 12:15:23.977864 master-0 kubenswrapper[31830]: I0319 12:15:23.977619 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-69f4fb98cb-qvvqh" podUID="be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Mar 19 12:15:30.970745 master-0 kubenswrapper[31830]: I0319 12:15:30.970671 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89890698-dd48-486b-bd64-dc909aecd9e8-kube-api-access\") pod \"installer-3-master-0\" (UID: \"89890698-dd48-486b-bd64-dc909aecd9e8\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 19 12:15:30.974312 master-0 kubenswrapper[31830]: I0319 12:15:30.974268 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89890698-dd48-486b-bd64-dc909aecd9e8-kube-api-access\") pod \"installer-3-master-0\" (UID: \"89890698-dd48-486b-bd64-dc909aecd9e8\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 19 12:15:31.072595 master-0 kubenswrapper[31830]: I0319 12:15:31.072177 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89890698-dd48-486b-bd64-dc909aecd9e8-kube-api-access\") pod 
\"89890698-dd48-486b-bd64-dc909aecd9e8\" (UID: \"89890698-dd48-486b-bd64-dc909aecd9e8\") " Mar 19 12:15:31.074886 master-0 kubenswrapper[31830]: I0319 12:15:31.074679 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89890698-dd48-486b-bd64-dc909aecd9e8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "89890698-dd48-486b-bd64-dc909aecd9e8" (UID: "89890698-dd48-486b-bd64-dc909aecd9e8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:15:31.174556 master-0 kubenswrapper[31830]: I0319 12:15:31.174500 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89890698-dd48-486b-bd64-dc909aecd9e8-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 19 12:15:33.090001 master-0 kubenswrapper[31830]: I0319 12:15:33.089912 31830 patch_prober.go:28] interesting pod/console-695474f69-bz8b7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body= Mar 19 12:15:33.090547 master-0 kubenswrapper[31830]: I0319 12:15:33.090013 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-695474f69-bz8b7" podUID="6db87e99-89b9-4f97-b6ca-b236cc27b901" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" Mar 19 12:15:33.978391 master-0 kubenswrapper[31830]: I0319 12:15:33.978263 31830 patch_prober.go:28] interesting pod/console-69f4fb98cb-qvvqh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Mar 19 12:15:33.978391 master-0 kubenswrapper[31830]: I0319 12:15:33.978344 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-69f4fb98cb-qvvqh" podUID="be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Mar 19 12:15:34.655601 master-0 kubenswrapper[31830]: I0319 12:15:34.655045 31830 scope.go:117] "RemoveContainer" containerID="f347ebf4af2e430c7010deb32f74eaaa375be42bd1cb0fd78e647b0e4fd96480" Mar 19 12:15:34.686147 master-0 kubenswrapper[31830]: I0319 12:15:34.686078 31830 scope.go:117] "RemoveContainer" containerID="95a5e59caf12dcb834fa10b5b5af9755159f99a81152a1ebbfb9f9785ea5edff" Mar 19 12:15:37.648573 master-0 kubenswrapper[31830]: I0319 12:15:37.648519 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-66b8ffb895-264cc" event={"ID":"32ddfe6f-9155-424c-979c-5b4cf426680c","Type":"ContainerStarted","Data":"5520ffce8912b4d3bb38d50c5accdfc69fd5b70a291a96f1a1ae110d4329d105"} Mar 19 12:15:37.649178 master-0 kubenswrapper[31830]: I0319 12:15:37.648782 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-66b8ffb895-264cc" Mar 19 12:15:37.655658 master-0 kubenswrapper[31830]: I0319 12:15:37.655584 31830 patch_prober.go:28] interesting pod/downloads-66b8ffb895-264cc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.89:8080/\": dial tcp 10.128.0.89:8080: connect: connection refused" start-of-body= Mar 19 
12:15:37.655770 master-0 kubenswrapper[31830]: I0319 12:15:37.655693 31830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-66b8ffb895-264cc" podUID="32ddfe6f-9155-424c-979c-5b4cf426680c" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.89:8080/\": dial tcp 10.128.0.89:8080: connect: connection refused" Mar 19 12:15:37.754885 master-0 kubenswrapper[31830]: I0319 12:15:37.754767 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-66b8ffb895-264cc" podStartSLOduration=1.907547522 podStartE2EDuration="41.754739951s" podCreationTimestamp="2026-03-19 12:14:56 +0000 UTC" firstStartedPulling="2026-03-19 12:14:56.93578069 +0000 UTC m=+35.484741394" lastFinishedPulling="2026-03-19 12:15:36.782973119 +0000 UTC m=+75.331933823" observedRunningTime="2026-03-19 12:15:37.754461062 +0000 UTC m=+76.303421766" watchObservedRunningTime="2026-03-19 12:15:37.754739951 +0000 UTC m=+76.303700675" Mar 19 12:15:38.656159 master-0 kubenswrapper[31830]: I0319 12:15:38.656116 31830 patch_prober.go:28] interesting pod/downloads-66b8ffb895-264cc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.89:8080/\": dial tcp 10.128.0.89:8080: connect: connection refused" start-of-body= Mar 19 12:15:38.656159 master-0 kubenswrapper[31830]: I0319 12:15:38.656161 31830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-66b8ffb895-264cc" podUID="32ddfe6f-9155-424c-979c-5b4cf426680c" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.89:8080/\": dial tcp 10.128.0.89:8080: connect: connection refused" Mar 19 12:15:43.089842 master-0 kubenswrapper[31830]: I0319 12:15:43.089758 31830 patch_prober.go:28] interesting pod/console-695474f69-bz8b7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body= Mar 19 12:15:43.090470 master-0 kubenswrapper[31830]: I0319 12:15:43.089885 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-695474f69-bz8b7" podUID="6db87e99-89b9-4f97-b6ca-b236cc27b901" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" Mar 19 12:15:43.978255 master-0 kubenswrapper[31830]: I0319 12:15:43.978189 31830 patch_prober.go:28] interesting pod/console-69f4fb98cb-qvvqh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Mar 19 12:15:43.978521 master-0 kubenswrapper[31830]: I0319 12:15:43.978259 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-69f4fb98cb-qvvqh" podUID="be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Mar 19 12:15:46.520081 master-0 kubenswrapper[31830]: I0319 12:15:46.519965 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-66b8ffb895-264cc" Mar 19 12:15:50.143198 master-0 kubenswrapper[31830]: I0319 12:15:50.143123 31830 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 19 12:15:50.144875 master-0 kubenswrapper[31830]: I0319 12:15:50.144348 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:15:50.145117 master-0 kubenswrapper[31830]: I0319 12:15:50.145055 31830 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 19 12:15:50.145440 master-0 kubenswrapper[31830]: I0319 12:15:50.145390 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver" containerID="cri-o://49208618a1a605bcca0e7a0e4a618f3e1b277ea51f314ec7cb82b073d6261eb8" gracePeriod=15 Mar 19 12:15:50.145677 master-0 kubenswrapper[31830]: I0319 12:15:50.145464 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints" containerID="cri-o://85ac3bdc63293b181e3d779f2c3dd340478df86a724f42e86c25f67c2e97c503" gracePeriod=15 Mar 19 12:15:50.145677 master-0 kubenswrapper[31830]: I0319 12:15:50.145513 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://8721a1bcca3e6b8ad33aab266086c8fc37d6d5fcfd1c38b4b527628964b85d3a" gracePeriod=15 Mar 19 12:15:50.145677 master-0 kubenswrapper[31830]: I0319 12:15:50.145477 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://0b5ac637490a34aebef118fd54fa4f10821c561174d2eea7d5ba728bc39d30a3" gracePeriod=15 Mar 19 12:15:50.145677 master-0 kubenswrapper[31830]: I0319 12:15:50.145475 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-syncer" containerID="cri-o://565f9c73fa36f45fb9cd49587a2aa91819aea7c5a1fe5e602d37b0a519ad8eea" gracePeriod=15 Mar 19 12:15:50.147689 master-0 kubenswrapper[31830]: I0319 12:15:50.147218 31830 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 19 12:15:50.147689 master-0 kubenswrapper[31830]: E0319 12:15:50.147462 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="setup" Mar 19 12:15:50.147689 master-0 kubenswrapper[31830]: I0319 12:15:50.147477 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="setup" Mar 19 12:15:50.147689 master-0 kubenswrapper[31830]: E0319 12:15:50.147501 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver" Mar 19 12:15:50.147689 master-0 kubenswrapper[31830]: I0319 12:15:50.147510 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver" Mar 19 12:15:50.147689 master-0 kubenswrapper[31830]: E0319 12:15:50.147533 31830 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-insecure-readyz" Mar 19 12:15:50.147689 master-0 kubenswrapper[31830]: I0319 12:15:50.147542 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-insecure-readyz" Mar 19 12:15:50.147689 master-0 kubenswrapper[31830]: E0319 12:15:50.147562 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-syncer" Mar 19 12:15:50.147689 master-0 kubenswrapper[31830]: I0319 12:15:50.147572 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-syncer" Mar 19 12:15:50.147689 master-0 kubenswrapper[31830]: E0319 12:15:50.147593 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints" Mar 19 12:15:50.147689 master-0 kubenswrapper[31830]: I0319 12:15:50.147602 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints" Mar 19 12:15:50.147689 master-0 kubenswrapper[31830]: E0319 12:15:50.147612 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-regeneration-controller" Mar 19 12:15:50.147689 master-0 kubenswrapper[31830]: I0319 12:15:50.147623 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-regeneration-controller" Mar 19 12:15:50.148735 master-0 kubenswrapper[31830]: I0319 12:15:50.147785 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-regeneration-controller" Mar 19 12:15:50.148735 master-0 kubenswrapper[31830]: I0319 12:15:50.147823 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-insecure-readyz" Mar 19 12:15:50.148735 master-0 kubenswrapper[31830]: I0319 12:15:50.147837 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver" Mar 19 12:15:50.148735 master-0 kubenswrapper[31830]: I0319 12:15:50.147856 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints" Mar 19 12:15:50.148735 master-0 kubenswrapper[31830]: I0319 12:15:50.147870 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-syncer" Mar 19 12:15:50.148735 master-0 kubenswrapper[31830]: I0319 12:15:50.147890 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="setup" Mar 19 12:15:50.287582 master-0 kubenswrapper[31830]: I0319 12:15:50.287374 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:15:50.287582 master-0 
kubenswrapper[31830]: I0319 12:15:50.287473 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 19 12:15:50.287582 master-0 kubenswrapper[31830]: I0319 12:15:50.287497 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 19 12:15:50.287582 master-0 kubenswrapper[31830]: I0319 12:15:50.287546 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 12:15:50.287930 master-0 kubenswrapper[31830]: I0319 12:15:50.287743 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 12:15:50.288069 master-0 kubenswrapper[31830]: I0319 12:15:50.288000 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 19 12:15:50.288069 master-0 kubenswrapper[31830]: I0319 12:15:50.288062 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 19 12:15:50.288183 master-0 kubenswrapper[31830]: I0319 12:15:50.288133 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 12:15:50.389702 master-0 kubenswrapper[31830]: I0319 12:15:50.389606 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 19 12:15:50.389702 master-0 kubenswrapper[31830]: I0319 12:15:50.389693 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 19 12:15:50.389702 master-0 kubenswrapper[31830]: I0319 12:15:50.389715 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 19 12:15:50.390177 master-0 kubenswrapper[31830]: I0319 12:15:50.389744 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 12:15:50.390177 master-0 kubenswrapper[31830]: I0319 12:15:50.389788 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 12:15:50.390177 master-0 kubenswrapper[31830]: I0319 12:15:50.389850 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 19 12:15:50.390177 master-0 kubenswrapper[31830]: I0319 12:15:50.389870 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 19 12:15:50.390177 master-0 kubenswrapper[31830]: I0319 12:15:50.389937 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 19 12:15:50.390177 master-0 kubenswrapper[31830]: I0319 12:15:50.390103 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 19 12:15:50.390845 master-0 kubenswrapper[31830]: I0319 12:15:50.390216 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 19 12:15:50.390845 master-0 kubenswrapper[31830]: I0319 12:15:50.390258 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 12:15:50.390845 master-0 kubenswrapper[31830]: I0319 12:15:50.390162 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 19 12:15:50.390845 master-0 kubenswrapper[31830]: I0319 12:15:50.390299 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 19 12:15:50.390845 master-0 kubenswrapper[31830]: I0319 12:15:50.390362 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 12:15:50.390845 master-0 kubenswrapper[31830]: I0319 12:15:50.390389 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 12:15:50.390845 master-0 kubenswrapper[31830]: I0319 12:15:50.390449 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 12:15:50.769963 master-0 kubenswrapper[31830]: I0319 12:15:50.769873 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-cert-syncer/0.log"
Mar 19 12:15:50.770603 master-0 kubenswrapper[31830]: I0319 12:15:50.770507 31830 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="85ac3bdc63293b181e3d779f2c3dd340478df86a724f42e86c25f67c2e97c503" exitCode=0
Mar 19 12:15:50.770603 master-0 kubenswrapper[31830]: I0319 12:15:50.770533 31830 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="0b5ac637490a34aebef118fd54fa4f10821c561174d2eea7d5ba728bc39d30a3" exitCode=0
Mar 19 12:15:50.770603 master-0 kubenswrapper[31830]: I0319 12:15:50.770544 31830 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="565f9c73fa36f45fb9cd49587a2aa91819aea7c5a1fe5e602d37b0a519ad8eea" exitCode=2
Mar 19 12:15:51.118594 master-0 kubenswrapper[31830]: I0319 12:15:51.117983 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 19 12:15:51.118594 master-0 kubenswrapper[31830]: I0319 12:15:51.118029 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 19 12:15:51.162831 master-0 kubenswrapper[31830]: W0319 12:15:51.162763 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16fb4ea7f83036d9c6adf3454fc7e9db.slice/crio-ebf5c99cc653a0f7c5246e6151f0e023e2e46ca27fb08b372ca8bb81c05e8f54 WatchSource:0}: Error finding container ebf5c99cc653a0f7c5246e6151f0e023e2e46ca27fb08b372ca8bb81c05e8f54: Status 404 returned error can't find the container with id ebf5c99cc653a0f7c5246e6151f0e023e2e46ca27fb08b372ca8bb81c05e8f54
Mar 19 12:15:51.166126 master-0 kubenswrapper[31830]: E0319 12:15:51.165955 31830 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189e3d2961412f98 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:16fb4ea7f83036d9c6adf3454fc7e9db,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 12:15:51.16516956 +0000 UTC m=+89.714130264,LastTimestamp:2026-03-19 12:15:51.16516956 +0000 UTC m=+89.714130264,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 19 12:15:51.521389 master-0 kubenswrapper[31830]: I0319 12:15:51.521331 31830 patch_prober.go:28] interesting pod/kube-apiserver-master-0 container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.32.10:6443/readyz\": dial tcp 192.168.32.10:6443: connect: connection refused" start-of-body=
Mar 19 12:15:51.521605 master-0 kubenswrapper[31830]: I0319 12:15:51.521386 31830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.32.10:6443/readyz\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:15:51.683191 master-0 kubenswrapper[31830]: I0319 12:15:51.682959 31830 status_manager.go:851] "Failed to get status for pod" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:15:51.684489 master-0 kubenswrapper[31830]: I0319 12:15:51.684427 31830 status_manager.go:851] "Failed to get status for pod" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:15:51.784180 master-0 kubenswrapper[31830]: I0319 12:15:51.784117 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-cert-syncer/0.log"
Mar 19 12:15:51.784941 master-0 kubenswrapper[31830]: I0319 12:15:51.784889 31830 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="8721a1bcca3e6b8ad33aab266086c8fc37d6d5fcfd1c38b4b527628964b85d3a" exitCode=0
Mar 19 12:15:51.787102 master-0 kubenswrapper[31830]: I0319 12:15:51.787011 31830 generic.go:334] "Generic (PLEG): container finished" podID="0f93d242-a135-4284-8ace-704d0ae01afe" containerID="9a7716b319816771ffd4fe58e7b69169d2bde0dfe3d923c188bbe075b1946984" exitCode=0
Mar 19 12:15:51.787202 master-0 kubenswrapper[31830]: I0319 12:15:51.787021 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"0f93d242-a135-4284-8ace-704d0ae01afe","Type":"ContainerDied","Data":"9a7716b319816771ffd4fe58e7b69169d2bde0dfe3d923c188bbe075b1946984"}
Mar 19 12:15:51.788691 master-0 kubenswrapper[31830]: I0319 12:15:51.788646 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"16fb4ea7f83036d9c6adf3454fc7e9db","Type":"ContainerStarted","Data":"5ba7acb3f3ec5aabe9892f5e134a406d00ab3f00ba8659c8d7820a5e0b7411f9"}
Mar 19 12:15:51.788691 master-0 kubenswrapper[31830]: I0319 12:15:51.788645 31830 status_manager.go:851] "Failed to get status for pod" podUID="0f93d242-a135-4284-8ace-704d0ae01afe" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:15:51.788691 master-0 kubenswrapper[31830]: I0319 12:15:51.788683 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"16fb4ea7f83036d9c6adf3454fc7e9db","Type":"ContainerStarted","Data":"ebf5c99cc653a0f7c5246e6151f0e023e2e46ca27fb08b372ca8bb81c05e8f54"}
Mar 19 12:15:51.789380 master-0 kubenswrapper[31830]: I0319 12:15:51.789324 31830 status_manager.go:851] "Failed to get status for pod" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:15:51.789981 master-0 kubenswrapper[31830]: I0319 12:15:51.789941 31830 status_manager.go:851] "Failed to get status for pod" podUID="0f93d242-a135-4284-8ace-704d0ae01afe" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:15:51.790465 master-0 kubenswrapper[31830]: I0319 12:15:51.790418 31830 status_manager.go:851] "Failed to get status for pod" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:15:53.089895 master-0 kubenswrapper[31830]: I0319 12:15:53.089727 31830 patch_prober.go:28] interesting pod/console-695474f69-bz8b7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body=
Mar 19 12:15:53.090544 master-0 kubenswrapper[31830]: I0319 12:15:53.089873 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-695474f69-bz8b7" podUID="6db87e99-89b9-4f97-b6ca-b236cc27b901" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused"
Mar 19 12:15:53.340590 master-0 kubenswrapper[31830]: I0319 12:15:53.339644 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 19 12:15:53.341654 master-0 kubenswrapper[31830]: I0319 12:15:53.341607 31830 status_manager.go:851] "Failed to get status for pod" podUID="0f93d242-a135-4284-8ace-704d0ae01afe" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:15:53.342528 master-0 kubenswrapper[31830]: I0319 12:15:53.342473 31830 status_manager.go:851] "Failed to get status for pod" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:15:53.443578 master-0 kubenswrapper[31830]: I0319 12:15:53.443536 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0f93d242-a135-4284-8ace-704d0ae01afe-kubelet-dir\") pod \"0f93d242-a135-4284-8ace-704d0ae01afe\" (UID: \"0f93d242-a135-4284-8ace-704d0ae01afe\") "
Mar 19 12:15:53.443821 master-0 kubenswrapper[31830]: I0319 12:15:53.443668 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0f93d242-a135-4284-8ace-704d0ae01afe-var-lock\") pod \"0f93d242-a135-4284-8ace-704d0ae01afe\" (UID: \"0f93d242-a135-4284-8ace-704d0ae01afe\") "
Mar 19 12:15:53.443821 master-0 kubenswrapper[31830]: I0319 12:15:53.443724 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0f93d242-a135-4284-8ace-704d0ae01afe-kube-api-access\") pod \"0f93d242-a135-4284-8ace-704d0ae01afe\" (UID: \"0f93d242-a135-4284-8ace-704d0ae01afe\") "
Mar 19 12:15:53.443821 master-0 kubenswrapper[31830]: I0319 12:15:53.443659 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f93d242-a135-4284-8ace-704d0ae01afe-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0f93d242-a135-4284-8ace-704d0ae01afe" (UID: "0f93d242-a135-4284-8ace-704d0ae01afe"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 12:15:53.443821 master-0 kubenswrapper[31830]: I0319 12:15:53.443740 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f93d242-a135-4284-8ace-704d0ae01afe-var-lock" (OuterVolumeSpecName: "var-lock") pod "0f93d242-a135-4284-8ace-704d0ae01afe" (UID: "0f93d242-a135-4284-8ace-704d0ae01afe"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 12:15:53.444143 master-0 kubenswrapper[31830]: I0319 12:15:53.444044 31830 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0f93d242-a135-4284-8ace-704d0ae01afe-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 19 12:15:53.444143 master-0 kubenswrapper[31830]: I0319 12:15:53.444062 31830 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0f93d242-a135-4284-8ace-704d0ae01afe-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 19 12:15:53.447488 master-0 kubenswrapper[31830]: I0319 12:15:53.447457 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f93d242-a135-4284-8ace-704d0ae01afe-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0f93d242-a135-4284-8ace-704d0ae01afe" (UID: "0f93d242-a135-4284-8ace-704d0ae01afe"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 12:15:53.545832 master-0 kubenswrapper[31830]: I0319 12:15:53.545699 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0f93d242-a135-4284-8ace-704d0ae01afe-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 19 12:15:53.593665 master-0 kubenswrapper[31830]: I0319 12:15:53.593524 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-cert-syncer/0.log"
Mar 19 12:15:53.594581 master-0 kubenswrapper[31830]: I0319 12:15:53.594542 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 12:15:53.595730 master-0 kubenswrapper[31830]: I0319 12:15:53.595662 31830 status_manager.go:851] "Failed to get status for pod" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:15:53.596412 master-0 kubenswrapper[31830]: I0319 12:15:53.596350 31830 status_manager.go:851] "Failed to get status for pod" podUID="0f93d242-a135-4284-8ace-704d0ae01afe" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:15:53.597191 master-0 kubenswrapper[31830]: I0319 12:15:53.597061 31830 status_manager.go:851] "Failed to get status for pod" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:15:53.744786 master-0 kubenswrapper[31830]: E0319 12:15:53.744711 31830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod0f93d242_a135_4284_8ace_704d0ae01afe.slice/crio-3a3a60cd2b7a396d6592c417e998e5dfca5e79a2128530b38a38b211df4ef6b5\": RecentStats: unable to find data in memory cache]"
Mar 19 12:15:53.747339 master-0 kubenswrapper[31830]: I0319 12:15:53.747293 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"b45ea2ef1cf2bc9d1d994d6538ae0a64\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") "
Mar 19 12:15:53.747527 master-0 kubenswrapper[31830]: I0319 12:15:53.747411 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b45ea2ef1cf2bc9d1d994d6538ae0a64" (UID: "b45ea2ef1cf2bc9d1d994d6538ae0a64"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 12:15:53.747631 master-0 kubenswrapper[31830]: I0319 12:15:53.747616 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"b45ea2ef1cf2bc9d1d994d6538ae0a64\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") "
Mar 19 12:15:53.747782 master-0 kubenswrapper[31830]: I0319 12:15:53.747770 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"b45ea2ef1cf2bc9d1d994d6538ae0a64\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") "
Mar 19 12:15:53.748000 master-0 kubenswrapper[31830]: I0319 12:15:53.747653 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "b45ea2ef1cf2bc9d1d994d6538ae0a64" (UID: "b45ea2ef1cf2bc9d1d994d6538ae0a64"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 12:15:53.748046 master-0 kubenswrapper[31830]: I0319 12:15:53.747931 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "b45ea2ef1cf2bc9d1d994d6538ae0a64" (UID: "b45ea2ef1cf2bc9d1d994d6538ae0a64"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 12:15:53.748230 master-0 kubenswrapper[31830]: I0319 12:15:53.748215 31830 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 19 12:15:53.748292 master-0 kubenswrapper[31830]: I0319 12:15:53.748282 31830 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") on node \"master-0\" DevicePath \"\""
Mar 19 12:15:53.748351 master-0 kubenswrapper[31830]: I0319 12:15:53.748341 31830 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") on node \"master-0\" DevicePath \"\""
Mar 19 12:15:53.815924 master-0 kubenswrapper[31830]: I0319 12:15:53.815871 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-cert-syncer/0.log"
Mar 19 12:15:53.817363 master-0 kubenswrapper[31830]: I0319 12:15:53.817309 31830 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="49208618a1a605bcca0e7a0e4a618f3e1b277ea51f314ec7cb82b073d6261eb8" exitCode=0
Mar 19 12:15:53.817461 master-0 kubenswrapper[31830]: I0319 12:15:53.817411 31830 scope.go:117] "RemoveContainer" containerID="85ac3bdc63293b181e3d779f2c3dd340478df86a724f42e86c25f67c2e97c503"
Mar 19 12:15:53.817461 master-0 kubenswrapper[31830]: I0319 12:15:53.817437 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 12:15:53.819725 master-0 kubenswrapper[31830]: I0319 12:15:53.819224 31830 status_manager.go:851] "Failed to get status for pod" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:15:53.820334 master-0 kubenswrapper[31830]: I0319 12:15:53.820269 31830 status_manager.go:851] "Failed to get status for pod" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:15:53.820577 master-0 kubenswrapper[31830]: I0319 12:15:53.820534 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"0f93d242-a135-4284-8ace-704d0ae01afe","Type":"ContainerDied","Data":"3a3a60cd2b7a396d6592c417e998e5dfca5e79a2128530b38a38b211df4ef6b5"}
Mar 19 12:15:53.820641 master-0 kubenswrapper[31830]: I0319 12:15:53.820596 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a3a60cd2b7a396d6592c417e998e5dfca5e79a2128530b38a38b211df4ef6b5"
Mar 19 12:15:53.820683 master-0 kubenswrapper[31830]: I0319 12:15:53.820634 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 19 12:15:53.821311 master-0 kubenswrapper[31830]: I0319 12:15:53.821259 31830 status_manager.go:851] "Failed to get status for pod" podUID="0f93d242-a135-4284-8ace-704d0ae01afe" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:15:53.832160 master-0 kubenswrapper[31830]: I0319 12:15:53.832064 31830 status_manager.go:851] "Failed to get status for pod" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:15:53.833485 master-0 kubenswrapper[31830]: I0319 12:15:53.833306 31830 status_manager.go:851] "Failed to get status for pod" podUID="0f93d242-a135-4284-8ace-704d0ae01afe" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:15:53.834396 master-0 kubenswrapper[31830]: I0319 12:15:53.834269 31830 status_manager.go:851] "Failed to get status for pod" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:15:53.842040 master-0 kubenswrapper[31830]: I0319 12:15:53.841960 31830 scope.go:117] "RemoveContainer" containerID="0b5ac637490a34aebef118fd54fa4f10821c561174d2eea7d5ba728bc39d30a3"
Mar 19 12:15:53.844860 master-0 kubenswrapper[31830]: I0319 12:15:53.844717 31830 status_manager.go:851] "Failed to get status for pod" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:15:53.845929 master-0 kubenswrapper[31830]: I0319 12:15:53.845759 31830 status_manager.go:851] "Failed to get status for pod" podUID="0f93d242-a135-4284-8ace-704d0ae01afe" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:15:53.846693 master-0 kubenswrapper[31830]: I0319 12:15:53.846636 31830 status_manager.go:851] "Failed to get status for pod" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:15:53.868440 master-0 kubenswrapper[31830]: I0319 12:15:53.868405 31830 scope.go:117] "RemoveContainer" containerID="8721a1bcca3e6b8ad33aab266086c8fc37d6d5fcfd1c38b4b527628964b85d3a"
Mar 19 12:15:53.890239 master-0 kubenswrapper[31830]: I0319 12:15:53.890191 31830 scope.go:117] "RemoveContainer" containerID="565f9c73fa36f45fb9cd49587a2aa91819aea7c5a1fe5e602d37b0a519ad8eea"
Mar 19 12:15:53.913098 master-0 kubenswrapper[31830]: I0319 12:15:53.913055 31830 scope.go:117] "RemoveContainer" containerID="49208618a1a605bcca0e7a0e4a618f3e1b277ea51f314ec7cb82b073d6261eb8"
Mar 19 12:15:53.936577 master-0 kubenswrapper[31830]: I0319 12:15:53.936537 31830 scope.go:117] "RemoveContainer" containerID="9fba0a39bbccef4b23df04adb335845926b8d52e014255c123724fd850a01216"
Mar 19 12:15:53.960908 master-0 kubenswrapper[31830]: I0319 12:15:53.960861 31830 scope.go:117] "RemoveContainer" containerID="85ac3bdc63293b181e3d779f2c3dd340478df86a724f42e86c25f67c2e97c503"
Mar 19 12:15:53.961536 master-0 kubenswrapper[31830]: E0319 12:15:53.961472 31830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85ac3bdc63293b181e3d779f2c3dd340478df86a724f42e86c25f67c2e97c503\": container with ID starting with 85ac3bdc63293b181e3d779f2c3dd340478df86a724f42e86c25f67c2e97c503 not found: ID does not exist" containerID="85ac3bdc63293b181e3d779f2c3dd340478df86a724f42e86c25f67c2e97c503"
Mar 19 12:15:53.961653 master-0 kubenswrapper[31830]: I0319 12:15:53.961553 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85ac3bdc63293b181e3d779f2c3dd340478df86a724f42e86c25f67c2e97c503"} err="failed to get container status \"85ac3bdc63293b181e3d779f2c3dd340478df86a724f42e86c25f67c2e97c503\": rpc error: code = NotFound desc = could not find container \"85ac3bdc63293b181e3d779f2c3dd340478df86a724f42e86c25f67c2e97c503\": container with ID starting with 85ac3bdc63293b181e3d779f2c3dd340478df86a724f42e86c25f67c2e97c503 not found: ID does not exist"
Mar 19 12:15:53.961653 master-0 kubenswrapper[31830]: I0319 12:15:53.961637 31830 scope.go:117] "RemoveContainer" containerID="0b5ac637490a34aebef118fd54fa4f10821c561174d2eea7d5ba728bc39d30a3"
Mar 19 12:15:53.962407 master-0 kubenswrapper[31830]: E0319 12:15:53.962375 31830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b5ac637490a34aebef118fd54fa4f10821c561174d2eea7d5ba728bc39d30a3\": container with ID starting with 0b5ac637490a34aebef118fd54fa4f10821c561174d2eea7d5ba728bc39d30a3 not found: ID does not exist" containerID="0b5ac637490a34aebef118fd54fa4f10821c561174d2eea7d5ba728bc39d30a3"
Mar 19 12:15:53.962493 master-0 kubenswrapper[31830]: I0319 12:15:53.962423 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b5ac637490a34aebef118fd54fa4f10821c561174d2eea7d5ba728bc39d30a3"} err="failed to get container status \"0b5ac637490a34aebef118fd54fa4f10821c561174d2eea7d5ba728bc39d30a3\": rpc error: code = NotFound desc = could not find container \"0b5ac637490a34aebef118fd54fa4f10821c561174d2eea7d5ba728bc39d30a3\": container with ID starting with 0b5ac637490a34aebef118fd54fa4f10821c561174d2eea7d5ba728bc39d30a3 not found: ID does not exist"
Mar 19 12:15:53.962493 master-0 kubenswrapper[31830]: I0319 12:15:53.962459 31830 scope.go:117] "RemoveContainer" containerID="8721a1bcca3e6b8ad33aab266086c8fc37d6d5fcfd1c38b4b527628964b85d3a"
Mar 19 12:15:53.963212 master-0 kubenswrapper[31830]: E0319 12:15:53.963173 31830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8721a1bcca3e6b8ad33aab266086c8fc37d6d5fcfd1c38b4b527628964b85d3a\": container with ID starting with 8721a1bcca3e6b8ad33aab266086c8fc37d6d5fcfd1c38b4b527628964b85d3a not found: ID does not exist" containerID="8721a1bcca3e6b8ad33aab266086c8fc37d6d5fcfd1c38b4b527628964b85d3a"
Mar 19 12:15:53.963299 master-0 kubenswrapper[31830]: I0319 12:15:53.963221 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8721a1bcca3e6b8ad33aab266086c8fc37d6d5fcfd1c38b4b527628964b85d3a"} err="failed to get container status \"8721a1bcca3e6b8ad33aab266086c8fc37d6d5fcfd1c38b4b527628964b85d3a\": rpc error: code = NotFound desc = could not find container \"8721a1bcca3e6b8ad33aab266086c8fc37d6d5fcfd1c38b4b527628964b85d3a\": container with ID starting with 8721a1bcca3e6b8ad33aab266086c8fc37d6d5fcfd1c38b4b527628964b85d3a not found: ID does not exist"
Mar 19 12:15:53.963299 master-0 kubenswrapper[31830]: I0319 12:15:53.963261 31830 scope.go:117] "RemoveContainer" containerID="565f9c73fa36f45fb9cd49587a2aa91819aea7c5a1fe5e602d37b0a519ad8eea"
Mar 19 12:15:53.963705 master-0 kubenswrapper[31830]: E0319 12:15:53.963680 31830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"565f9c73fa36f45fb9cd49587a2aa91819aea7c5a1fe5e602d37b0a519ad8eea\": container with ID starting with 565f9c73fa36f45fb9cd49587a2aa91819aea7c5a1fe5e602d37b0a519ad8eea not found: ID does not exist" containerID="565f9c73fa36f45fb9cd49587a2aa91819aea7c5a1fe5e602d37b0a519ad8eea"
Mar 19 12:15:53.963820 master-0 kubenswrapper[31830]: I0319 12:15:53.963782 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"565f9c73fa36f45fb9cd49587a2aa91819aea7c5a1fe5e602d37b0a519ad8eea"} err="failed to get container status \"565f9c73fa36f45fb9cd49587a2aa91819aea7c5a1fe5e602d37b0a519ad8eea\": rpc error: code = NotFound desc = could not find container \"565f9c73fa36f45fb9cd49587a2aa91819aea7c5a1fe5e602d37b0a519ad8eea\": container with ID starting with 565f9c73fa36f45fb9cd49587a2aa91819aea7c5a1fe5e602d37b0a519ad8eea not found: ID does not exist"
Mar 19 12:15:53.963910 master-0 kubenswrapper[31830]: I0319 12:15:53.963895 31830 scope.go:117] "RemoveContainer" containerID="49208618a1a605bcca0e7a0e4a618f3e1b277ea51f314ec7cb82b073d6261eb8"
Mar 19 12:15:53.964414 master-0 kubenswrapper[31830]: E0319 12:15:53.964387 31830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49208618a1a605bcca0e7a0e4a618f3e1b277ea51f314ec7cb82b073d6261eb8\": container with ID starting with 49208618a1a605bcca0e7a0e4a618f3e1b277ea51f314ec7cb82b073d6261eb8 not found: ID does not exist" containerID="49208618a1a605bcca0e7a0e4a618f3e1b277ea51f314ec7cb82b073d6261eb8"
Mar 19 12:15:53.964486 master-0 kubenswrapper[31830]: I0319 12:15:53.964413 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49208618a1a605bcca0e7a0e4a618f3e1b277ea51f314ec7cb82b073d6261eb8"} err="failed to get container status \"49208618a1a605bcca0e7a0e4a618f3e1b277ea51f314ec7cb82b073d6261eb8\": rpc error: code = NotFound desc = could not find container \"49208618a1a605bcca0e7a0e4a618f3e1b277ea51f314ec7cb82b073d6261eb8\": container with ID starting with 49208618a1a605bcca0e7a0e4a618f3e1b277ea51f314ec7cb82b073d6261eb8 not found: ID does not exist"
Mar 19 12:15:53.964486 master-0 kubenswrapper[31830]: I0319 12:15:53.964432 31830 scope.go:117] "RemoveContainer" containerID="9fba0a39bbccef4b23df04adb335845926b8d52e014255c123724fd850a01216"
Mar 19 12:15:53.964818 master-0 kubenswrapper[31830]: E0319 12:15:53.964743 31830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fba0a39bbccef4b23df04adb335845926b8d52e014255c123724fd850a01216\": container with ID starting with 9fba0a39bbccef4b23df04adb335845926b8d52e014255c123724fd850a01216 not found: ID does not exist" containerID="9fba0a39bbccef4b23df04adb335845926b8d52e014255c123724fd850a01216"
Mar 19 12:15:53.964886 master-0 kubenswrapper[31830]: I0319 12:15:53.964847 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fba0a39bbccef4b23df04adb335845926b8d52e014255c123724fd850a01216"} err="failed to get container status \"9fba0a39bbccef4b23df04adb335845926b8d52e014255c123724fd850a01216\": rpc error: code = NotFound desc = could not find container \"9fba0a39bbccef4b23df04adb335845926b8d52e014255c123724fd850a01216\": container with ID starting with 9fba0a39bbccef4b23df04adb335845926b8d52e014255c123724fd850a01216 not found: ID does not exist"
Mar 19 12:15:53.983196 master-0 kubenswrapper[31830]: I0319 12:15:53.983129 31830 patch_prober.go:28] interesting pod/console-69f4fb98cb-qvvqh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body=
Mar 19 12:15:53.983460 master-0 kubenswrapper[31830]: I0319 12:15:53.983199 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-69f4fb98cb-qvvqh" podUID="be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused"
Mar 19 12:15:55.687903 master-0 kubenswrapper[31830]: I0319 12:15:55.687834 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" path="/var/lib/kubelet/pods/b45ea2ef1cf2bc9d1d994d6538ae0a64/volumes"
Mar 19 12:15:55.917505 master-0 kubenswrapper[31830]: E0319 12:15:55.917457 31830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T12:15:55Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T12:15:55Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T12:15:55Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T12:15:55Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:15:55.917968 master-0 kubenswrapper[31830]: E0319 12:15:55.917936 31830 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:15:55.918426 master-0 kubenswrapper[31830]: E0319 12:15:55.918394 31830 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:15:55.918884 master-0 kubenswrapper[31830]: E0319 12:15:55.918849 31830 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:15:55.919298 master-0 kubenswrapper[31830]: E0319 12:15:55.919276 31830 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:15:55.919298 master-0 kubenswrapper[31830]: E0319 12:15:55.919294 31830 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 19 12:15:55.990034 master-0 kubenswrapper[31830]: E0319 12:15:55.989927 31830 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:15:55.991304 master-0 kubenswrapper[31830]: E0319 12:15:55.991256 31830 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:15:55.991946 master-0 kubenswrapper[31830]: E0319 12:15:55.991908 31830 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:15:55.992594 master-0 kubenswrapper[31830]: E0319 12:15:55.992520 31830 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:15:55.993322 master-0 kubenswrapper[31830]: E0319 12:15:55.993265 31830 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:15:55.993367 master-0 kubenswrapper[31830]: I0319 12:15:55.993334 31830 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Mar 19 12:15:55.993966 master-0 kubenswrapper[31830]: E0319 12:15:55.993915 31830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms"
Mar 19 12:15:56.195634 master-0 kubenswrapper[31830]: E0319 12:15:56.195567 31830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms"
Mar 19 12:15:56.596770 master-0 kubenswrapper[31830]: E0319 12:15:56.596674 31830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms"
Mar 19 12:15:57.397730 master-0 kubenswrapper[31830]: E0319 12:15:57.397670 31830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s"
Mar 19 12:15:58.999654 master-0 kubenswrapper[31830]: E0319 12:15:58.999527 31830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s"
Mar 19 12:15:59.057491 master-0 kubenswrapper[31830]: E0319 12:15:59.057352 31830 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189e3d2961412f98 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:16fb4ea7f83036d9c6adf3454fc7e9db,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 12:15:51.16516956 +0000 UTC m=+89.714130264,LastTimestamp:2026-03-19 12:15:51.16516956 +0000 UTC m=+89.714130264,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 19 12:16:01.683977 master-0 kubenswrapper[31830]: I0319 12:16:01.683485 31830 status_manager.go:851] "Failed to get status for pod" podUID="0f93d242-a135-4284-8ace-704d0ae01afe" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:16:01.684995 master-0 kubenswrapper[31830]: I0319 12:16:01.684465 31830 status_manager.go:851] "Failed to get status for pod" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:16:02.200425 master-0 kubenswrapper[31830]: E0319 12:16:02.200357 31830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s"
Mar 19 12:16:03.089278 master-0 kubenswrapper[31830]: I0319 12:16:03.089189 31830 patch_prober.go:28] interesting pod/console-695474f69-bz8b7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body=
Mar 19 12:16:03.090196 master-0 kubenswrapper[31830]: I0319 12:16:03.089290 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-695474f69-bz8b7" podUID="6db87e99-89b9-4f97-b6ca-b236cc27b901" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused"
Mar 19 12:16:03.677643 master-0 kubenswrapper[31830]: I0319 12:16:03.677534 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 12:16:03.679223 master-0 kubenswrapper[31830]: I0319 12:16:03.679123 31830 status_manager.go:851] "Failed to get status for pod" podUID="0f93d242-a135-4284-8ace-704d0ae01afe" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:16:03.680123 master-0 kubenswrapper[31830]: I0319 12:16:03.680053 31830 status_manager.go:851] "Failed to get status for pod" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:16:03.715480 master-0 kubenswrapper[31830]: I0319 12:16:03.715419 31830 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="64033343-e18f-4294-9a13-dd575335d16b"
Mar 19 12:16:03.715480 master-0 kubenswrapper[31830]: I0319 12:16:03.715474 31830 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="64033343-e18f-4294-9a13-dd575335d16b"
Mar 19 12:16:03.716789 master-0 kubenswrapper[31830]: E0319 12:16:03.716694 31830 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 12:16:03.717684 master-0 kubenswrapper[31830]: I0319 12:16:03.717639 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 12:16:03.744941 master-0 kubenswrapper[31830]: W0319 12:16:03.744862 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d5ce05b3d592e63f1f92202d52b9635.slice/crio-4cbb685de0a2f7c7035ef914860e1d6d76d28bcb18233078b1bd55fedd0279fd WatchSource:0}: Error finding container 4cbb685de0a2f7c7035ef914860e1d6d76d28bcb18233078b1bd55fedd0279fd: Status 404 returned error can't find the container with id 4cbb685de0a2f7c7035ef914860e1d6d76d28bcb18233078b1bd55fedd0279fd
Mar 19 12:16:03.917757 master-0 kubenswrapper[31830]: I0319 12:16:03.917681 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"4cbb685de0a2f7c7035ef914860e1d6d76d28bcb18233078b1bd55fedd0279fd"}
Mar 19 12:16:03.977914 master-0 kubenswrapper[31830]: I0319 12:16:03.977857 31830 patch_prober.go:28] interesting pod/console-69f4fb98cb-qvvqh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body=
Mar 19 12:16:03.978041 master-0 kubenswrapper[31830]: I0319 12:16:03.977942 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-69f4fb98cb-qvvqh" podUID="be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused"
Mar 19 12:16:04.933229 master-0 kubenswrapper[31830]: I0319 12:16:04.933109 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_09672015532ae9d1d74ae4d426cd904b/kube-controller-manager/0.log"
Mar 19 12:16:04.934715 master-0 kubenswrapper[31830]: I0319 12:16:04.934671 31830 generic.go:334] "Generic (PLEG): container finished" podID="09672015532ae9d1d74ae4d426cd904b" containerID="0caac3ca6bbe34a0e2d497521111d7392578df46354c8eb9456dc2e8b18fadb9" exitCode=1
Mar 19 12:16:04.935033 master-0 kubenswrapper[31830]: I0319 12:16:04.934779 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"09672015532ae9d1d74ae4d426cd904b","Type":"ContainerDied","Data":"0caac3ca6bbe34a0e2d497521111d7392578df46354c8eb9456dc2e8b18fadb9"}
Mar 19 12:16:04.936072 master-0 kubenswrapper[31830]: I0319 12:16:04.936008 31830 scope.go:117] "RemoveContainer" containerID="0caac3ca6bbe34a0e2d497521111d7392578df46354c8eb9456dc2e8b18fadb9"
Mar 19 12:16:04.936400 master-0 kubenswrapper[31830]: I0319 12:16:04.936329 31830 status_manager.go:851] "Failed to get status for pod" podUID="09672015532ae9d1d74ae4d426cd904b" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:16:04.937390 master-0 kubenswrapper[31830]: I0319 12:16:04.937335 31830 status_manager.go:851] "Failed to get status for pod" podUID="0f93d242-a135-4284-8ace-704d0ae01afe" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:16:04.939542 master-0 kubenswrapper[31830]: I0319 12:16:04.939490 31830 status_manager.go:851] "Failed to get status for pod" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:16:04.939834 master-0 kubenswrapper[31830]: I0319 12:16:04.939719 31830 generic.go:334] "Generic (PLEG): container finished" podID="7d5ce05b3d592e63f1f92202d52b9635" containerID="718557be6f14b90fa10074703607c983ea0551f0bf0d37a2dbc9687c71e63b51" exitCode=0
Mar 19 12:16:04.939970 master-0 kubenswrapper[31830]: I0319 12:16:04.939752 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerDied","Data":"718557be6f14b90fa10074703607c983ea0551f0bf0d37a2dbc9687c71e63b51"}
Mar 19 12:16:04.940303 master-0 kubenswrapper[31830]: I0319 12:16:04.940154 31830 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="64033343-e18f-4294-9a13-dd575335d16b"
Mar 19 12:16:04.940303 master-0 kubenswrapper[31830]: I0319 12:16:04.940184 31830 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="64033343-e18f-4294-9a13-dd575335d16b"
Mar 19 12:16:04.940958 master-0 kubenswrapper[31830]: I0319 12:16:04.940889 31830 status_manager.go:851] "Failed to get status for pod" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:16:04.941131 master-0 kubenswrapper[31830]: E0319 12:16:04.940960 31830 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 12:16:04.941539 master-0 kubenswrapper[31830]: I0319 12:16:04.941421 31830 status_manager.go:851] "Failed to get status for pod" podUID="09672015532ae9d1d74ae4d426cd904b" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:16:04.942094 master-0 kubenswrapper[31830]: I0319 12:16:04.942039 31830 status_manager.go:851] "Failed to get status for pod" podUID="0f93d242-a135-4284-8ace-704d0ae01afe" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:16:05.949599 master-0 kubenswrapper[31830]: I0319 12:16:05.949083 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"f8e645437125672bdd10ef442dc191bdbb5b1aa4fc95d9600c7547a486e706a2"}
Mar 19 12:16:05.949599 master-0 kubenswrapper[31830]: I0319 12:16:05.949152 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"fdac9b39a5f31dd5b3b9878e7394b85703ec8676c900ec32f992b5c4090bcde9"}
Mar 19 12:16:05.949599 master-0 kubenswrapper[31830]: I0319 12:16:05.949166 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"854780a5612a5e5e50c1280d682c36fa7794098294a6ebaedaccd9150dae588d"}
Mar 19 12:16:05.949599 master-0 kubenswrapper[31830]: I0319 12:16:05.949178 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"b2118998c1da7782e4346f69f0c3869f4e1ae1ec7cb5bc289a89a0d7d90426d9"}
Mar 19 12:16:05.953526 master-0 kubenswrapper[31830]: I0319 12:16:05.953487 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_09672015532ae9d1d74ae4d426cd904b/kube-controller-manager/0.log"
Mar 19 12:16:05.953590 master-0 kubenswrapper[31830]: I0319 12:16:05.953542 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"09672015532ae9d1d74ae4d426cd904b","Type":"ContainerStarted","Data":"e5fbf9965772e33dc6dad1627c0ebaa9bcbb080610a9ab8137ea4a6a55a96ec1"}
Mar 19 12:16:06.969510 master-0 kubenswrapper[31830]: I0319 12:16:06.969454 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"731a0a38d103540215a217f28bc229d759cbf1a86aead714b3ade475b7eca9b1"}
Mar 19 12:16:06.969974 master-0 kubenswrapper[31830]: I0319 12:16:06.969925 31830 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="64033343-e18f-4294-9a13-dd575335d16b"
Mar 19 12:16:06.969974 master-0 kubenswrapper[31830]: I0319 12:16:06.969947 31830 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="64033343-e18f-4294-9a13-dd575335d16b"
Mar 19 12:16:06.970968 master-0 kubenswrapper[31830]: I0319 12:16:06.970290 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 12:16:07.717823 master-0 kubenswrapper[31830]: I0319 12:16:07.717759 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 19 12:16:08.718863 master-0 kubenswrapper[31830]: I0319 12:16:08.718772 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 12:16:08.719492 master-0 kubenswrapper[31830]: I0319 12:16:08.718925 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 12:16:08.725168 master-0 kubenswrapper[31830]: I0319 12:16:08.725130 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 12:16:12.381633 master-0 kubenswrapper[31830]: I0319 12:16:12.381584 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 19 12:16:12.808893 master-0 kubenswrapper[31830]: I0319 12:16:12.382174 31830 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Mar 19 12:16:12.808893 master-0 kubenswrapper[31830]: I0319 12:16:12.382207 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="09672015532ae9d1d74ae4d426cd904b" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Mar 19 12:16:13.089488 master-0 kubenswrapper[31830]: I0319 12:16:13.089315 31830 patch_prober.go:28] interesting pod/console-695474f69-bz8b7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body=
Mar 19 12:16:13.089488 master-0 kubenswrapper[31830]: I0319 12:16:13.089386 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-695474f69-bz8b7" podUID="6db87e99-89b9-4f97-b6ca-b236cc27b901" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused"
Mar 19 12:16:13.173063 master-0 kubenswrapper[31830]: I0319 12:16:13.163177 31830 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 12:16:13.188301 master-0 kubenswrapper[31830]: I0319 12:16:13.188234 31830 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="7d5ce05b3d592e63f1f92202d52b9635" podUID="d736f0ce-242a-43e5-aabf-b298e1959069"
Mar 19 12:16:13.977610 master-0 kubenswrapper[31830]: I0319 12:16:13.977524 31830 patch_prober.go:28] interesting pod/console-69f4fb98cb-qvvqh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body=
Mar 19 12:16:13.979306 master-0 kubenswrapper[31830]: I0319 12:16:13.977605 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-69f4fb98cb-qvvqh" podUID="be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused"
Mar 19 12:16:14.030657 master-0 kubenswrapper[31830]: I0319 12:16:14.030611 31830 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="64033343-e18f-4294-9a13-dd575335d16b"
Mar 19 12:16:14.030657 master-0 kubenswrapper[31830]: I0319 12:16:14.030648 31830 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="64033343-e18f-4294-9a13-dd575335d16b"
Mar 19 12:16:14.033674 master-0 kubenswrapper[31830]: I0319 12:16:14.033619 31830 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="7d5ce05b3d592e63f1f92202d52b9635" podUID="d736f0ce-242a-43e5-aabf-b298e1959069"
Mar 19 12:16:14.038256 master-0 kubenswrapper[31830]: I0319 12:16:14.038217 31830 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-master-0" containerID="cri-o://b2118998c1da7782e4346f69f0c3869f4e1ae1ec7cb5bc289a89a0d7d90426d9"
Mar 19 12:16:14.038256 master-0 kubenswrapper[31830]: I0319 12:16:14.038250 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 12:16:15.038550 master-0 kubenswrapper[31830]: I0319 12:16:15.038472 31830 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="64033343-e18f-4294-9a13-dd575335d16b"
Mar 19 12:16:15.038550 master-0 kubenswrapper[31830]: I0319 12:16:15.038509 31830 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="64033343-e18f-4294-9a13-dd575335d16b"
Mar 19 12:16:15.041031 master-0 kubenswrapper[31830]: I0319 12:16:15.040968 31830 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="7d5ce05b3d592e63f1f92202d52b9635" podUID="d736f0ce-242a-43e5-aabf-b298e1959069"
Mar 19 12:16:21.361129 master-0 kubenswrapper[31830]: I0319 12:16:21.361044 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Mar 19 12:16:21.866876 master-0 kubenswrapper[31830]: I0319 12:16:21.866769 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 19 12:16:22.026377 master-0 kubenswrapper[31830]: I0319 12:16:22.026304 31830 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 19 12:16:22.192561 master-0 kubenswrapper[31830]: I0319 12:16:22.192440 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 19 12:16:22.214827 master-0 kubenswrapper[31830]: I0319 12:16:22.214730 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 19 12:16:22.382403 master-0 kubenswrapper[31830]: I0319 12:16:22.382333 31830 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Mar 19 12:16:22.382983 master-0 kubenswrapper[31830]: I0319 12:16:22.382404 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="09672015532ae9d1d74ae4d426cd904b" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Mar 19 12:16:22.479230 master-0 kubenswrapper[31830]: I0319 12:16:22.479170 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 19 12:16:22.654054 master-0
kubenswrapper[31830]: I0319 12:16:22.653965 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 19 12:16:22.714372 master-0 kubenswrapper[31830]: I0319 12:16:22.714295 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 19 12:16:22.759885 master-0 kubenswrapper[31830]: I0319 12:16:22.759018 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 19 12:16:22.777976 master-0 kubenswrapper[31830]: I0319 12:16:22.777901 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-hw4t4" Mar 19 12:16:22.803305 master-0 kubenswrapper[31830]: I0319 12:16:22.803233 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 19 12:16:23.007522 master-0 kubenswrapper[31830]: I0319 12:16:23.007468 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 19 12:16:23.089306 master-0 kubenswrapper[31830]: I0319 12:16:23.089143 31830 patch_prober.go:28] interesting pod/console-695474f69-bz8b7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body= Mar 19 12:16:23.089306 master-0 kubenswrapper[31830]: I0319 12:16:23.089224 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-695474f69-bz8b7" podUID="6db87e99-89b9-4f97-b6ca-b236cc27b901" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" Mar 19 12:16:23.317913 master-0 kubenswrapper[31830]: I0319 12:16:23.317852 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 19 12:16:23.459017 master-0 kubenswrapper[31830]: I0319 12:16:23.458872 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-lcm2r" Mar 19 12:16:23.484102 master-0 kubenswrapper[31830]: I0319 12:16:23.484039 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 19 12:16:23.504925 master-0 kubenswrapper[31830]: I0319 12:16:23.504869 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 19 12:16:23.830683 master-0 kubenswrapper[31830]: I0319 12:16:23.830615 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 19 12:16:23.893509 master-0 kubenswrapper[31830]: I0319 12:16:23.893477 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 19 12:16:23.948689 master-0 kubenswrapper[31830]: I0319 12:16:23.948608 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 19 12:16:23.978106 master-0 kubenswrapper[31830]: I0319 12:16:23.978038 31830 patch_prober.go:28] interesting pod/console-69f4fb98cb-qvvqh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": 
dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Mar 19 12:16:23.978106 master-0 kubenswrapper[31830]: I0319 12:16:23.978100 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-69f4fb98cb-qvvqh" podUID="be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Mar 19 12:16:23.983044 master-0 kubenswrapper[31830]: I0319 12:16:23.983000 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 19 12:16:24.278338 master-0 kubenswrapper[31830]: I0319 12:16:24.278244 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 19 12:16:24.374410 master-0 kubenswrapper[31830]: I0319 12:16:24.374340 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-77kwj" Mar 19 12:16:24.509668 master-0 kubenswrapper[31830]: I0319 12:16:24.509609 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 19 12:16:24.515156 master-0 kubenswrapper[31830]: I0319 12:16:24.515060 31830 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 19 12:16:24.578937 master-0 kubenswrapper[31830]: I0319 12:16:24.578694 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 19 12:16:24.831554 master-0 kubenswrapper[31830]: I0319 12:16:24.831396 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 19 12:16:24.838178 master-0 kubenswrapper[31830]: I0319 12:16:24.838123 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 19 12:16:24.855351 master-0 kubenswrapper[31830]: I0319 12:16:24.855304 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 19 12:16:24.863000 master-0 kubenswrapper[31830]: I0319 12:16:24.862900 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 19 12:16:24.957211 master-0 kubenswrapper[31830]: I0319 12:16:24.957156 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 19 12:16:25.077725 master-0 kubenswrapper[31830]: I0319 12:16:25.077644 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 19 12:16:25.248484 master-0 kubenswrapper[31830]: I0319 12:16:25.248395 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 19 12:16:25.260295 master-0 kubenswrapper[31830]: I0319 12:16:25.260205 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 19 12:16:25.353350 master-0 kubenswrapper[31830]: I0319 12:16:25.353251 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 19 12:16:25.393768 master-0 kubenswrapper[31830]: I0319 12:16:25.393657 31830 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 19 12:16:25.450423 master-0 kubenswrapper[31830]: I0319 12:16:25.450251 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 19 12:16:25.480345 master-0 kubenswrapper[31830]: I0319 12:16:25.480248 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 19 12:16:25.536027 master-0 kubenswrapper[31830]: I0319 12:16:25.535963 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 19 12:16:25.660551 master-0 kubenswrapper[31830]: I0319 12:16:25.660449 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 19 12:16:25.691917 master-0 kubenswrapper[31830]: I0319 12:16:25.690093 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 19 12:16:25.693618 master-0 kubenswrapper[31830]: I0319 12:16:25.693594 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 19 12:16:25.716153 master-0 kubenswrapper[31830]: I0319 12:16:25.716017 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 19 12:16:25.768002 master-0 kubenswrapper[31830]: I0319 12:16:25.767942 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 19 12:16:25.814817 master-0 kubenswrapper[31830]: I0319 12:16:25.814720 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 19 12:16:25.828816 master-0 kubenswrapper[31830]: I0319 12:16:25.828767 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 19 12:16:26.005036 master-0 kubenswrapper[31830]: I0319 12:16:26.004952 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 19 12:16:26.022925 master-0 kubenswrapper[31830]: I0319 12:16:26.022841 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 19 12:16:26.039308 master-0 kubenswrapper[31830]: I0319 12:16:26.039198 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-48w96" Mar 19 12:16:26.101497 master-0 kubenswrapper[31830]: I0319 12:16:26.101136 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 19 12:16:26.105394 master-0 kubenswrapper[31830]: I0319 12:16:26.105358 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 19 12:16:26.112530 master-0 kubenswrapper[31830]: I0319 12:16:26.112500 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-mc2cj" Mar 19 12:16:26.141095 master-0 kubenswrapper[31830]: I0319 12:16:26.141043 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 19 12:16:26.262715 
master-0 kubenswrapper[31830]: I0319 12:16:26.262618 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 19 12:16:26.296915 master-0 kubenswrapper[31830]: I0319 12:16:26.296849 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 19 12:16:26.331610 master-0 kubenswrapper[31830]: I0319 12:16:26.331554 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-xfzn8" Mar 19 12:16:26.379307 master-0 kubenswrapper[31830]: I0319 12:16:26.379258 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 19 12:16:26.381653 master-0 kubenswrapper[31830]: I0319 12:16:26.381611 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 19 12:16:26.433043 master-0 kubenswrapper[31830]: I0319 12:16:26.432967 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 19 12:16:26.471616 master-0 kubenswrapper[31830]: I0319 12:16:26.471571 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 19 12:16:26.509086 master-0 kubenswrapper[31830]: I0319 12:16:26.509038 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 19 12:16:26.613871 master-0 kubenswrapper[31830]: I0319 12:16:26.613698 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 19 12:16:26.691494 master-0 kubenswrapper[31830]: I0319 12:16:26.691424 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 19 12:16:26.832923 master-0 kubenswrapper[31830]: I0319 12:16:26.832854 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-shkfs" Mar 19 12:16:26.873524 master-0 kubenswrapper[31830]: I0319 12:16:26.873209 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 19 12:16:26.874432 master-0 kubenswrapper[31830]: I0319 12:16:26.874369 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 19 12:16:26.915925 master-0 kubenswrapper[31830]: I0319 12:16:26.915874 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 19 12:16:26.947222 master-0 kubenswrapper[31830]: I0319 12:16:26.947168 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 19 12:16:26.953423 master-0 kubenswrapper[31830]: I0319 12:16:26.953373 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 19 12:16:27.160855 master-0 kubenswrapper[31830]: I0319 12:16:27.160687 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 19 12:16:27.201323 master-0 kubenswrapper[31830]: I0319 
12:16:27.201122 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 19 12:16:27.294019 master-0 kubenswrapper[31830]: I0319 12:16:27.293952 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 19 12:16:27.344537 master-0 kubenswrapper[31830]: I0319 12:16:27.344489 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 19 12:16:27.412406 master-0 kubenswrapper[31830]: I0319 12:16:27.412248 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 19 12:16:27.438126 master-0 kubenswrapper[31830]: I0319 12:16:27.438001 31830 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 19 12:16:27.443328 master-0 kubenswrapper[31830]: I0319 12:16:27.443212 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=37.443191694 podStartE2EDuration="37.443191694s" podCreationTimestamp="2026-03-19 12:15:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:16:13.155565181 +0000 UTC m=+111.704525935" watchObservedRunningTime="2026-03-19 12:16:27.443191694 +0000 UTC m=+125.992152418" Mar 19 12:16:27.444924 master-0 kubenswrapper[31830]: I0319 12:16:27.444888 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 19 12:16:27.445004 master-0 kubenswrapper[31830]: I0319 12:16:27.444939 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 19 12:16:27.451291 master-0 kubenswrapper[31830]: I0319 12:16:27.451198 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:16:27.457957 master-0 kubenswrapper[31830]: I0319 12:16:27.457913 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 19 12:16:27.462977 master-0 kubenswrapper[31830]: I0319 12:16:27.462624 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 19 12:16:27.469663 master-0 kubenswrapper[31830]: I0319 12:16:27.469506 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=14.469465498 podStartE2EDuration="14.469465498s" podCreationTimestamp="2026-03-19 12:16:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:16:27.468481237 +0000 UTC m=+126.017441971" watchObservedRunningTime="2026-03-19 12:16:27.469465498 +0000 UTC m=+126.018426212" Mar 19 12:16:27.540191 master-0 kubenswrapper[31830]: I0319 12:16:27.540130 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 19 12:16:27.554867 master-0 kubenswrapper[31830]: I0319 12:16:27.554320 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-gz8pl" Mar 19 12:16:27.676980 master-0 
kubenswrapper[31830]: I0319 12:16:27.676881 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 19 12:16:27.681522 master-0 kubenswrapper[31830]: I0319 12:16:27.681467 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 19 12:16:27.694486 master-0 kubenswrapper[31830]: I0319 12:16:27.694394 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 19 12:16:27.715886 master-0 kubenswrapper[31830]: I0319 12:16:27.715784 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Mar 19 12:16:27.761021 master-0 kubenswrapper[31830]: I0319 12:16:27.760976 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 19 12:16:27.768678 master-0 kubenswrapper[31830]: I0319 12:16:27.768625 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 19 12:16:27.812661 master-0 kubenswrapper[31830]: I0319 12:16:27.812594 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 19 12:16:27.845037 master-0 kubenswrapper[31830]: I0319 12:16:27.844965 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 19 12:16:27.847673 master-0 kubenswrapper[31830]: I0319 12:16:27.847588 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 19 12:16:27.872291 master-0 kubenswrapper[31830]: I0319 12:16:27.872249 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 19 12:16:28.031961 master-0 kubenswrapper[31830]: I0319 12:16:28.031891 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 19 12:16:28.231447 master-0 kubenswrapper[31830]: I0319 12:16:28.231345 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 19 12:16:28.267988 master-0 kubenswrapper[31830]: I0319 12:16:28.267881 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 19 12:16:28.323022 master-0 kubenswrapper[31830]: I0319 12:16:28.322904 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 19 12:16:28.361981 master-0 kubenswrapper[31830]: I0319 12:16:28.361916 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 19 12:16:28.365219 master-0 kubenswrapper[31830]: I0319 12:16:28.365134 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 19 12:16:28.374813 master-0 kubenswrapper[31830]: I0319 12:16:28.374732 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 19 12:16:28.395581 master-0 kubenswrapper[31830]: I0319 12:16:28.395533 31830 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"console-operator-config" Mar 19 12:16:28.405780 master-0 kubenswrapper[31830]: I0319 12:16:28.405723 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 19 12:16:28.424874 master-0 kubenswrapper[31830]: I0319 12:16:28.424773 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 19 12:16:28.427590 master-0 kubenswrapper[31830]: I0319 12:16:28.427539 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 19 12:16:28.495218 master-0 kubenswrapper[31830]: I0319 12:16:28.495177 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4kjzz" Mar 19 12:16:28.513771 master-0 kubenswrapper[31830]: I0319 12:16:28.513702 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 19 12:16:28.524095 master-0 kubenswrapper[31830]: I0319 12:16:28.524023 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 19 12:16:28.551745 master-0 kubenswrapper[31830]: I0319 12:16:28.551707 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 19 12:16:28.657972 master-0 kubenswrapper[31830]: I0319 12:16:28.657867 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 19 12:16:28.659561 master-0 kubenswrapper[31830]: I0319 12:16:28.659545 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 19 12:16:28.678934 master-0 kubenswrapper[31830]: I0319 12:16:28.678884 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-ww9m4" Mar 19 12:16:28.766999 master-0 kubenswrapper[31830]: I0319 12:16:28.766940 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 19 12:16:28.769700 master-0 kubenswrapper[31830]: I0319 12:16:28.769674 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 19 12:16:28.771387 master-0 kubenswrapper[31830]: I0319 12:16:28.771344 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 19 12:16:28.876462 master-0 kubenswrapper[31830]: I0319 12:16:28.876423 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 19 12:16:28.896944 master-0 kubenswrapper[31830]: I0319 12:16:28.896903 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 19 12:16:28.940565 master-0 kubenswrapper[31830]: I0319 12:16:28.940450 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 19 12:16:28.974047 master-0 kubenswrapper[31830]: I0319 12:16:28.973973 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 19 12:16:29.077809 master-0 kubenswrapper[31830]: I0319 12:16:29.077719 31830 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 19 12:16:29.151155 master-0 kubenswrapper[31830]: I0319 12:16:29.151099 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 19 12:16:29.153644 master-0 kubenswrapper[31830]: I0319 12:16:29.153383 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 19 12:16:29.311939 master-0 kubenswrapper[31830]: I0319 12:16:29.311781 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 19 12:16:29.312533 master-0 kubenswrapper[31830]: I0319 12:16:29.312357 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 19 12:16:29.318888 master-0 kubenswrapper[31830]: I0319 12:16:29.318846 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 19 12:16:29.419694 master-0 kubenswrapper[31830]: I0319 12:16:29.419591 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 19 12:16:29.457779 master-0 kubenswrapper[31830]: I0319 12:16:29.457691 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-dr8qt" Mar 19 12:16:29.571494 master-0 kubenswrapper[31830]: I0319 12:16:29.571358 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 19 12:16:29.574819 master-0 kubenswrapper[31830]: I0319 12:16:29.574768 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 19 12:16:29.878028 master-0 kubenswrapper[31830]: I0319 12:16:29.877887 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 19 12:16:29.910532 master-0 kubenswrapper[31830]: I0319 12:16:29.910484 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-6flh6" Mar 19 12:16:29.912308 master-0 kubenswrapper[31830]: I0319 12:16:29.912258 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 19 12:16:29.935826 master-0 kubenswrapper[31830]: I0319 12:16:29.935764 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 19 12:16:30.059983 master-0 kubenswrapper[31830]: I0319 12:16:30.059948 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 19 12:16:30.096214 master-0 kubenswrapper[31830]: I0319 12:16:30.096147 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 19 12:16:30.195452 master-0 kubenswrapper[31830]: I0319 12:16:30.195302 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 19 12:16:30.339651 master-0 kubenswrapper[31830]: I0319 12:16:30.339616 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 
19 12:16:30.355408 master-0 kubenswrapper[31830]: I0319 12:16:30.355371 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 19 12:16:30.364148 master-0 kubenswrapper[31830]: I0319 12:16:30.364108 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 19 12:16:30.619874 master-0 kubenswrapper[31830]: I0319 12:16:30.619810 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 19 12:16:30.754401 master-0 kubenswrapper[31830]: I0319 12:16:30.754352 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 19 12:16:30.831980 master-0 kubenswrapper[31830]: I0319 12:16:30.831921 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 19 12:16:30.899476 master-0 kubenswrapper[31830]: I0319 12:16:30.899317 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 19 12:16:31.013325 master-0 kubenswrapper[31830]: I0319 12:16:31.013224 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 19 12:16:31.174973 master-0 kubenswrapper[31830]: I0319 12:16:31.174843 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 19 12:16:31.180473 master-0 kubenswrapper[31830]: I0319 12:16:31.180412 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 19 12:16:31.182385 master-0 kubenswrapper[31830]: I0319 12:16:31.182354 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 19 12:16:31.206136 master-0 kubenswrapper[31830]: I0319 12:16:31.206092 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 19 12:16:31.238328 master-0 kubenswrapper[31830]: I0319 12:16:31.238294 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 19 12:16:31.298304 master-0 kubenswrapper[31830]: I0319 12:16:31.298136 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 19 12:16:31.362861 master-0 kubenswrapper[31830]: I0319 12:16:31.362789 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Mar 19 12:16:31.419461 master-0 kubenswrapper[31830]: I0319 12:16:31.419390 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 19 12:16:31.442285 master-0 kubenswrapper[31830]: I0319 12:16:31.442085 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 19 12:16:31.466522 master-0 kubenswrapper[31830]: I0319 12:16:31.466256 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 19 12:16:31.498917 master-0 kubenswrapper[31830]: I0319 12:16:31.498864 31830 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 19 12:16:31.509433 master-0 kubenswrapper[31830]: I0319 12:16:31.509372 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 19 12:16:31.589747 master-0 kubenswrapper[31830]: I0319 12:16:31.589681 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 19 12:16:31.647000 master-0 kubenswrapper[31830]: I0319 12:16:31.646926 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 19 12:16:31.681466 master-0 kubenswrapper[31830]: I0319 12:16:31.681385 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 19 12:16:31.712087 master-0 kubenswrapper[31830]: I0319 12:16:31.711952 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 19 12:16:31.746144 master-0 kubenswrapper[31830]: I0319 12:16:31.746083 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 19 12:16:31.749658 master-0 kubenswrapper[31830]: I0319 12:16:31.749620 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 19 12:16:31.781175 master-0 kubenswrapper[31830]: I0319 12:16:31.781089 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 19 12:16:31.786140 master-0 kubenswrapper[31830]: I0319 12:16:31.786090 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-mzg7v" Mar 19 12:16:31.786766 master-0 kubenswrapper[31830]: I0319 12:16:31.786724 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 19 12:16:31.835092 master-0 kubenswrapper[31830]: I0319 12:16:31.835050 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 19 12:16:31.854631 master-0 kubenswrapper[31830]: I0319 12:16:31.854590 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 19 12:16:31.895980 master-0 kubenswrapper[31830]: I0319 12:16:31.895932 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 19 12:16:31.945334 master-0 kubenswrapper[31830]: I0319 12:16:31.945264 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 19 12:16:31.962277 master-0 kubenswrapper[31830]: I0319 12:16:31.962149 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 19 12:16:31.996623 master-0 kubenswrapper[31830]: I0319 12:16:31.996583 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 19 12:16:32.062391 master-0 kubenswrapper[31830]: I0319 12:16:32.062320 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 19 12:16:32.086614 master-0 kubenswrapper[31830]: I0319 
12:16:32.086409 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 19 12:16:32.119809 master-0 kubenswrapper[31830]: I0319 12:16:32.119731 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 19 12:16:32.145107 master-0 kubenswrapper[31830]: I0319 12:16:32.145043 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 19 12:16:32.172085 master-0 kubenswrapper[31830]: I0319 12:16:32.172023 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 19 12:16:32.251826 master-0 kubenswrapper[31830]: I0319 12:16:32.251757 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 19 12:16:32.256250 master-0 kubenswrapper[31830]: I0319 12:16:32.256185 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 19 12:16:32.264284 master-0 kubenswrapper[31830]: I0319 12:16:32.264245 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 19 12:16:32.282529 master-0 kubenswrapper[31830]: I0319 12:16:32.282482 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 19 12:16:32.297222 master-0 kubenswrapper[31830]: I0319 12:16:32.297177 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 19 12:16:32.300460 master-0 kubenswrapper[31830]: I0319 12:16:32.300422 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 19 12:16:32.353282 master-0 kubenswrapper[31830]: I0319 12:16:32.353215 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 19 12:16:32.354145 master-0 kubenswrapper[31830]: I0319 12:16:32.354105 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Mar 19 12:16:32.377045 master-0 kubenswrapper[31830]: I0319 12:16:32.377001 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 19 12:16:32.380200 master-0 kubenswrapper[31830]: I0319 12:16:32.380164 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 19 12:16:32.381772 master-0 kubenswrapper[31830]: I0319 12:16:32.381723 31830 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 19 12:16:32.381913 master-0 kubenswrapper[31830]: I0319 12:16:32.381776 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="09672015532ae9d1d74ae4d426cd904b" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 19 12:16:32.381913 master-0 kubenswrapper[31830]: I0319 
12:16:32.381894 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:16:32.382658 master-0 kubenswrapper[31830]: I0319 12:16:32.382608 31830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"e5fbf9965772e33dc6dad1627c0ebaa9bcbb080610a9ab8137ea4a6a55a96ec1"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Mar 19 12:16:32.382852 master-0 kubenswrapper[31830]: I0319 12:16:32.382801 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="09672015532ae9d1d74ae4d426cd904b" containerName="kube-controller-manager" containerID="cri-o://e5fbf9965772e33dc6dad1627c0ebaa9bcbb080610a9ab8137ea4a6a55a96ec1" gracePeriod=30 Mar 19 12:16:32.449105 master-0 kubenswrapper[31830]: I0319 12:16:32.449028 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 19 12:16:32.485778 master-0 kubenswrapper[31830]: I0319 12:16:32.485683 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 19 12:16:32.508521 master-0 kubenswrapper[31830]: I0319 12:16:32.508334 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 19 12:16:32.555083 master-0 kubenswrapper[31830]: I0319 12:16:32.555009 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 19 12:16:32.576953 master-0 kubenswrapper[31830]: I0319 12:16:32.576908 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 19 12:16:32.577328 master-0 kubenswrapper[31830]: I0319 12:16:32.577282 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 19 12:16:32.582649 master-0 kubenswrapper[31830]: I0319 12:16:32.582540 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 19 12:16:32.670661 master-0 kubenswrapper[31830]: I0319 12:16:32.670594 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 19 12:16:32.763292 master-0 kubenswrapper[31830]: I0319 12:16:32.763104 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 19 12:16:32.775204 master-0 kubenswrapper[31830]: I0319 12:16:32.775105 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 19 12:16:32.801013 master-0 kubenswrapper[31830]: I0319 12:16:32.800958 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 19 12:16:32.804805 master-0 kubenswrapper[31830]: I0319 12:16:32.804763 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 19 12:16:32.848770 master-0 kubenswrapper[31830]: I0319 12:16:32.848718 31830 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-h5t8s" Mar 19 12:16:32.851762 master-0 kubenswrapper[31830]: I0319 12:16:32.851707 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-xbkxv" Mar 19 12:16:32.901398 master-0 kubenswrapper[31830]: I0319 12:16:32.901355 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-pcp8m" Mar 19 12:16:32.931163 master-0 kubenswrapper[31830]: I0319 12:16:32.931121 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-r8qg7" Mar 19 12:16:32.935453 master-0 kubenswrapper[31830]: I0319 12:16:32.935423 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-hjms6" Mar 19 12:16:33.004891 master-0 kubenswrapper[31830]: I0319 12:16:33.004852 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 19 12:16:33.034491 master-0 kubenswrapper[31830]: I0319 12:16:33.034381 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 19 12:16:33.073652 master-0 kubenswrapper[31830]: I0319 12:16:33.073599 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 19 12:16:33.088961 master-0 kubenswrapper[31830]: I0319 12:16:33.088891 31830 patch_prober.go:28] interesting pod/console-695474f69-bz8b7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body= Mar 19 12:16:33.088961 master-0 kubenswrapper[31830]: I0319 12:16:33.088984 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-695474f69-bz8b7" podUID="6db87e99-89b9-4f97-b6ca-b236cc27b901" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" Mar 19 12:16:33.172758 master-0 kubenswrapper[31830]: I0319 12:16:33.172695 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 19 12:16:33.266339 master-0 kubenswrapper[31830]: I0319 12:16:33.266286 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 19 12:16:33.420239 master-0 kubenswrapper[31830]: I0319 12:16:33.420048 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 19 12:16:33.436925 master-0 kubenswrapper[31830]: I0319 12:16:33.436878 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 19 12:16:33.607700 master-0 kubenswrapper[31830]: I0319 12:16:33.607641 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-79l7s" Mar 19 12:16:33.625882 master-0 kubenswrapper[31830]: I0319 12:16:33.625834 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 19 12:16:33.635715 master-0 kubenswrapper[31830]: I0319 12:16:33.635666 31830 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 19 12:16:33.695532 master-0 kubenswrapper[31830]: I0319 12:16:33.695366 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 19 12:16:33.842901 master-0 kubenswrapper[31830]: I0319 12:16:33.842834 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 19 12:16:33.933904 master-0 kubenswrapper[31830]: I0319 12:16:33.933842 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 19 12:16:33.978131 master-0 kubenswrapper[31830]: I0319 12:16:33.978039 31830 patch_prober.go:28] interesting pod/console-69f4fb98cb-qvvqh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Mar 19 12:16:33.978131 master-0 kubenswrapper[31830]: I0319 12:16:33.978117 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-69f4fb98cb-qvvqh" podUID="be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Mar 19 12:16:33.979066 master-0 kubenswrapper[31830]: I0319 12:16:33.979016 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 19 12:16:34.049305 master-0 kubenswrapper[31830]: I0319 12:16:34.049241 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 19 12:16:34.072046 master-0 kubenswrapper[31830]: I0319 12:16:34.071985 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 19 12:16:34.105025 master-0 kubenswrapper[31830]: I0319 12:16:34.104956 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 19 12:16:34.221347 master-0 kubenswrapper[31830]: I0319 12:16:34.221276 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 19 12:16:34.293175 master-0 kubenswrapper[31830]: I0319 12:16:34.293040 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-lcwzg" Mar 19 12:16:34.451889 master-0 kubenswrapper[31830]: I0319 12:16:34.449680 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-sklzz" Mar 19 12:16:34.574908 master-0 kubenswrapper[31830]: I0319 12:16:34.574766 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 19 12:16:34.606111 master-0 kubenswrapper[31830]: I0319 12:16:34.606030 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 19 12:16:34.615498 master-0 kubenswrapper[31830]: I0319 12:16:34.615441 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 19 12:16:34.662990 master-0 kubenswrapper[31830]: I0319 12:16:34.662922 31830 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"whereabouts-flatfile-config" Mar 19 12:16:34.683758 master-0 kubenswrapper[31830]: I0319 12:16:34.683663 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 19 12:16:34.698850 master-0 kubenswrapper[31830]: I0319 12:16:34.698758 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 19 12:16:34.783880 master-0 kubenswrapper[31830]: I0319 12:16:34.783836 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 19 12:16:34.879468 master-0 kubenswrapper[31830]: I0319 12:16:34.879335 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 19 12:16:34.990673 master-0 kubenswrapper[31830]: I0319 12:16:34.990632 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 19 12:16:34.992710 master-0 kubenswrapper[31830]: I0319 12:16:34.992673 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-djzws" Mar 19 12:16:35.016060 master-0 kubenswrapper[31830]: I0319 12:16:35.016006 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Mar 19 12:16:35.068366 master-0 kubenswrapper[31830]: I0319 12:16:35.068314 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 19 12:16:35.145916 master-0 kubenswrapper[31830]: I0319 12:16:35.145745 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 19 12:16:35.154401 master-0 kubenswrapper[31830]: I0319 12:16:35.154352 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 19 12:16:35.196298 master-0 kubenswrapper[31830]: I0319 12:16:35.196237 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-v8nqn" Mar 19 12:16:35.198492 master-0 kubenswrapper[31830]: I0319 12:16:35.198406 31830 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 19 12:16:35.199726 master-0 kubenswrapper[31830]: I0319 12:16:35.199683 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 19 12:16:35.220646 master-0 kubenswrapper[31830]: I0319 12:16:35.220581 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 19 12:16:35.229500 master-0 kubenswrapper[31830]: I0319 12:16:35.229435 31830 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 19 12:16:35.268763 master-0 kubenswrapper[31830]: I0319 12:16:35.268675 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 19 12:16:35.352001 master-0 kubenswrapper[31830]: I0319 12:16:35.351902 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 19 12:16:35.358916 master-0 kubenswrapper[31830]: I0319 12:16:35.358786 31830 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"config" Mar 19 12:16:35.378294 master-0 kubenswrapper[31830]: I0319 12:16:35.378238 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-svqv2" Mar 19 12:16:35.547964 master-0 kubenswrapper[31830]: I0319 12:16:35.547891 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-67sx5" Mar 19 12:16:35.574859 master-0 kubenswrapper[31830]: I0319 12:16:35.574723 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 19 12:16:35.622633 master-0 kubenswrapper[31830]: I0319 12:16:35.622573 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 19 12:16:35.725645 master-0 kubenswrapper[31830]: I0319 12:16:35.725561 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 19 12:16:35.844444 master-0 kubenswrapper[31830]: I0319 12:16:35.844306 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-gfs2v" Mar 19 12:16:35.886975 master-0 kubenswrapper[31830]: I0319 12:16:35.886937 31830 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 19 12:16:35.887606 master-0 kubenswrapper[31830]: I0319 12:16:35.887572 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" containerName="startup-monitor" containerID="cri-o://5ba7acb3f3ec5aabe9892f5e134a406d00ab3f00ba8659c8d7820a5e0b7411f9" gracePeriod=5 Mar 19 12:16:36.293600 master-0 kubenswrapper[31830]: I0319 12:16:36.293557 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 19 12:16:36.294300 master-0 kubenswrapper[31830]: I0319 12:16:36.293672 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 19 12:16:36.360507 master-0 kubenswrapper[31830]: I0319 12:16:36.360438 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 19 12:16:36.504945 master-0 kubenswrapper[31830]: I0319 12:16:36.504889 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 19 12:16:36.675785 master-0 kubenswrapper[31830]: I0319 12:16:36.675620 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 19 12:16:36.747861 master-0 kubenswrapper[31830]: I0319 12:16:36.744231 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 19 12:16:36.888340 master-0 kubenswrapper[31830]: I0319 12:16:36.888291 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 19 12:16:37.082129 master-0 kubenswrapper[31830]: I0319 12:16:37.082078 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 19 
12:16:37.137844 master-0 kubenswrapper[31830]: I0319 12:16:37.137777 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 19 12:16:37.160367 master-0 kubenswrapper[31830]: I0319 12:16:37.160309 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 19 12:16:37.263748 master-0 kubenswrapper[31830]: I0319 12:16:37.263704 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 19 12:16:37.304851 master-0 kubenswrapper[31830]: I0319 12:16:37.304783 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 19 12:16:37.348959 master-0 kubenswrapper[31830]: I0319 12:16:37.348850 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-ets52rpou52es" Mar 19 12:16:37.372833 master-0 kubenswrapper[31830]: I0319 12:16:37.372770 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Mar 19 12:16:37.481200 master-0 kubenswrapper[31830]: I0319 12:16:37.481118 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 19 12:16:37.506019 master-0 kubenswrapper[31830]: I0319 12:16:37.505952 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 19 12:16:37.661053 master-0 kubenswrapper[31830]: I0319 12:16:37.660926 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 19 12:16:37.910170 master-0 kubenswrapper[31830]: I0319 12:16:37.910072 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 19 12:16:37.974604 master-0 kubenswrapper[31830]: I0319 12:16:37.974496 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 19 12:16:38.102771 master-0 kubenswrapper[31830]: I0319 12:16:38.102735 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 19 12:16:38.230993 master-0 kubenswrapper[31830]: I0319 12:16:38.230848 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-cmchf" Mar 19 12:16:38.406896 master-0 kubenswrapper[31830]: I0319 12:16:38.406861 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 19 12:16:38.928201 master-0 kubenswrapper[31830]: I0319 12:16:38.928158 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 19 12:16:39.045179 master-0 kubenswrapper[31830]: I0319 12:16:39.045096 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 19 12:16:40.119462 master-0 kubenswrapper[31830]: I0319 12:16:40.119392 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 19 12:16:40.366431 master-0 kubenswrapper[31830]: I0319 12:16:40.366370 31830 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 19 12:16:40.649508 master-0 kubenswrapper[31830]: I0319 12:16:40.649394 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 19 12:16:41.247252 master-0 kubenswrapper[31830]: I0319 12:16:41.247208 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_16fb4ea7f83036d9c6adf3454fc7e9db/startup-monitor/0.log" Mar 19 12:16:41.247252 master-0 kubenswrapper[31830]: I0319 12:16:41.247256 31830 generic.go:334] "Generic (PLEG): container finished" podID="16fb4ea7f83036d9c6adf3454fc7e9db" containerID="5ba7acb3f3ec5aabe9892f5e134a406d00ab3f00ba8659c8d7820a5e0b7411f9" exitCode=137 Mar 19 12:16:41.481881 master-0 kubenswrapper[31830]: I0319 12:16:41.481853 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_16fb4ea7f83036d9c6adf3454fc7e9db/startup-monitor/0.log" Mar 19 12:16:41.482140 master-0 kubenswrapper[31830]: I0319 12:16:41.482127 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:16:41.647785 master-0 kubenswrapper[31830]: I0319 12:16:41.647653 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests\") pod \"16fb4ea7f83036d9c6adf3454fc7e9db\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " Mar 19 12:16:41.648158 master-0 kubenswrapper[31830]: I0319 12:16:41.648127 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log\") pod \"16fb4ea7f83036d9c6adf3454fc7e9db\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " Mar 19 12:16:41.648446 master-0 kubenswrapper[31830]: I0319 12:16:41.647791 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests" (OuterVolumeSpecName: "manifests") pod "16fb4ea7f83036d9c6adf3454fc7e9db" (UID: "16fb4ea7f83036d9c6adf3454fc7e9db"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:16:41.648446 master-0 kubenswrapper[31830]: I0319 12:16:41.648242 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log" (OuterVolumeSpecName: "var-log") pod "16fb4ea7f83036d9c6adf3454fc7e9db" (UID: "16fb4ea7f83036d9c6adf3454fc7e9db"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:16:41.648756 master-0 kubenswrapper[31830]: I0319 12:16:41.648720 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock\") pod \"16fb4ea7f83036d9c6adf3454fc7e9db\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " Mar 19 12:16:41.648970 master-0 kubenswrapper[31830]: I0319 12:16:41.648903 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock" (OuterVolumeSpecName: "var-lock") pod "16fb4ea7f83036d9c6adf3454fc7e9db" (UID: "16fb4ea7f83036d9c6adf3454fc7e9db"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:16:41.649144 master-0 kubenswrapper[31830]: I0319 12:16:41.649115 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir\") pod \"16fb4ea7f83036d9c6adf3454fc7e9db\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " Mar 19 12:16:41.649337 master-0 kubenswrapper[31830]: I0319 12:16:41.649311 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir\") pod \"16fb4ea7f83036d9c6adf3454fc7e9db\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " Mar 19 12:16:41.649645 master-0 kubenswrapper[31830]: I0319 12:16:41.649173 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "16fb4ea7f83036d9c6adf3454fc7e9db" (UID: "16fb4ea7f83036d9c6adf3454fc7e9db"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:16:41.650172 master-0 kubenswrapper[31830]: I0319 12:16:41.650139 31830 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests\") on node \"master-0\" DevicePath \"\"" Mar 19 12:16:41.650377 master-0 kubenswrapper[31830]: I0319 12:16:41.650345 31830 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log\") on node \"master-0\" DevicePath \"\"" Mar 19 12:16:41.650536 master-0 kubenswrapper[31830]: I0319 12:16:41.650514 31830 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 19 12:16:41.650681 master-0 kubenswrapper[31830]: I0319 12:16:41.650658 31830 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 12:16:41.656775 master-0 kubenswrapper[31830]: I0319 12:16:41.656678 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "16fb4ea7f83036d9c6adf3454fc7e9db" (UID: "16fb4ea7f83036d9c6adf3454fc7e9db"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:16:41.692910 master-0 kubenswrapper[31830]: I0319 12:16:41.692764 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" path="/var/lib/kubelet/pods/16fb4ea7f83036d9c6adf3454fc7e9db/volumes" Mar 19 12:16:41.693389 master-0 kubenswrapper[31830]: I0319 12:16:41.693347 31830 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="" Mar 19 12:16:41.720520 master-0 kubenswrapper[31830]: I0319 12:16:41.720414 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 19 12:16:41.720520 master-0 kubenswrapper[31830]: I0319 12:16:41.720481 31830 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="c79f3c6e-a832-4d34-8e68-61d681f60eda" Mar 19 12:16:41.726029 master-0 kubenswrapper[31830]: I0319 12:16:41.725908 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 19 12:16:41.726029 master-0 kubenswrapper[31830]: I0319 12:16:41.726007 31830 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="c79f3c6e-a832-4d34-8e68-61d681f60eda" Mar 19 12:16:41.752755 master-0 kubenswrapper[31830]: I0319 12:16:41.752641 31830 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 12:16:42.259285 master-0 kubenswrapper[31830]: I0319 12:16:42.259205 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_16fb4ea7f83036d9c6adf3454fc7e9db/startup-monitor/0.log" Mar 19 12:16:42.260223 master-0 kubenswrapper[31830]: I0319 12:16:42.259320 31830 scope.go:117] "RemoveContainer" containerID="5ba7acb3f3ec5aabe9892f5e134a406d00ab3f00ba8659c8d7820a5e0b7411f9" Mar 19 12:16:42.260223 master-0 kubenswrapper[31830]: I0319 12:16:42.259440 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:16:43.089683 master-0 kubenswrapper[31830]: I0319 12:16:43.089591 31830 patch_prober.go:28] interesting pod/console-695474f69-bz8b7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body= Mar 19 12:16:43.089971 master-0 kubenswrapper[31830]: I0319 12:16:43.089731 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-695474f69-bz8b7" podUID="6db87e99-89b9-4f97-b6ca-b236cc27b901" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" Mar 19 12:16:43.978574 master-0 kubenswrapper[31830]: I0319 12:16:43.978478 31830 patch_prober.go:28] interesting pod/console-69f4fb98cb-qvvqh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Mar 19 12:16:43.978574 master-0 kubenswrapper[31830]: I0319 12:16:43.978552 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-69f4fb98cb-qvvqh" podUID="be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Mar 19 12:16:53.089556 master-0 kubenswrapper[31830]: I0319 12:16:53.089508 31830 patch_prober.go:28] interesting pod/console-695474f69-bz8b7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body= Mar 19 12:16:53.090428 master-0 kubenswrapper[31830]: I0319 12:16:53.090393 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-695474f69-bz8b7" podUID="6db87e99-89b9-4f97-b6ca-b236cc27b901" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" Mar 19 12:16:53.978387 master-0 kubenswrapper[31830]: I0319 12:16:53.978301 31830 patch_prober.go:28] interesting pod/console-69f4fb98cb-qvvqh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Mar 19 12:16:53.978622 master-0 kubenswrapper[31830]: I0319 12:16:53.978389 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-69f4fb98cb-qvvqh" podUID="be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Mar 19 12:17:03.088946 master-0 kubenswrapper[31830]: I0319 12:17:03.088890 31830 patch_prober.go:28] interesting pod/console-695474f69-bz8b7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body= Mar 19 12:17:03.089726 master-0 kubenswrapper[31830]: I0319 12:17:03.088956 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-695474f69-bz8b7" podUID="6db87e99-89b9-4f97-b6ca-b236cc27b901" containerName="console" 
probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" Mar 19 12:17:03.420696 master-0 kubenswrapper[31830]: I0319 12:17:03.420641 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_09672015532ae9d1d74ae4d426cd904b/kube-controller-manager/1.log" Mar 19 12:17:03.422250 master-0 kubenswrapper[31830]: I0319 12:17:03.422228 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_09672015532ae9d1d74ae4d426cd904b/kube-controller-manager/0.log" Mar 19 12:17:03.422384 master-0 kubenswrapper[31830]: I0319 12:17:03.422274 31830 generic.go:334] "Generic (PLEG): container finished" podID="09672015532ae9d1d74ae4d426cd904b" containerID="e5fbf9965772e33dc6dad1627c0ebaa9bcbb080610a9ab8137ea4a6a55a96ec1" exitCode=137 Mar 19 12:17:03.422384 master-0 kubenswrapper[31830]: I0319 12:17:03.422302 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"09672015532ae9d1d74ae4d426cd904b","Type":"ContainerDied","Data":"e5fbf9965772e33dc6dad1627c0ebaa9bcbb080610a9ab8137ea4a6a55a96ec1"} Mar 19 12:17:03.422384 master-0 kubenswrapper[31830]: I0319 12:17:03.422326 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"09672015532ae9d1d74ae4d426cd904b","Type":"ContainerStarted","Data":"a2f2d3c455898f0dff08ce78d00fccc2ef15d161401b675e3b61d3fc312756c6"} Mar 19 12:17:03.422384 master-0 kubenswrapper[31830]: I0319 12:17:03.422340 31830 scope.go:117] "RemoveContainer" containerID="0caac3ca6bbe34a0e2d497521111d7392578df46354c8eb9456dc2e8b18fadb9" Mar 19 12:17:03.978127 master-0 kubenswrapper[31830]: I0319 12:17:03.978012 31830 patch_prober.go:28] interesting pod/console-69f4fb98cb-qvvqh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Mar 19 12:17:03.978127 master-0 kubenswrapper[31830]: I0319 12:17:03.978066 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-69f4fb98cb-qvvqh" podUID="be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Mar 19 12:17:04.432334 master-0 kubenswrapper[31830]: I0319 12:17:04.432178 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_09672015532ae9d1d74ae4d426cd904b/kube-controller-manager/1.log" Mar 19 12:17:07.718222 master-0 kubenswrapper[31830]: I0319 12:17:07.718159 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:17:12.382164 master-0 kubenswrapper[31830]: I0319 12:17:12.382087 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:17:12.386744 master-0 kubenswrapper[31830]: I0319 12:17:12.386255 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:17:13.090089 master-0 kubenswrapper[31830]: I0319 
12:17:13.090039 31830 patch_prober.go:28] interesting pod/console-695474f69-bz8b7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body= Mar 19 12:17:13.090597 master-0 kubenswrapper[31830]: I0319 12:17:13.090119 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-695474f69-bz8b7" podUID="6db87e99-89b9-4f97-b6ca-b236cc27b901" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" Mar 19 12:17:13.978645 master-0 kubenswrapper[31830]: I0319 12:17:13.978592 31830 patch_prober.go:28] interesting pod/console-69f4fb98cb-qvvqh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Mar 19 12:17:13.979614 master-0 kubenswrapper[31830]: I0319 12:17:13.979558 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-69f4fb98cb-qvvqh" podUID="be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Mar 19 12:17:17.726232 master-0 kubenswrapper[31830]: I0319 12:17:17.726133 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:17:23.089695 master-0 kubenswrapper[31830]: I0319 12:17:23.089619 31830 patch_prober.go:28] interesting pod/console-695474f69-bz8b7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body= Mar 19 12:17:23.089695 master-0 kubenswrapper[31830]: I0319 12:17:23.089698 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-695474f69-bz8b7" podUID="6db87e99-89b9-4f97-b6ca-b236cc27b901" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" Mar 19 12:17:23.978002 master-0 kubenswrapper[31830]: I0319 12:17:23.977940 31830 patch_prober.go:28] interesting pod/console-69f4fb98cb-qvvqh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Mar 19 12:17:23.978198 master-0 kubenswrapper[31830]: I0319 12:17:23.978009 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-69f4fb98cb-qvvqh" podUID="be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Mar 19 12:17:29.494699 master-0 kubenswrapper[31830]: I0319 12:17:29.494609 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-b6697847c-bngch"] Mar 19 12:17:29.495653 master-0 kubenswrapper[31830]: E0319 12:17:29.494891 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" containerName="startup-monitor" Mar 19 12:17:29.495653 master-0 kubenswrapper[31830]: I0319 12:17:29.494904 31830 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" containerName="startup-monitor" Mar 19 12:17:29.495653 master-0 kubenswrapper[31830]: E0319 12:17:29.494929 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f93d242-a135-4284-8ace-704d0ae01afe" containerName="installer" Mar 19 12:17:29.495653 master-0 kubenswrapper[31830]: I0319 12:17:29.494935 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f93d242-a135-4284-8ace-704d0ae01afe" containerName="installer" Mar 19 12:17:29.495653 master-0 kubenswrapper[31830]: I0319 12:17:29.495060 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f93d242-a135-4284-8ace-704d0ae01afe" containerName="installer" Mar 19 12:17:29.495653 master-0 kubenswrapper[31830]: I0319 12:17:29.495088 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" containerName="startup-monitor" Mar 19 12:17:29.495653 master-0 kubenswrapper[31830]: I0319 12:17:29.495472 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.497827 master-0 kubenswrapper[31830]: I0319 12:17:29.497752 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 19 12:17:29.498666 master-0 kubenswrapper[31830]: I0319 12:17:29.498615 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 19 12:17:29.498666 master-0 kubenswrapper[31830]: I0319 12:17:29.498647 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 19 12:17:29.499514 master-0 kubenswrapper[31830]: I0319 12:17:29.499473 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 19 12:17:29.499858 master-0 kubenswrapper[31830]: I0319 12:17:29.499782 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 19 12:17:29.500043 master-0 kubenswrapper[31830]: I0319 12:17:29.500014 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 19 12:17:29.500395 master-0 kubenswrapper[31830]: I0319 12:17:29.500368 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-5xkbm" Mar 19 12:17:29.500588 master-0 kubenswrapper[31830]: I0319 12:17:29.500558 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 19 12:17:29.500783 master-0 kubenswrapper[31830]: I0319 12:17:29.500758 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 19 12:17:29.501024 master-0 kubenswrapper[31830]: I0319 12:17:29.500991 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 19 12:17:29.501221 master-0 kubenswrapper[31830]: I0319 12:17:29.501192 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 19 12:17:29.502110 master-0 kubenswrapper[31830]: I0319 12:17:29.502079 31830 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"kube-root-ca.crt" Mar 19 12:17:29.514052 master-0 kubenswrapper[31830]: I0319 12:17:29.513992 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 19 12:17:29.516432 master-0 kubenswrapper[31830]: I0319 12:17:29.516379 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-b6697847c-bngch"] Mar 19 12:17:29.544887 master-0 kubenswrapper[31830]: I0319 12:17:29.530777 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 19 12:17:29.615471 master-0 kubenswrapper[31830]: I0319 12:17:29.615382 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.615471 master-0 kubenswrapper[31830]: I0319 12:17:29.615458 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.615791 master-0 kubenswrapper[31830]: I0319 12:17:29.615507 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.615791 master-0 kubenswrapper[31830]: I0319 12:17:29.615569 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.615791 master-0 kubenswrapper[31830]: I0319 12:17:29.615612 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-user-template-error\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.615791 master-0 kubenswrapper[31830]: I0319 12:17:29.615652 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9c7b5826-aae3-41ee-afc4-035b1f6490a8-audit-dir\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 
12:17:29.615791 master-0 kubenswrapper[31830]: I0319 12:17:29.615695 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-session\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.615791 master-0 kubenswrapper[31830]: I0319 12:17:29.615778 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.616174 master-0 kubenswrapper[31830]: I0319 12:17:29.615861 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.616174 master-0 kubenswrapper[31830]: I0319 12:17:29.616015 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pkkm\" (UniqueName: \"kubernetes.io/projected/9c7b5826-aae3-41ee-afc4-035b1f6490a8-kube-api-access-9pkkm\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.616174 master-0 kubenswrapper[31830]: I0319 12:17:29.616153 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9c7b5826-aae3-41ee-afc4-035b1f6490a8-audit-policies\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.616350 master-0 kubenswrapper[31830]: I0319 12:17:29.616189 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-user-template-login\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.616350 master-0 kubenswrapper[31830]: I0319 12:17:29.616265 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.717840 master-0 kubenswrapper[31830]: I0319 12:17:29.717751 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-router-certs\") 
pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.718065 master-0 kubenswrapper[31830]: I0319 12:17:29.717900 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pkkm\" (UniqueName: \"kubernetes.io/projected/9c7b5826-aae3-41ee-afc4-035b1f6490a8-kube-api-access-9pkkm\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.718065 master-0 kubenswrapper[31830]: I0319 12:17:29.717970 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9c7b5826-aae3-41ee-afc4-035b1f6490a8-audit-policies\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.718065 master-0 kubenswrapper[31830]: I0319 12:17:29.718009 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-user-template-login\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.718065 master-0 kubenswrapper[31830]: I0319 12:17:29.718048 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.718260 master-0 kubenswrapper[31830]: I0319 12:17:29.718117 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.718260 master-0 kubenswrapper[31830]: I0319 12:17:29.718163 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.718469 master-0 kubenswrapper[31830]: I0319 12:17:29.718425 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.718628 master-0 kubenswrapper[31830]: I0319 12:17:29.718501 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.718628 master-0 kubenswrapper[31830]: I0319 12:17:29.718541 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-user-template-error\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.718628 master-0 kubenswrapper[31830]: I0319 12:17:29.718575 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9c7b5826-aae3-41ee-afc4-035b1f6490a8-audit-dir\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.718912 master-0 kubenswrapper[31830]: I0319 12:17:29.718626 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-session\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.718912 master-0 kubenswrapper[31830]: I0319 12:17:29.718704 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.718912 master-0 kubenswrapper[31830]: I0319 12:17:29.718740 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9c7b5826-aae3-41ee-afc4-035b1f6490a8-audit-policies\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.718912 master-0 kubenswrapper[31830]: E0319 12:17:29.718887 31830 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: secret "v4-0-config-system-session" not found Mar 19 12:17:29.719076 master-0 kubenswrapper[31830]: E0319 12:17:29.718938 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-session podName:9c7b5826-aae3-41ee-afc4-035b1f6490a8 nodeName:}" failed. No retries permitted until 2026-03-19 12:17:30.218921586 +0000 UTC m=+188.767882290 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-session") pod "oauth-openshift-b6697847c-bngch" (UID: "9c7b5826-aae3-41ee-afc4-035b1f6490a8") : secret "v4-0-config-system-session" not found Mar 19 12:17:29.719527 master-0 kubenswrapper[31830]: E0319 12:17:29.719488 31830 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Mar 19 12:17:29.719618 master-0 kubenswrapper[31830]: E0319 12:17:29.719534 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-cliconfig podName:9c7b5826-aae3-41ee-afc4-035b1f6490a8 nodeName:}" failed. No retries permitted until 2026-03-19 12:17:30.219522625 +0000 UTC m=+188.768483329 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-cliconfig") pod "oauth-openshift-b6697847c-bngch" (UID: "9c7b5826-aae3-41ee-afc4-035b1f6490a8") : configmap "v4-0-config-system-cliconfig" not found Mar 19 12:17:29.720079 master-0 kubenswrapper[31830]: I0319 12:17:29.720040 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.720223 master-0 kubenswrapper[31830]: I0319 12:17:29.720186 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9c7b5826-aae3-41ee-afc4-035b1f6490a8-audit-dir\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.720651 master-0 kubenswrapper[31830]: I0319 12:17:29.720614 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.722747 master-0 kubenswrapper[31830]: I0319 12:17:29.722709 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.723307 master-0 kubenswrapper[31830]: I0319 12:17:29.723250 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.724442 master-0 kubenswrapper[31830]: I0319 
12:17:29.724330 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-user-template-login\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.734355 master-0 kubenswrapper[31830]: I0319 12:17:29.734307 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.734486 master-0 kubenswrapper[31830]: I0319 12:17:29.734387 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-user-template-error\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.734718 master-0 kubenswrapper[31830]: I0319 12:17:29.734686 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:29.745646 master-0 kubenswrapper[31830]: I0319 12:17:29.745524 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pkkm\" (UniqueName: \"kubernetes.io/projected/9c7b5826-aae3-41ee-afc4-035b1f6490a8-kube-api-access-9pkkm\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:30.225292 master-0 kubenswrapper[31830]: I0319 12:17:30.225206 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:30.225610 master-0 kubenswrapper[31830]: E0319 12:17:30.225422 31830 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Mar 19 12:17:30.225610 master-0 kubenswrapper[31830]: E0319 12:17:30.225546 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-cliconfig podName:9c7b5826-aae3-41ee-afc4-035b1f6490a8 nodeName:}" failed. No retries permitted until 2026-03-19 12:17:31.225517395 +0000 UTC m=+189.774478139 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-cliconfig") pod "oauth-openshift-b6697847c-bngch" (UID: "9c7b5826-aae3-41ee-afc4-035b1f6490a8") : configmap "v4-0-config-system-cliconfig" not found Mar 19 12:17:30.225610 master-0 kubenswrapper[31830]: I0319 12:17:30.225442 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-session\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:30.226045 master-0 kubenswrapper[31830]: E0319 12:17:30.225612 31830 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: secret "v4-0-config-system-session" not found Mar 19 12:17:30.226045 master-0 kubenswrapper[31830]: E0319 12:17:30.225726 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-session podName:9c7b5826-aae3-41ee-afc4-035b1f6490a8 nodeName:}" failed. No retries permitted until 2026-03-19 12:17:31.225689851 +0000 UTC m=+189.774650605 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-session") pod "oauth-openshift-b6697847c-bngch" (UID: "9c7b5826-aae3-41ee-afc4-035b1f6490a8") : secret "v4-0-config-system-session" not found Mar 19 12:17:31.243030 master-0 kubenswrapper[31830]: I0319 12:17:31.242850 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-session\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:31.243767 master-0 kubenswrapper[31830]: I0319 12:17:31.243061 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:31.243767 master-0 kubenswrapper[31830]: E0319 12:17:31.243278 31830 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Mar 19 12:17:31.243767 master-0 kubenswrapper[31830]: E0319 12:17:31.243432 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-cliconfig podName:9c7b5826-aae3-41ee-afc4-035b1f6490a8 nodeName:}" failed. No retries permitted until 2026-03-19 12:17:33.24339456 +0000 UTC m=+191.792355304 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-cliconfig") pod "oauth-openshift-b6697847c-bngch" (UID: "9c7b5826-aae3-41ee-afc4-035b1f6490a8") : configmap "v4-0-config-system-cliconfig" not found Mar 19 12:17:31.246519 master-0 kubenswrapper[31830]: I0319 12:17:31.246458 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-session\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:33.089083 master-0 kubenswrapper[31830]: I0319 12:17:33.089024 31830 patch_prober.go:28] interesting pod/console-695474f69-bz8b7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body= Mar 19 12:17:33.089574 master-0 kubenswrapper[31830]: I0319 12:17:33.089112 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-695474f69-bz8b7" podUID="6db87e99-89b9-4f97-b6ca-b236cc27b901" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" Mar 19 12:17:33.269717 master-0 kubenswrapper[31830]: I0319 12:17:33.269660 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-b6697847c-bngch\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:33.270007 master-0 kubenswrapper[31830]: E0319 12:17:33.269758 31830 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Mar 19 12:17:33.270007 master-0 kubenswrapper[31830]: E0319 12:17:33.269853 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-cliconfig podName:9c7b5826-aae3-41ee-afc4-035b1f6490a8 nodeName:}" failed. No retries permitted until 2026-03-19 12:17:37.269828689 +0000 UTC m=+195.818789393 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-cliconfig") pod "oauth-openshift-b6697847c-bngch" (UID: "9c7b5826-aae3-41ee-afc4-035b1f6490a8") : configmap "v4-0-config-system-cliconfig" not found Mar 19 12:17:33.978771 master-0 kubenswrapper[31830]: I0319 12:17:33.978701 31830 patch_prober.go:28] interesting pod/console-69f4fb98cb-qvvqh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Mar 19 12:17:33.978977 master-0 kubenswrapper[31830]: I0319 12:17:33.978870 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-69f4fb98cb-qvvqh" podUID="be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Mar 19 12:17:35.648190 master-0 kubenswrapper[31830]: I0319 12:17:35.648101 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-b6697847c-bngch"] Mar 19 12:17:35.649010 master-0 kubenswrapper[31830]: E0319 12:17:35.648696 31830 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[v4-0-config-system-cliconfig], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-authentication/oauth-openshift-b6697847c-bngch" podUID="9c7b5826-aae3-41ee-afc4-035b1f6490a8" Mar 19 12:17:35.685867 master-0 kubenswrapper[31830]: I0319 12:17:35.685320 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:35.701109 master-0 kubenswrapper[31830]: I0319 12:17:35.696003 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:35.809744 master-0 kubenswrapper[31830]: I0319 12:17:35.809684 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-user-template-error\") pod \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " Mar 19 12:17:35.809998 master-0 kubenswrapper[31830]: I0319 12:17:35.809772 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-user-template-provider-selection\") pod \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " Mar 19 12:17:35.809998 master-0 kubenswrapper[31830]: I0319 12:17:35.809870 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9c7b5826-aae3-41ee-afc4-035b1f6490a8-audit-policies\") pod \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " Mar 19 12:17:35.809998 master-0 kubenswrapper[31830]: I0319 12:17:35.809907 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-serving-cert\") pod \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " Mar 19 12:17:35.809998 master-0 kubenswrapper[31830]: I0319 12:17:35.809951 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-session\") pod \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " Mar 19 12:17:35.810136 master-0 kubenswrapper[31830]: I0319 12:17:35.810008 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pkkm\" (UniqueName: \"kubernetes.io/projected/9c7b5826-aae3-41ee-afc4-035b1f6490a8-kube-api-access-9pkkm\") pod \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " Mar 19 12:17:35.810176 master-0 kubenswrapper[31830]: I0319 12:17:35.810136 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-service-ca\") pod \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " Mar 19 12:17:35.810209 master-0 kubenswrapper[31830]: I0319 12:17:35.810193 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-trusted-ca-bundle\") pod \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " Mar 19 12:17:35.810548 master-0 kubenswrapper[31830]: I0319 12:17:35.810434 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9c7b5826-aae3-41ee-afc4-035b1f6490a8-audit-dir\") pod 
\"9c7b5826-aae3-41ee-afc4-035b1f6490a8\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " Mar 19 12:17:35.810548 master-0 kubenswrapper[31830]: I0319 12:17:35.810474 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c7b5826-aae3-41ee-afc4-035b1f6490a8-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "9c7b5826-aae3-41ee-afc4-035b1f6490a8" (UID: "9c7b5826-aae3-41ee-afc4-035b1f6490a8"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:17:35.810548 master-0 kubenswrapper[31830]: I0319 12:17:35.810518 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c7b5826-aae3-41ee-afc4-035b1f6490a8-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "9c7b5826-aae3-41ee-afc4-035b1f6490a8" (UID: "9c7b5826-aae3-41ee-afc4-035b1f6490a8"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:17:35.810715 master-0 kubenswrapper[31830]: I0319 12:17:35.810582 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-ocp-branding-template\") pod \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " Mar 19 12:17:35.810715 master-0 kubenswrapper[31830]: I0319 12:17:35.810638 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-router-certs\") pod \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " Mar 19 12:17:35.810715 master-0 kubenswrapper[31830]: I0319 12:17:35.810706 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-user-template-login\") pod \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\" (UID: \"9c7b5826-aae3-41ee-afc4-035b1f6490a8\") " Mar 19 12:17:35.811614 master-0 kubenswrapper[31830]: I0319 12:17:35.811405 31830 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9c7b5826-aae3-41ee-afc4-035b1f6490a8-audit-policies\") on node \"master-0\" DevicePath \"\"" Mar 19 12:17:35.811614 master-0 kubenswrapper[31830]: I0319 12:17:35.811446 31830 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9c7b5826-aae3-41ee-afc4-035b1f6490a8-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 12:17:35.811825 master-0 kubenswrapper[31830]: I0319 12:17:35.811785 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "9c7b5826-aae3-41ee-afc4-035b1f6490a8" (UID: "9c7b5826-aae3-41ee-afc4-035b1f6490a8"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:17:35.812976 master-0 kubenswrapper[31830]: I0319 12:17:35.812938 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "9c7b5826-aae3-41ee-afc4-035b1f6490a8" (UID: "9c7b5826-aae3-41ee-afc4-035b1f6490a8"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:17:35.814365 master-0 kubenswrapper[31830]: I0319 12:17:35.813410 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "9c7b5826-aae3-41ee-afc4-035b1f6490a8" (UID: "9c7b5826-aae3-41ee-afc4-035b1f6490a8"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:17:35.814443 master-0 kubenswrapper[31830]: I0319 12:17:35.814291 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "9c7b5826-aae3-41ee-afc4-035b1f6490a8" (UID: "9c7b5826-aae3-41ee-afc4-035b1f6490a8"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:17:35.814517 master-0 kubenswrapper[31830]: I0319 12:17:35.814461 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "9c7b5826-aae3-41ee-afc4-035b1f6490a8" (UID: "9c7b5826-aae3-41ee-afc4-035b1f6490a8"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:17:35.815252 master-0 kubenswrapper[31830]: I0319 12:17:35.815187 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "9c7b5826-aae3-41ee-afc4-035b1f6490a8" (UID: "9c7b5826-aae3-41ee-afc4-035b1f6490a8"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:17:35.815331 master-0 kubenswrapper[31830]: I0319 12:17:35.815280 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c7b5826-aae3-41ee-afc4-035b1f6490a8-kube-api-access-9pkkm" (OuterVolumeSpecName: "kube-api-access-9pkkm") pod "9c7b5826-aae3-41ee-afc4-035b1f6490a8" (UID: "9c7b5826-aae3-41ee-afc4-035b1f6490a8"). InnerVolumeSpecName "kube-api-access-9pkkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:17:35.815736 master-0 kubenswrapper[31830]: I0319 12:17:35.815660 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "9c7b5826-aae3-41ee-afc4-035b1f6490a8" (UID: "9c7b5826-aae3-41ee-afc4-035b1f6490a8"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:17:35.816759 master-0 kubenswrapper[31830]: I0319 12:17:35.816717 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "9c7b5826-aae3-41ee-afc4-035b1f6490a8" (UID: "9c7b5826-aae3-41ee-afc4-035b1f6490a8"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:17:35.817136 master-0 kubenswrapper[31830]: I0319 12:17:35.817098 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "9c7b5826-aae3-41ee-afc4-035b1f6490a8" (UID: "9c7b5826-aae3-41ee-afc4-035b1f6490a8"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:17:35.912701 master-0 kubenswrapper[31830]: I0319 12:17:35.912556 31830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 19 12:17:35.912701 master-0 kubenswrapper[31830]: I0319 12:17:35.912607 31830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:17:35.912701 master-0 kubenswrapper[31830]: I0319 12:17:35.912621 31830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Mar 19 12:17:35.912701 master-0 kubenswrapper[31830]: I0319 12:17:35.912637 31830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Mar 19 12:17:35.912701 master-0 kubenswrapper[31830]: I0319 12:17:35.912649 31830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\"" Mar 19 12:17:35.912701 master-0 kubenswrapper[31830]: I0319 12:17:35.912662 31830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Mar 19 12:17:35.912701 master-0 kubenswrapper[31830]: I0319 12:17:35.912674 31830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\"" Mar 19 12:17:35.912701 master-0 kubenswrapper[31830]: I0319 12:17:35.912686 31830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 12:17:35.912701 master-0 kubenswrapper[31830]: I0319 12:17:35.912698 31830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Mar 19 12:17:35.912701 master-0 kubenswrapper[31830]: I0319 12:17:35.912711 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pkkm\" (UniqueName: \"kubernetes.io/projected/9c7b5826-aae3-41ee-afc4-035b1f6490a8-kube-api-access-9pkkm\") on node \"master-0\" DevicePath \"\"" Mar 19 12:17:36.692730 master-0 kubenswrapper[31830]: I0319 12:17:36.692686 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-b6697847c-bngch" Mar 19 12:17:36.758219 master-0 kubenswrapper[31830]: I0319 12:17:36.758161 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv"] Mar 19 12:17:36.759319 master-0 kubenswrapper[31830]: I0319 12:17:36.759291 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.763106 master-0 kubenswrapper[31830]: I0319 12:17:36.763043 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 19 12:17:36.763367 master-0 kubenswrapper[31830]: I0319 12:17:36.763306 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 19 12:17:36.763637 master-0 kubenswrapper[31830]: I0319 12:17:36.763575 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-b6697847c-bngch"] Mar 19 12:17:36.763747 master-0 kubenswrapper[31830]: I0319 12:17:36.763652 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-5xkbm" Mar 19 12:17:36.763938 master-0 kubenswrapper[31830]: I0319 12:17:36.763899 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 19 12:17:36.764029 master-0 kubenswrapper[31830]: I0319 12:17:36.763958 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 19 12:17:36.764407 master-0 kubenswrapper[31830]: I0319 12:17:36.764366 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 19 12:17:36.764504 master-0 kubenswrapper[31830]: I0319 12:17:36.764435 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 19 12:17:36.764566 master-0 kubenswrapper[31830]: I0319 12:17:36.764555 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 19 12:17:36.765124 master-0 kubenswrapper[31830]: I0319 12:17:36.765083 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 19 12:17:36.765241 master-0 kubenswrapper[31830]: I0319 12:17:36.765140 31830 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"kube-root-ca.crt" Mar 19 12:17:36.765613 master-0 kubenswrapper[31830]: I0319 12:17:36.765580 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 19 12:17:36.787723 master-0 kubenswrapper[31830]: I0319 12:17:36.787688 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 19 12:17:36.787940 master-0 kubenswrapper[31830]: I0319 12:17:36.787721 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 19 12:17:36.789135 master-0 kubenswrapper[31830]: I0319 12:17:36.789078 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 19 12:17:36.795611 master-0 kubenswrapper[31830]: I0319 12:17:36.795543 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-b6697847c-bngch"] Mar 19 12:17:36.808954 master-0 kubenswrapper[31830]: I0319 12:17:36.808868 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv"] Mar 19 12:17:36.825595 master-0 kubenswrapper[31830]: I0319 12:17:36.825532 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vl92\" (UniqueName: \"kubernetes.io/projected/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-kube-api-access-6vl92\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.825595 master-0 kubenswrapper[31830]: I0319 12:17:36.825590 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-user-template-login\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.825899 master-0 kubenswrapper[31830]: I0319 12:17:36.825626 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.825899 master-0 kubenswrapper[31830]: I0319 12:17:36.825690 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-service-ca\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.825899 master-0 kubenswrapper[31830]: I0319 12:17:36.825713 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-user-template-error\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: 
\"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.825899 master-0 kubenswrapper[31830]: I0319 12:17:36.825740 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-audit-policies\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.826137 master-0 kubenswrapper[31830]: I0319 12:17:36.825935 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.826137 master-0 kubenswrapper[31830]: I0319 12:17:36.825989 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-audit-dir\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.826137 master-0 kubenswrapper[31830]: I0319 12:17:36.826038 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.826137 master-0 kubenswrapper[31830]: I0319 12:17:36.826070 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-session\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.826137 master-0 kubenswrapper[31830]: I0319 12:17:36.826102 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-router-certs\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.826137 master-0 kubenswrapper[31830]: I0319 12:17:36.826131 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.826482 master-0 kubenswrapper[31830]: I0319 12:17:36.826206 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.928844 master-0 kubenswrapper[31830]: I0319 12:17:36.927879 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.928844 master-0 kubenswrapper[31830]: I0319 12:17:36.927991 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-audit-dir\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.928844 master-0 kubenswrapper[31830]: I0319 12:17:36.928061 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.928844 master-0 kubenswrapper[31830]: I0319 12:17:36.928123 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-session\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.928844 master-0 kubenswrapper[31830]: I0319 12:17:36.928184 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-router-certs\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.928844 master-0 kubenswrapper[31830]: I0319 12:17:36.928240 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.928844 master-0 kubenswrapper[31830]: I0319 12:17:36.928353 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.928844 master-0 kubenswrapper[31830]: 
I0319 12:17:36.928463 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vl92\" (UniqueName: \"kubernetes.io/projected/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-kube-api-access-6vl92\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.928844 master-0 kubenswrapper[31830]: I0319 12:17:36.928528 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-user-template-login\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.928844 master-0 kubenswrapper[31830]: I0319 12:17:36.928600 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.928844 master-0 kubenswrapper[31830]: I0319 12:17:36.928686 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-service-ca\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.928844 master-0 kubenswrapper[31830]: I0319 12:17:36.928744 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-user-template-error\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.928844 master-0 kubenswrapper[31830]: I0319 12:17:36.928837 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-audit-policies\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.930590 master-0 kubenswrapper[31830]: I0319 12:17:36.928975 31830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9c7b5826-aae3-41ee-afc4-035b1f6490a8-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\"" Mar 19 12:17:36.930590 master-0 kubenswrapper[31830]: E0319 12:17:36.929480 31830 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Mar 19 12:17:36.930590 master-0 kubenswrapper[31830]: E0319 12:17:36.929576 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-cliconfig podName:e3f23f8a-0a1f-47e3-b40c-9503a88809f9 nodeName:}" failed. 
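Both oauth-openshift pods hit the same error: v4-0-config-system-cliconfig is briefly absent, which is consistent with the authentication operator recreating it during the rollout; the reflector cache repopulates at 12:17:36.764 above and the mount then succeeds at 12:17:37.437 below. A client-go triage sketch that checks the configmap directly; the kubeconfig path is an assumption:

package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Loads ~/.kube/config; inside a pod you would use rest.InClusterConfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	_, err = cs.CoreV1().ConfigMaps("openshift-authentication").
		Get(context.TODO(), "v4-0-config-system-cliconfig", metav1.GetOptions{})
	switch {
	case apierrors.IsNotFound(err):
		fmt.Println("configmap absent: mounts keep failing until it is recreated")
	case err != nil:
		panic(err)
	default:
		fmt.Println("configmap present: the kubelet retry should now succeed")
	}
}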
No retries permitted until 2026-03-19 12:17:37.429548282 +0000 UTC m=+195.978509016 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-cliconfig") pod "oauth-openshift-f7b6b8b77-5dcqv" (UID: "e3f23f8a-0a1f-47e3-b40c-9503a88809f9") : configmap "v4-0-config-system-cliconfig" not found Mar 19 12:17:36.930590 master-0 kubenswrapper[31830]: I0319 12:17:36.930515 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-audit-policies\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.941843 master-0 kubenswrapper[31830]: I0319 12:17:36.932970 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-audit-dir\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.941843 master-0 kubenswrapper[31830]: I0319 12:17:36.937196 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.941843 master-0 kubenswrapper[31830]: I0319 12:17:36.941654 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.942345 master-0 kubenswrapper[31830]: I0319 12:17:36.942135 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-service-ca\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.950909 master-0 kubenswrapper[31830]: I0319 12:17:36.946737 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-session\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.950909 master-0 kubenswrapper[31830]: I0319 12:17:36.948524 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 
12:17:36.959890 master-0 kubenswrapper[31830]: I0319 12:17:36.959830 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-router-certs\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.969859 master-0 kubenswrapper[31830]: I0319 12:17:36.968307 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-user-template-login\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.977833 master-0 kubenswrapper[31830]: I0319 12:17:36.975366 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vl92\" (UniqueName: \"kubernetes.io/projected/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-kube-api-access-6vl92\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.977833 master-0 kubenswrapper[31830]: I0319 12:17:36.977283 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:36.992823 master-0 kubenswrapper[31830]: I0319 12:17:36.989086 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-user-template-error\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:37.436166 master-0 kubenswrapper[31830]: I0319 12:17:37.436056 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:37.437408 master-0 kubenswrapper[31830]: I0319 12:17:37.437330 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-f7b6b8b77-5dcqv\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:37.691276 master-0 kubenswrapper[31830]: I0319 12:17:37.691121 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c7b5826-aae3-41ee-afc4-035b1f6490a8" path="/var/lib/kubelet/pods/9c7b5826-aae3-41ee-afc4-035b1f6490a8/volumes" Mar 19 12:17:37.691276 master-0 kubenswrapper[31830]: I0319 12:17:37.691233 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:17:38.148076 master-0 kubenswrapper[31830]: W0319 12:17:38.147814 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3f23f8a_0a1f_47e3_b40c_9503a88809f9.slice/crio-04e8fd52ab7b08e929542c59ecc7a2b5d8f4db4474947829a16ae3c8c5f8b6fd WatchSource:0}: Error finding container 04e8fd52ab7b08e929542c59ecc7a2b5d8f4db4474947829a16ae3c8c5f8b6fd: Status 404 returned error can't find the container with id 04e8fd52ab7b08e929542c59ecc7a2b5d8f4db4474947829a16ae3c8c5f8b6fd Mar 19 12:17:38.155399 master-0 kubenswrapper[31830]: I0319 12:17:38.155364 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv"] Mar 19 12:17:38.717666 master-0 kubenswrapper[31830]: I0319 12:17:38.717610 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" event={"ID":"e3f23f8a-0a1f-47e3-b40c-9503a88809f9","Type":"ContainerStarted","Data":"04e8fd52ab7b08e929542c59ecc7a2b5d8f4db4474947829a16ae3c8c5f8b6fd"} Mar 19 12:17:38.999822 master-0 kubenswrapper[31830]: I0319 12:17:38.999734 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-8p7qr"] Mar 19 12:17:39.000591 master-0 kubenswrapper[31830]: I0319 12:17:39.000556 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-8p7qr" Mar 19 12:17:39.002757 master-0 kubenswrapper[31830]: I0319 12:17:39.002716 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-9wk7c" Mar 19 12:17:39.002887 master-0 kubenswrapper[31830]: I0319 12:17:39.002716 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 19 12:17:39.060232 master-0 kubenswrapper[31830]: I0319 12:17:39.060171 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6241ae9b-177b-4d97-9366-479855d8464f-host\") pod \"node-ca-8p7qr\" (UID: \"6241ae9b-177b-4d97-9366-479855d8464f\") " pod="openshift-image-registry/node-ca-8p7qr" Mar 19 12:17:39.060232 master-0 kubenswrapper[31830]: I0319 12:17:39.060236 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6241ae9b-177b-4d97-9366-479855d8464f-serviceca\") pod \"node-ca-8p7qr\" (UID: \"6241ae9b-177b-4d97-9366-479855d8464f\") " pod="openshift-image-registry/node-ca-8p7qr" Mar 19 12:17:39.060484 master-0 kubenswrapper[31830]: I0319 12:17:39.060305 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxxqf\" (UniqueName: \"kubernetes.io/projected/6241ae9b-177b-4d97-9366-479855d8464f-kube-api-access-wxxqf\") pod \"node-ca-8p7qr\" (UID: \"6241ae9b-177b-4d97-9366-479855d8464f\") " pod="openshift-image-registry/node-ca-8p7qr" Mar 19 12:17:39.161873 master-0 kubenswrapper[31830]: I0319 12:17:39.161806 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6241ae9b-177b-4d97-9366-479855d8464f-host\") pod \"node-ca-8p7qr\" (UID: \"6241ae9b-177b-4d97-9366-479855d8464f\") " pod="openshift-image-registry/node-ca-8p7qr" Mar 19 12:17:39.161873 
master-0 kubenswrapper[31830]: I0319 12:17:39.161859 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6241ae9b-177b-4d97-9366-479855d8464f-serviceca\") pod \"node-ca-8p7qr\" (UID: \"6241ae9b-177b-4d97-9366-479855d8464f\") " pod="openshift-image-registry/node-ca-8p7qr" Mar 19 12:17:39.162665 master-0 kubenswrapper[31830]: I0319 12:17:39.161902 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxxqf\" (UniqueName: \"kubernetes.io/projected/6241ae9b-177b-4d97-9366-479855d8464f-kube-api-access-wxxqf\") pod \"node-ca-8p7qr\" (UID: \"6241ae9b-177b-4d97-9366-479855d8464f\") " pod="openshift-image-registry/node-ca-8p7qr" Mar 19 12:17:39.162665 master-0 kubenswrapper[31830]: I0319 12:17:39.161971 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6241ae9b-177b-4d97-9366-479855d8464f-host\") pod \"node-ca-8p7qr\" (UID: \"6241ae9b-177b-4d97-9366-479855d8464f\") " pod="openshift-image-registry/node-ca-8p7qr" Mar 19 12:17:39.162928 master-0 kubenswrapper[31830]: I0319 12:17:39.162889 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6241ae9b-177b-4d97-9366-479855d8464f-serviceca\") pod \"node-ca-8p7qr\" (UID: \"6241ae9b-177b-4d97-9366-479855d8464f\") " pod="openshift-image-registry/node-ca-8p7qr" Mar 19 12:17:39.182955 master-0 kubenswrapper[31830]: I0319 12:17:39.181192 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxxqf\" (UniqueName: \"kubernetes.io/projected/6241ae9b-177b-4d97-9366-479855d8464f-kube-api-access-wxxqf\") pod \"node-ca-8p7qr\" (UID: \"6241ae9b-177b-4d97-9366-479855d8464f\") " pod="openshift-image-registry/node-ca-8p7qr" Mar 19 12:17:39.327023 master-0 kubenswrapper[31830]: I0319 12:17:39.326743 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-8p7qr" Mar 19 12:17:39.344363 master-0 kubenswrapper[31830]: W0319 12:17:39.344276 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6241ae9b_177b_4d97_9366_479855d8464f.slice/crio-2209bf9a46225bd2d64107d6ab36e228b64f52d1e588b6d5647f50f44071663a WatchSource:0}: Error finding container 2209bf9a46225bd2d64107d6ab36e228b64f52d1e588b6d5647f50f44071663a: Status 404 returned error can't find the container with id 2209bf9a46225bd2d64107d6ab36e228b64f52d1e588b6d5647f50f44071663a Mar 19 12:17:39.727215 master-0 kubenswrapper[31830]: I0319 12:17:39.727156 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8p7qr" event={"ID":"6241ae9b-177b-4d97-9366-479855d8464f","Type":"ContainerStarted","Data":"2209bf9a46225bd2d64107d6ab36e228b64f52d1e588b6d5647f50f44071663a"} Mar 19 12:17:40.911697 master-0 kubenswrapper[31830]: I0319 12:17:40.911607 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq"] Mar 19 12:17:40.919390 master-0 kubenswrapper[31830]: I0319 12:17:40.919334 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq" Mar 19 12:17:40.922552 master-0 kubenswrapper[31830]: I0319 12:17:40.922493 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-bfonl27j4vul7" Mar 19 12:17:40.922739 master-0 kubenswrapper[31830]: I0319 12:17:40.922713 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Mar 19 12:17:40.922888 master-0 kubenswrapper[31830]: I0319 12:17:40.922861 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Mar 19 12:17:40.923422 master-0 kubenswrapper[31830]: I0319 12:17:40.923031 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Mar 19 12:17:40.923422 master-0 kubenswrapper[31830]: I0319 12:17:40.923166 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Mar 19 12:17:40.923422 master-0 kubenswrapper[31830]: I0319 12:17:40.923192 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Mar 19 12:17:40.943770 master-0 kubenswrapper[31830]: I0319 12:17:40.943689 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq"] Mar 19 12:17:40.998039 master-0 kubenswrapper[31830]: I0319 12:17:40.997989 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3d3b5c49-51a9-465a-b6e9-b0107612c311-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5ff76c69fd-pt6vq\" (UID: \"3d3b5c49-51a9-465a-b6e9-b0107612c311\") " pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq" Mar 19 12:17:40.998177 master-0 kubenswrapper[31830]: I0319 12:17:40.998060 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/3d3b5c49-51a9-465a-b6e9-b0107612c311-secret-grpc-tls\") pod \"thanos-querier-5ff76c69fd-pt6vq\" (UID: \"3d3b5c49-51a9-465a-b6e9-b0107612c311\") " pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq" Mar 19 12:17:40.998177 master-0 kubenswrapper[31830]: I0319 12:17:40.998086 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/3d3b5c49-51a9-465a-b6e9-b0107612c311-secret-thanos-querier-tls\") pod \"thanos-querier-5ff76c69fd-pt6vq\" (UID: \"3d3b5c49-51a9-465a-b6e9-b0107612c311\") " pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq" Mar 19 12:17:40.998177 master-0 kubenswrapper[31830]: I0319 12:17:40.998108 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/3d3b5c49-51a9-465a-b6e9-b0107612c311-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5ff76c69fd-pt6vq\" (UID: \"3d3b5c49-51a9-465a-b6e9-b0107612c311\") " pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq" Mar 19 12:17:40.998177 master-0 kubenswrapper[31830]: I0319 12:17:40.998133 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-8h6lb\" (UniqueName: \"kubernetes.io/projected/3d3b5c49-51a9-465a-b6e9-b0107612c311-kube-api-access-8h6lb\") pod \"thanos-querier-5ff76c69fd-pt6vq\" (UID: \"3d3b5c49-51a9-465a-b6e9-b0107612c311\") " pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq" Mar 19 12:17:40.998177 master-0 kubenswrapper[31830]: I0319 12:17:40.998162 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/3d3b5c49-51a9-465a-b6e9-b0107612c311-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5ff76c69fd-pt6vq\" (UID: \"3d3b5c49-51a9-465a-b6e9-b0107612c311\") " pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq" Mar 19 12:17:40.998342 master-0 kubenswrapper[31830]: I0319 12:17:40.998182 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3d3b5c49-51a9-465a-b6e9-b0107612c311-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5ff76c69fd-pt6vq\" (UID: \"3d3b5c49-51a9-465a-b6e9-b0107612c311\") " pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq" Mar 19 12:17:40.998342 master-0 kubenswrapper[31830]: I0319 12:17:40.998228 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3d3b5c49-51a9-465a-b6e9-b0107612c311-metrics-client-ca\") pod \"thanos-querier-5ff76c69fd-pt6vq\" (UID: \"3d3b5c49-51a9-465a-b6e9-b0107612c311\") " pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq" Mar 19 12:17:41.099428 master-0 kubenswrapper[31830]: I0319 12:17:41.099376 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/3d3b5c49-51a9-465a-b6e9-b0107612c311-secret-grpc-tls\") pod \"thanos-querier-5ff76c69fd-pt6vq\" (UID: \"3d3b5c49-51a9-465a-b6e9-b0107612c311\") " pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq" Mar 19 12:17:41.099428 master-0 kubenswrapper[31830]: I0319 12:17:41.099414 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/3d3b5c49-51a9-465a-b6e9-b0107612c311-secret-thanos-querier-tls\") pod \"thanos-querier-5ff76c69fd-pt6vq\" (UID: \"3d3b5c49-51a9-465a-b6e9-b0107612c311\") " pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq" Mar 19 12:17:41.099428 master-0 kubenswrapper[31830]: I0319 12:17:41.099435 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/3d3b5c49-51a9-465a-b6e9-b0107612c311-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5ff76c69fd-pt6vq\" (UID: \"3d3b5c49-51a9-465a-b6e9-b0107612c311\") " pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq" Mar 19 12:17:41.100001 master-0 kubenswrapper[31830]: I0319 12:17:41.099458 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h6lb\" (UniqueName: \"kubernetes.io/projected/3d3b5c49-51a9-465a-b6e9-b0107612c311-kube-api-access-8h6lb\") pod \"thanos-querier-5ff76c69fd-pt6vq\" (UID: \"3d3b5c49-51a9-465a-b6e9-b0107612c311\") " pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq" Mar 19 12:17:41.100001 master-0 kubenswrapper[31830]: I0319 12:17:41.099483 31830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/3d3b5c49-51a9-465a-b6e9-b0107612c311-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5ff76c69fd-pt6vq\" (UID: \"3d3b5c49-51a9-465a-b6e9-b0107612c311\") " pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq"
Mar 19 12:17:41.100001 master-0 kubenswrapper[31830]: I0319 12:17:41.099499 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3d3b5c49-51a9-465a-b6e9-b0107612c311-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5ff76c69fd-pt6vq\" (UID: \"3d3b5c49-51a9-465a-b6e9-b0107612c311\") " pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq"
Mar 19 12:17:41.100001 master-0 kubenswrapper[31830]: I0319 12:17:41.099542 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3d3b5c49-51a9-465a-b6e9-b0107612c311-metrics-client-ca\") pod \"thanos-querier-5ff76c69fd-pt6vq\" (UID: \"3d3b5c49-51a9-465a-b6e9-b0107612c311\") " pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq"
Mar 19 12:17:41.100001 master-0 kubenswrapper[31830]: I0319 12:17:41.099576 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3d3b5c49-51a9-465a-b6e9-b0107612c311-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5ff76c69fd-pt6vq\" (UID: \"3d3b5c49-51a9-465a-b6e9-b0107612c311\") " pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq"
Mar 19 12:17:41.102454 master-0 kubenswrapper[31830]: I0319 12:17:41.102404 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3d3b5c49-51a9-465a-b6e9-b0107612c311-metrics-client-ca\") pod \"thanos-querier-5ff76c69fd-pt6vq\" (UID: \"3d3b5c49-51a9-465a-b6e9-b0107612c311\") " pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq"
Mar 19 12:17:41.104006 master-0 kubenswrapper[31830]: I0319 12:17:41.103957 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3d3b5c49-51a9-465a-b6e9-b0107612c311-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5ff76c69fd-pt6vq\" (UID: \"3d3b5c49-51a9-465a-b6e9-b0107612c311\") " pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq"
Mar 19 12:17:41.104215 master-0 kubenswrapper[31830]: I0319 12:17:41.104181 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3d3b5c49-51a9-465a-b6e9-b0107612c311-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5ff76c69fd-pt6vq\" (UID: \"3d3b5c49-51a9-465a-b6e9-b0107612c311\") " pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq"
Mar 19 12:17:41.104345 master-0 kubenswrapper[31830]: I0319 12:17:41.104315 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/3d3b5c49-51a9-465a-b6e9-b0107612c311-secret-thanos-querier-tls\") pod \"thanos-querier-5ff76c69fd-pt6vq\" (UID: \"3d3b5c49-51a9-465a-b6e9-b0107612c311\") " pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq"
Mar 19 12:17:41.108243 master-0 kubenswrapper[31830]: I0319 12:17:41.108207 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/3d3b5c49-51a9-465a-b6e9-b0107612c311-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5ff76c69fd-pt6vq\" (UID: \"3d3b5c49-51a9-465a-b6e9-b0107612c311\") " pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq"
Mar 19 12:17:41.108468 master-0 kubenswrapper[31830]: I0319 12:17:41.108411 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/3d3b5c49-51a9-465a-b6e9-b0107612c311-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5ff76c69fd-pt6vq\" (UID: \"3d3b5c49-51a9-465a-b6e9-b0107612c311\") " pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq"
Mar 19 12:17:41.119820 master-0 kubenswrapper[31830]: I0319 12:17:41.119756 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/3d3b5c49-51a9-465a-b6e9-b0107612c311-secret-grpc-tls\") pod \"thanos-querier-5ff76c69fd-pt6vq\" (UID: \"3d3b5c49-51a9-465a-b6e9-b0107612c311\") " pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq"
Mar 19 12:17:41.120002 master-0 kubenswrapper[31830]: I0319 12:17:41.119905 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h6lb\" (UniqueName: \"kubernetes.io/projected/3d3b5c49-51a9-465a-b6e9-b0107612c311-kube-api-access-8h6lb\") pod \"thanos-querier-5ff76c69fd-pt6vq\" (UID: \"3d3b5c49-51a9-465a-b6e9-b0107612c311\") " pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq"
Mar 19 12:17:41.300781 master-0 kubenswrapper[31830]: I0319 12:17:41.300683 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq"
Mar 19 12:17:41.758331 master-0 kubenswrapper[31830]: I0319 12:17:41.758273 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" event={"ID":"e3f23f8a-0a1f-47e3-b40c-9503a88809f9","Type":"ContainerStarted","Data":"03194f0146041465b10e82f31779a8b5c014fc551b793a1cf70c85b2c887a996"}
Mar 19 12:17:41.774120 master-0 kubenswrapper[31830]: I0319 12:17:41.758789 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq"]
Mar 19 12:17:41.774255 master-0 kubenswrapper[31830]: I0319 12:17:41.774194 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv"
Mar 19 12:17:41.774255 master-0 kubenswrapper[31830]: I0319 12:17:41.774249 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv"
Mar 19 12:17:41.786136 master-0 kubenswrapper[31830]: I0319 12:17:41.786031 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" podStartSLOduration=4.458987148 podStartE2EDuration="6.786016703s" podCreationTimestamp="2026-03-19 12:17:35 +0000 UTC" firstStartedPulling="2026-03-19 12:17:38.149423844 +0000 UTC m=+196.698384548" lastFinishedPulling="2026-03-19 12:17:40.476453399 +0000 UTC m=+199.025414103" observedRunningTime="2026-03-19 12:17:41.783755072 +0000 UTC m=+200.332715776" watchObservedRunningTime="2026-03-19 12:17:41.786016703 +0000 UTC m=+200.334977407"
Mar 19 12:17:42.771195 master-0 kubenswrapper[31830]: I0319 12:17:42.770524 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq" event={"ID":"3d3b5c49-51a9-465a-b6e9-b0107612c311","Type":"ContainerStarted","Data":"57f79cb37322d353e4efd971262120ec25dbfba8a838ba7246333f1b049a4baf"}
Mar 19 12:17:43.090061 master-0 kubenswrapper[31830]: I0319 12:17:43.089999 31830 patch_prober.go:28] interesting pod/console-695474f69-bz8b7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body=
Mar 19 12:17:43.090061 master-0 kubenswrapper[31830]: I0319 12:17:43.090062 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-695474f69-bz8b7" podUID="6db87e99-89b9-4f97-b6ca-b236cc27b901" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused"
Mar 19 12:17:43.652747 master-0 kubenswrapper[31830]: I0319 12:17:43.652663 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-c8c668d4-qqj8z"]
Mar 19 12:17:43.653753 master-0 kubenswrapper[31830]: I0319 12:17:43.653713 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-c8c668d4-qqj8z"
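The "Observed pod startup duration" entries above carry two figures: podStartSLOduration, which excludes image-pull time, and podStartE2EDuration, which does not. The oauth-openshift entry is consistent with that: 6.786s end-to-end minus the roughly 2.33s pull window (12:17:38.149 to 12:17:40.476) gives the 4.459s SLO figure. A minimal sketch for extracting these numbers from a journal dump follows; feeding the journal text on stdin and treating E2E durations as plain seconds (true for this excerpt, but not for every Go duration string) are my assumptions, not part of the capture.

    import re
    import sys

    # Matches the kubelet pod_startup_latency_tracker entries seen above.
    # Assumes podStartE2EDuration is a plain seconds value ("6.786016703s").
    PAT = re.compile(
        r'"Observed pod startup duration" pod="(?P<pod>[^"]+)"'
        r' podStartSLOduration=(?P<slo>[\d.]+)'
        r' podStartE2EDuration="(?P<e2e>[\d.]+)s"'
    )

    for line in sys.stdin:
        m = PAT.search(line)
        if m:
            slo, e2e = float(m["slo"]), float(m["e2e"])
            # The SLO figure excludes image pulls, so the difference
            # approximates time spent pulling.
            print(f'{m["pod"]} e2e={e2e:.3f}s slo={slo:.3f}s pull~={e2e - slo:.3f}s')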
Mar 19 12:17:43.664697 master-0 kubenswrapper[31830]: I0319 12:17:43.659946 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-aiar26mnr5utb"
Mar 19 12:17:43.673696 master-0 kubenswrapper[31830]: I0319 12:17:43.672065 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-86889676f6-phlgd"]
Mar 19 12:17:43.673696 master-0 kubenswrapper[31830]: I0319 12:17:43.672357 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-86889676f6-phlgd" podUID="6db3fcbe-0dbf-464f-944b-62427173c8d3" containerName="metrics-server" containerID="cri-o://eeacdb60f8da61f85096f789c56cd94fccc18791a62d95df61660195a985a6a0" gracePeriod=170
Mar 19 12:17:43.702823 master-0 kubenswrapper[31830]: I0319 12:17:43.701178 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-c8c668d4-qqj8z"]
Mar 19 12:17:43.748156 master-0 kubenswrapper[31830]: I0319 12:17:43.748096 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aa1f8f0-265e-4a58-b02c-45967a85db0e-client-ca-bundle\") pod \"metrics-server-c8c668d4-qqj8z\" (UID: \"6aa1f8f0-265e-4a58-b02c-45967a85db0e\") " pod="openshift-monitoring/metrics-server-c8c668d4-qqj8z"
Mar 19 12:17:43.748156 master-0 kubenswrapper[31830]: I0319 12:17:43.748153 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6aa1f8f0-265e-4a58-b02c-45967a85db0e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-c8c668d4-qqj8z\" (UID: \"6aa1f8f0-265e-4a58-b02c-45967a85db0e\") " pod="openshift-monitoring/metrics-server-c8c668d4-qqj8z"
Mar 19 12:17:43.748433 master-0 kubenswrapper[31830]: I0319 12:17:43.748399 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6aa1f8f0-265e-4a58-b02c-45967a85db0e-secret-metrics-client-certs\") pod \"metrics-server-c8c668d4-qqj8z\" (UID: \"6aa1f8f0-265e-4a58-b02c-45967a85db0e\") " pod="openshift-monitoring/metrics-server-c8c668d4-qqj8z"
Mar 19 12:17:43.748499 master-0 kubenswrapper[31830]: I0319 12:17:43.748437 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/6aa1f8f0-265e-4a58-b02c-45967a85db0e-audit-log\") pod \"metrics-server-c8c668d4-qqj8z\" (UID: \"6aa1f8f0-265e-4a58-b02c-45967a85db0e\") " pod="openshift-monitoring/metrics-server-c8c668d4-qqj8z"
Mar 19 12:17:43.748499 master-0 kubenswrapper[31830]: I0319 12:17:43.748463 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/6aa1f8f0-265e-4a58-b02c-45967a85db0e-metrics-server-audit-profiles\") pod \"metrics-server-c8c668d4-qqj8z\" (UID: \"6aa1f8f0-265e-4a58-b02c-45967a85db0e\") " pod="openshift-monitoring/metrics-server-c8c668d4-qqj8z"
Mar 19 12:17:43.748565 master-0 kubenswrapper[31830]: I0319 12:17:43.748508 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t866g\" (UniqueName: \"kubernetes.io/projected/6aa1f8f0-265e-4a58-b02c-45967a85db0e-kube-api-access-t866g\") pod \"metrics-server-c8c668d4-qqj8z\" (UID: \"6aa1f8f0-265e-4a58-b02c-45967a85db0e\") " pod="openshift-monitoring/metrics-server-c8c668d4-qqj8z"
Mar 19 12:17:43.748565 master-0 kubenswrapper[31830]: I0319 12:17:43.748557 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/6aa1f8f0-265e-4a58-b02c-45967a85db0e-secret-metrics-server-tls\") pod \"metrics-server-c8c668d4-qqj8z\" (UID: \"6aa1f8f0-265e-4a58-b02c-45967a85db0e\") " pod="openshift-monitoring/metrics-server-c8c668d4-qqj8z"
Mar 19 12:17:43.782275 master-0 kubenswrapper[31830]: I0319 12:17:43.782223 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8p7qr" event={"ID":"6241ae9b-177b-4d97-9366-479855d8464f","Type":"ContainerStarted","Data":"a447135783442fc6cfc7086074ff7a67fda847f0972027a27bfbb5824fe1d4b3"}
Mar 19 12:17:43.804984 master-0 kubenswrapper[31830]: I0319 12:17:43.804912 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-8p7qr" podStartSLOduration=2.500279496 podStartE2EDuration="5.804889223s" podCreationTimestamp="2026-03-19 12:17:38 +0000 UTC" firstStartedPulling="2026-03-19 12:17:39.346247371 +0000 UTC m=+197.895208075" lastFinishedPulling="2026-03-19 12:17:42.650857108 +0000 UTC m=+201.199817802" observedRunningTime="2026-03-19 12:17:43.795916442 +0000 UTC m=+202.344877146" watchObservedRunningTime="2026-03-19 12:17:43.804889223 +0000 UTC m=+202.353849937"
Mar 19 12:17:43.850646 master-0 kubenswrapper[31830]: I0319 12:17:43.850554 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aa1f8f0-265e-4a58-b02c-45967a85db0e-client-ca-bundle\") pod \"metrics-server-c8c668d4-qqj8z\" (UID: \"6aa1f8f0-265e-4a58-b02c-45967a85db0e\") " pod="openshift-monitoring/metrics-server-c8c668d4-qqj8z"
Mar 19 12:17:43.850865 master-0 kubenswrapper[31830]: I0319 12:17:43.850827 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6aa1f8f0-265e-4a58-b02c-45967a85db0e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-c8c668d4-qqj8z\" (UID: \"6aa1f8f0-265e-4a58-b02c-45967a85db0e\") " pod="openshift-monitoring/metrics-server-c8c668d4-qqj8z"
Mar 19 12:17:43.851834 master-0 kubenswrapper[31830]: I0319 12:17:43.851734 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6aa1f8f0-265e-4a58-b02c-45967a85db0e-secret-metrics-client-certs\") pod \"metrics-server-c8c668d4-qqj8z\" (UID: \"6aa1f8f0-265e-4a58-b02c-45967a85db0e\") " pod="openshift-monitoring/metrics-server-c8c668d4-qqj8z"
Mar 19 12:17:43.852052 master-0 kubenswrapper[31830]: I0319 12:17:43.851776 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6aa1f8f0-265e-4a58-b02c-45967a85db0e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-c8c668d4-qqj8z\" (UID: \"6aa1f8f0-265e-4a58-b02c-45967a85db0e\") " pod="openshift-monitoring/metrics-server-c8c668d4-qqj8z"
Mar 19 12:17:43.852052 master-0 kubenswrapper[31830]: I0319 12:17:43.851822 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/6aa1f8f0-265e-4a58-b02c-45967a85db0e-audit-log\") pod \"metrics-server-c8c668d4-qqj8z\" (UID: \"6aa1f8f0-265e-4a58-b02c-45967a85db0e\") " pod="openshift-monitoring/metrics-server-c8c668d4-qqj8z"
Mar 19 12:17:43.852052 master-0 kubenswrapper[31830]: I0319 12:17:43.851951 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/6aa1f8f0-265e-4a58-b02c-45967a85db0e-metrics-server-audit-profiles\") pod \"metrics-server-c8c668d4-qqj8z\" (UID: \"6aa1f8f0-265e-4a58-b02c-45967a85db0e\") " pod="openshift-monitoring/metrics-server-c8c668d4-qqj8z"
Mar 19 12:17:43.852052 master-0 kubenswrapper[31830]: I0319 12:17:43.851987 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t866g\" (UniqueName: \"kubernetes.io/projected/6aa1f8f0-265e-4a58-b02c-45967a85db0e-kube-api-access-t866g\") pod \"metrics-server-c8c668d4-qqj8z\" (UID: \"6aa1f8f0-265e-4a58-b02c-45967a85db0e\") " pod="openshift-monitoring/metrics-server-c8c668d4-qqj8z"
Mar 19 12:17:43.852220 master-0 kubenswrapper[31830]: I0319 12:17:43.852065 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/6aa1f8f0-265e-4a58-b02c-45967a85db0e-secret-metrics-server-tls\") pod \"metrics-server-c8c668d4-qqj8z\" (UID: \"6aa1f8f0-265e-4a58-b02c-45967a85db0e\") " pod="openshift-monitoring/metrics-server-c8c668d4-qqj8z"
Mar 19 12:17:43.852220 master-0 kubenswrapper[31830]: I0319 12:17:43.852127 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/6aa1f8f0-265e-4a58-b02c-45967a85db0e-audit-log\") pod \"metrics-server-c8c668d4-qqj8z\" (UID: \"6aa1f8f0-265e-4a58-b02c-45967a85db0e\") " pod="openshift-monitoring/metrics-server-c8c668d4-qqj8z"
Mar 19 12:17:43.853899 master-0 kubenswrapper[31830]: I0319 12:17:43.853844 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aa1f8f0-265e-4a58-b02c-45967a85db0e-client-ca-bundle\") pod \"metrics-server-c8c668d4-qqj8z\" (UID: \"6aa1f8f0-265e-4a58-b02c-45967a85db0e\") " pod="openshift-monitoring/metrics-server-c8c668d4-qqj8z"
Mar 19 12:17:43.854666 master-0 kubenswrapper[31830]: I0319 12:17:43.854622 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/6aa1f8f0-265e-4a58-b02c-45967a85db0e-metrics-server-audit-profiles\") pod \"metrics-server-c8c668d4-qqj8z\" (UID: \"6aa1f8f0-265e-4a58-b02c-45967a85db0e\") " pod="openshift-monitoring/metrics-server-c8c668d4-qqj8z"
Mar 19 12:17:43.855340 master-0 kubenswrapper[31830]: I0319 12:17:43.855270 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6aa1f8f0-265e-4a58-b02c-45967a85db0e-secret-metrics-client-certs\") pod \"metrics-server-c8c668d4-qqj8z\" (UID: \"6aa1f8f0-265e-4a58-b02c-45967a85db0e\") " pod="openshift-monitoring/metrics-server-c8c668d4-qqj8z"
Mar 19 12:17:43.858422 master-0 kubenswrapper[31830]: I0319 12:17:43.858386 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/6aa1f8f0-265e-4a58-b02c-45967a85db0e-secret-metrics-server-tls\") pod \"metrics-server-c8c668d4-qqj8z\" (UID: \"6aa1f8f0-265e-4a58-b02c-45967a85db0e\") " pod="openshift-monitoring/metrics-server-c8c668d4-qqj8z"
Mar 19 12:17:43.873859 master-0 kubenswrapper[31830]: I0319 12:17:43.873749 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t866g\" (UniqueName: \"kubernetes.io/projected/6aa1f8f0-265e-4a58-b02c-45967a85db0e-kube-api-access-t866g\") pod \"metrics-server-c8c668d4-qqj8z\" (UID: \"6aa1f8f0-265e-4a58-b02c-45967a85db0e\") " pod="openshift-monitoring/metrics-server-c8c668d4-qqj8z"
Mar 19 12:17:43.972915 master-0 kubenswrapper[31830]: I0319 12:17:43.972785 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-c8c668d4-qqj8z"
Mar 19 12:17:43.978537 master-0 kubenswrapper[31830]: I0319 12:17:43.978020 31830 patch_prober.go:28] interesting pod/console-69f4fb98cb-qvvqh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body=
Mar 19 12:17:43.978537 master-0 kubenswrapper[31830]: I0319 12:17:43.978103 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-69f4fb98cb-qvvqh" podUID="be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused"
Mar 19 12:17:44.043463 master-0 kubenswrapper[31830]: I0319 12:17:44.043416 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-69f4fb98cb-qvvqh"]
Mar 19 12:17:44.082823 master-0 kubenswrapper[31830]: I0319 12:17:44.081176 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5b87647974-5zv6r"]
Mar 19 12:17:44.082823 master-0 kubenswrapper[31830]: I0319 12:17:44.081983 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5b87647974-5zv6r"
Mar 19 12:17:44.098720 master-0 kubenswrapper[31830]: I0319 12:17:44.098682 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5b87647974-5zv6r"]
Mar 19 12:17:44.156174 master-0 kubenswrapper[31830]: I0319 12:17:44.156114 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc3b0ed8-8383-4d41-8b15-46cab419217f-trusted-ca-bundle\") pod \"console-5b87647974-5zv6r\" (UID: \"bc3b0ed8-8383-4d41-8b15-46cab419217f\") " pod="openshift-console/console-5b87647974-5zv6r"
Mar 19 12:17:44.156437 master-0 kubenswrapper[31830]: I0319 12:17:44.156213 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bc3b0ed8-8383-4d41-8b15-46cab419217f-service-ca\") pod \"console-5b87647974-5zv6r\" (UID: \"bc3b0ed8-8383-4d41-8b15-46cab419217f\") " pod="openshift-console/console-5b87647974-5zv6r"
Mar 19 12:17:44.156437 master-0 kubenswrapper[31830]: I0319 12:17:44.156268 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs4f6\" (UniqueName: \"kubernetes.io/projected/bc3b0ed8-8383-4d41-8b15-46cab419217f-kube-api-access-vs4f6\") pod \"console-5b87647974-5zv6r\" (UID: \"bc3b0ed8-8383-4d41-8b15-46cab419217f\") " pod="openshift-console/console-5b87647974-5zv6r"
Mar 19 12:17:44.156437 master-0 kubenswrapper[31830]: I0319 12:17:44.156290 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bc3b0ed8-8383-4d41-8b15-46cab419217f-console-serving-cert\") pod \"console-5b87647974-5zv6r\" (UID: \"bc3b0ed8-8383-4d41-8b15-46cab419217f\") " pod="openshift-console/console-5b87647974-5zv6r"
Mar 19 12:17:44.156437 master-0 kubenswrapper[31830]: I0319 12:17:44.156335 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bc3b0ed8-8383-4d41-8b15-46cab419217f-console-config\") pod \"console-5b87647974-5zv6r\" (UID: \"bc3b0ed8-8383-4d41-8b15-46cab419217f\") " pod="openshift-console/console-5b87647974-5zv6r"
Mar 19 12:17:44.156437 master-0 kubenswrapper[31830]: I0319 12:17:44.156370 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bc3b0ed8-8383-4d41-8b15-46cab419217f-console-oauth-config\") pod \"console-5b87647974-5zv6r\" (UID: \"bc3b0ed8-8383-4d41-8b15-46cab419217f\") " pod="openshift-console/console-5b87647974-5zv6r"
Mar 19 12:17:44.157632 master-0 kubenswrapper[31830]: I0319 12:17:44.156458 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bc3b0ed8-8383-4d41-8b15-46cab419217f-oauth-serving-cert\") pod \"console-5b87647974-5zv6r\" (UID: \"bc3b0ed8-8383-4d41-8b15-46cab419217f\") " pod="openshift-console/console-5b87647974-5zv6r"
Mar 19 12:17:44.259182 master-0 kubenswrapper[31830]: I0319 12:17:44.259023 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc3b0ed8-8383-4d41-8b15-46cab419217f-trusted-ca-bundle\") pod \"console-5b87647974-5zv6r\" (UID: \"bc3b0ed8-8383-4d41-8b15-46cab419217f\") " pod="openshift-console/console-5b87647974-5zv6r"
Mar 19 12:17:44.259182 master-0 kubenswrapper[31830]: I0319 12:17:44.259130 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bc3b0ed8-8383-4d41-8b15-46cab419217f-service-ca\") pod \"console-5b87647974-5zv6r\" (UID: \"bc3b0ed8-8383-4d41-8b15-46cab419217f\") " pod="openshift-console/console-5b87647974-5zv6r"
Mar 19 12:17:44.259182 master-0 kubenswrapper[31830]: I0319 12:17:44.259181 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs4f6\" (UniqueName: \"kubernetes.io/projected/bc3b0ed8-8383-4d41-8b15-46cab419217f-kube-api-access-vs4f6\") pod \"console-5b87647974-5zv6r\" (UID: \"bc3b0ed8-8383-4d41-8b15-46cab419217f\") " pod="openshift-console/console-5b87647974-5zv6r"
Mar 19 12:17:44.259521 master-0 kubenswrapper[31830]: I0319 12:17:44.259208 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bc3b0ed8-8383-4d41-8b15-46cab419217f-console-serving-cert\") pod \"console-5b87647974-5zv6r\" (UID: \"bc3b0ed8-8383-4d41-8b15-46cab419217f\") " pod="openshift-console/console-5b87647974-5zv6r"
Mar 19 12:17:44.260210 master-0 kubenswrapper[31830]: I0319 12:17:44.260172 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc3b0ed8-8383-4d41-8b15-46cab419217f-trusted-ca-bundle\") pod \"console-5b87647974-5zv6r\" (UID: \"bc3b0ed8-8383-4d41-8b15-46cab419217f\") " pod="openshift-console/console-5b87647974-5zv6r"
Mar 19 12:17:44.260274 master-0 kubenswrapper[31830]: I0319 12:17:44.260231 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bc3b0ed8-8383-4d41-8b15-46cab419217f-console-config\") pod \"console-5b87647974-5zv6r\" (UID: \"bc3b0ed8-8383-4d41-8b15-46cab419217f\") " pod="openshift-console/console-5b87647974-5zv6r"
Mar 19 12:17:44.260468 master-0 kubenswrapper[31830]: I0319 12:17:44.260391 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bc3b0ed8-8383-4d41-8b15-46cab419217f-console-oauth-config\") pod \"console-5b87647974-5zv6r\" (UID: \"bc3b0ed8-8383-4d41-8b15-46cab419217f\") " pod="openshift-console/console-5b87647974-5zv6r"
Mar 19 12:17:44.260468 master-0 kubenswrapper[31830]: I0319 12:17:44.260429 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bc3b0ed8-8383-4d41-8b15-46cab419217f-oauth-serving-cert\") pod \"console-5b87647974-5zv6r\" (UID: \"bc3b0ed8-8383-4d41-8b15-46cab419217f\") " pod="openshift-console/console-5b87647974-5zv6r"
Mar 19 12:17:44.260608 master-0 kubenswrapper[31830]: I0319 12:17:44.260570 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bc3b0ed8-8383-4d41-8b15-46cab419217f-service-ca\") pod \"console-5b87647974-5zv6r\" (UID: \"bc3b0ed8-8383-4d41-8b15-46cab419217f\") " pod="openshift-console/console-5b87647974-5zv6r"
Mar 19 12:17:44.261232 master-0 kubenswrapper[31830]: I0319 12:17:44.261189 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bc3b0ed8-8383-4d41-8b15-46cab419217f-console-config\") pod \"console-5b87647974-5zv6r\" (UID: \"bc3b0ed8-8383-4d41-8b15-46cab419217f\") " pod="openshift-console/console-5b87647974-5zv6r"
Mar 19 12:17:44.261465 master-0 kubenswrapper[31830]: I0319 12:17:44.261431 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bc3b0ed8-8383-4d41-8b15-46cab419217f-oauth-serving-cert\") pod \"console-5b87647974-5zv6r\" (UID: \"bc3b0ed8-8383-4d41-8b15-46cab419217f\") " pod="openshift-console/console-5b87647974-5zv6r"
Mar 19 12:17:44.275857 master-0 kubenswrapper[31830]: I0319 12:17:44.263486 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bc3b0ed8-8383-4d41-8b15-46cab419217f-console-oauth-config\") pod \"console-5b87647974-5zv6r\" (UID: \"bc3b0ed8-8383-4d41-8b15-46cab419217f\") " pod="openshift-console/console-5b87647974-5zv6r"
Mar 19 12:17:44.275857 master-0 kubenswrapper[31830]: I0319 12:17:44.263576 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bc3b0ed8-8383-4d41-8b15-46cab419217f-console-serving-cert\") pod \"console-5b87647974-5zv6r\" (UID: \"bc3b0ed8-8383-4d41-8b15-46cab419217f\") " pod="openshift-console/console-5b87647974-5zv6r"
Mar 19 12:17:44.289376 master-0 kubenswrapper[31830]: I0319 12:17:44.289334 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs4f6\" (UniqueName: \"kubernetes.io/projected/bc3b0ed8-8383-4d41-8b15-46cab419217f-kube-api-access-vs4f6\") pod \"console-5b87647974-5zv6r\" (UID: \"bc3b0ed8-8383-4d41-8b15-46cab419217f\") " pod="openshift-console/console-5b87647974-5zv6r"
Mar 19 12:17:44.412187 master-0 kubenswrapper[31830]: I0319 12:17:44.412133 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5b87647974-5zv6r"
Mar 19 12:17:45.094094 master-0 kubenswrapper[31830]: I0319 12:17:45.094043 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-c8c668d4-qqj8z"]
Mar 19 12:17:45.184849 master-0 kubenswrapper[31830]: W0319 12:17:45.179294 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc3b0ed8_8383_4d41_8b15_46cab419217f.slice/crio-49ecaf020e7a505f51e846b428f11956754baa868ec994dbb1e60324401eb98f WatchSource:0}: Error finding container 49ecaf020e7a505f51e846b428f11956754baa868ec994dbb1e60324401eb98f: Status 404 returned error can't find the container with id 49ecaf020e7a505f51e846b428f11956754baa868ec994dbb1e60324401eb98f
Mar 19 12:17:45.194007 master-0 kubenswrapper[31830]: I0319 12:17:45.191470 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5b87647974-5zv6r"]
Mar 19 12:17:45.291213 master-0 kubenswrapper[31830]: I0319 12:17:45.290563 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Mar 19 12:17:45.293864 master-0 kubenswrapper[31830]: I0319 12:17:45.292732 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.302195 master-0 kubenswrapper[31830]: I0319 12:17:45.301993 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Mar 19 12:17:45.308206 master-0 kubenswrapper[31830]: I0319 12:17:45.303908 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Mar 19 12:17:45.308206 master-0 kubenswrapper[31830]: I0319 12:17:45.304112 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Mar 19 12:17:45.308206 master-0 kubenswrapper[31830]: I0319 12:17:45.304214 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Mar 19 12:17:45.308206 master-0 kubenswrapper[31830]: I0319 12:17:45.304312 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0"
Mar 19 12:17:45.308206 master-0 kubenswrapper[31830]: I0319 12:17:45.304425 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-37tn0b2qg70ml"
Mar 19 12:17:45.308206 master-0 kubenswrapper[31830]: I0319 12:17:45.304647 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Mar 19 12:17:45.308206 master-0 kubenswrapper[31830]: I0319 12:17:45.304826 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Mar 19 12:17:45.308206 master-0 kubenswrapper[31830]: I0319 12:17:45.304907 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Mar 19 12:17:45.308206 master-0 kubenswrapper[31830]: I0319 12:17:45.305050 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Mar 19 12:17:45.308206 master-0 kubenswrapper[31830]: I0319 12:17:45.306243 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Mar 19 12:17:45.320115 master-0 kubenswrapper[31830]: I0319 12:17:45.319890 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle"
Mar 19 12:17:45.320252 master-0 kubenswrapper[31830]: I0319 12:17:45.320226 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0"
Mar 19 12:17:45.383651 master-0 kubenswrapper[31830]: I0319 12:17:45.383521 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/d6814e91-dba6-44c2-80a5-6ee9429a3643-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.383651 master-0 kubenswrapper[31830]: I0319 12:17:45.383572 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d6814e91-dba6-44c2-80a5-6ee9429a3643-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.383651 master-0 kubenswrapper[31830]: I0319 12:17:45.383595 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp4cv\" (UniqueName: \"kubernetes.io/projected/d6814e91-dba6-44c2-80a5-6ee9429a3643-kube-api-access-rp4cv\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.383651 master-0 kubenswrapper[31830]: I0319 12:17:45.383620 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6814e91-dba6-44c2-80a5-6ee9429a3643-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.383651 master-0 kubenswrapper[31830]: I0319 12:17:45.383644 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d6814e91-dba6-44c2-80a5-6ee9429a3643-web-config\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.383651 master-0 kubenswrapper[31830]: I0319 12:17:45.383663 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d6814e91-dba6-44c2-80a5-6ee9429a3643-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.384097 master-0 kubenswrapper[31830]: I0319 12:17:45.383739 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d6814e91-dba6-44c2-80a5-6ee9429a3643-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.384097 master-0 kubenswrapper[31830]: I0319 12:17:45.383791 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6814e91-dba6-44c2-80a5-6ee9429a3643-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.384097 master-0 kubenswrapper[31830]: I0319 12:17:45.383870 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d6814e91-dba6-44c2-80a5-6ee9429a3643-config\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.384097 master-0 kubenswrapper[31830]: I0319 12:17:45.383943 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/d6814e91-dba6-44c2-80a5-6ee9429a3643-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.384097 master-0 kubenswrapper[31830]: I0319 12:17:45.384002 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d6814e91-dba6-44c2-80a5-6ee9429a3643-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.384097 master-0 kubenswrapper[31830]: I0319 12:17:45.384033 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/d6814e91-dba6-44c2-80a5-6ee9429a3643-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.384097 master-0 kubenswrapper[31830]: I0319 12:17:45.384067 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d6814e91-dba6-44c2-80a5-6ee9429a3643-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.384097 master-0 kubenswrapper[31830]: I0319 12:17:45.384089 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/d6814e91-dba6-44c2-80a5-6ee9429a3643-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.384409 master-0 kubenswrapper[31830]: I0319 12:17:45.384166 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d6814e91-dba6-44c2-80a5-6ee9429a3643-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.384409 master-0 kubenswrapper[31830]: I0319 12:17:45.384190 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/d6814e91-dba6-44c2-80a5-6ee9429a3643-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.384409 master-0 kubenswrapper[31830]: I0319 12:17:45.384221 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6814e91-dba6-44c2-80a5-6ee9429a3643-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.384409 master-0 kubenswrapper[31830]: I0319 12:17:45.384247 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d6814e91-dba6-44c2-80a5-6ee9429a3643-config-out\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.487232 master-0 kubenswrapper[31830]: I0319 12:17:45.487115 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d6814e91-dba6-44c2-80a5-6ee9429a3643-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.487232 master-0 kubenswrapper[31830]: I0319 12:17:45.487163 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6814e91-dba6-44c2-80a5-6ee9429a3643-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.487232 master-0 kubenswrapper[31830]: I0319 12:17:45.487189 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d6814e91-dba6-44c2-80a5-6ee9429a3643-config\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.487232 master-0 kubenswrapper[31830]: I0319 12:17:45.487215 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/d6814e91-dba6-44c2-80a5-6ee9429a3643-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.487506 master-0 kubenswrapper[31830]: I0319 12:17:45.487239 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d6814e91-dba6-44c2-80a5-6ee9429a3643-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.487506 master-0 kubenswrapper[31830]: I0319 12:17:45.487270 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/d6814e91-dba6-44c2-80a5-6ee9429a3643-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.487506 master-0 kubenswrapper[31830]: I0319 12:17:45.487288 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d6814e91-dba6-44c2-80a5-6ee9429a3643-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.487506 master-0 kubenswrapper[31830]: I0319 12:17:45.487303 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/d6814e91-dba6-44c2-80a5-6ee9429a3643-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.487506 master-0 kubenswrapper[31830]: I0319 12:17:45.487333 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d6814e91-dba6-44c2-80a5-6ee9429a3643-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.487506 master-0 kubenswrapper[31830]: I0319 12:17:45.487350 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/d6814e91-dba6-44c2-80a5-6ee9429a3643-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.487506 master-0 kubenswrapper[31830]: I0319 12:17:45.487367 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6814e91-dba6-44c2-80a5-6ee9429a3643-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.487506 master-0 kubenswrapper[31830]: I0319 12:17:45.487381 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d6814e91-dba6-44c2-80a5-6ee9429a3643-config-out\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.487506 master-0 kubenswrapper[31830]: I0319 12:17:45.487406 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/d6814e91-dba6-44c2-80a5-6ee9429a3643-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.487506 master-0 kubenswrapper[31830]: I0319 12:17:45.487426 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d6814e91-dba6-44c2-80a5-6ee9429a3643-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.487506 master-0 kubenswrapper[31830]: I0319 12:17:45.487443 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp4cv\" (UniqueName: \"kubernetes.io/projected/d6814e91-dba6-44c2-80a5-6ee9429a3643-kube-api-access-rp4cv\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.487506 master-0 kubenswrapper[31830]: I0319 12:17:45.487467 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6814e91-dba6-44c2-80a5-6ee9429a3643-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.487506 master-0 kubenswrapper[31830]: I0319 12:17:45.487491 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d6814e91-dba6-44c2-80a5-6ee9429a3643-web-config\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.487506 master-0 kubenswrapper[31830]: I0319 12:17:45.487508 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d6814e91-dba6-44c2-80a5-6ee9429a3643-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.489590 master-0 kubenswrapper[31830]: I0319 12:17:45.489108 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d6814e91-dba6-44c2-80a5-6ee9429a3643-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.491236 master-0 kubenswrapper[31830]: I0319 12:17:45.491207 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d6814e91-dba6-44c2-80a5-6ee9429a3643-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.492451 master-0 kubenswrapper[31830]: I0319 12:17:45.492406 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6814e91-dba6-44c2-80a5-6ee9429a3643-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.494525 master-0 kubenswrapper[31830]: I0319 12:17:45.493356 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d6814e91-dba6-44c2-80a5-6ee9429a3643-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.494525 master-0 kubenswrapper[31830]: I0319 12:17:45.493645 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d6814e91-dba6-44c2-80a5-6ee9429a3643-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.494525 master-0 kubenswrapper[31830]: I0319 12:17:45.494132 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6814e91-dba6-44c2-80a5-6ee9429a3643-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.494769 master-0 kubenswrapper[31830]: I0319 12:17:45.494598 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6814e91-dba6-44c2-80a5-6ee9429a3643-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.496373 master-0 kubenswrapper[31830]: I0319 12:17:45.496250 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/d6814e91-dba6-44c2-80a5-6ee9429a3643-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.496469 master-0 kubenswrapper[31830]: I0319 12:17:45.496424 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/d6814e91-dba6-44c2-80a5-6ee9429a3643-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.496469 master-0 kubenswrapper[31830]: I0319 12:17:45.496462 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/d6814e91-dba6-44c2-80a5-6ee9429a3643-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.497029 master-0 kubenswrapper[31830]: I0319 12:17:45.496598 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d6814e91-dba6-44c2-80a5-6ee9429a3643-web-config\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.497335 master-0 kubenswrapper[31830]: I0319 12:17:45.497283 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d6814e91-dba6-44c2-80a5-6ee9429a3643-config\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.498962 master-0 kubenswrapper[31830]: I0319 12:17:45.498892 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/d6814e91-dba6-44c2-80a5-6ee9429a3643-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.500042 master-0 kubenswrapper[31830]: I0319 12:17:45.499689 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d6814e91-dba6-44c2-80a5-6ee9429a3643-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.500042 master-0 kubenswrapper[31830]: I0319 12:17:45.499942 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d6814e91-dba6-44c2-80a5-6ee9429a3643-config-out\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.504819 master-0 kubenswrapper[31830]: I0319 12:17:45.504115 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/d6814e91-dba6-44c2-80a5-6ee9429a3643-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.514571 master-0 kubenswrapper[31830]: I0319 12:17:45.514482 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d6814e91-dba6-44c2-80a5-6ee9429a3643-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 12:17:45.533286 master-0 kubenswrapper[31830]: I0319 12:17:45.531528 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp4cv\" (UniqueName: \"kubernetes.io/projected/d6814e91-dba6-44c2-80a5-6ee9429a3643-kube-api-access-rp4cv\") pod \"prometheus-k8s-0\" (UID: \"d6814e91-dba6-44c2-80a5-6ee9429a3643\") " pod="openshift-monitoring/prometheus-k8s-0"
pod="openshift-monitoring/prometheus-k8s-0" Mar 19 12:17:45.631424 master-0 kubenswrapper[31830]: I0319 12:17:45.631360 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 19 12:17:45.820596 master-0 kubenswrapper[31830]: I0319 12:17:45.820545 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-c8c668d4-qqj8z" event={"ID":"6aa1f8f0-265e-4a58-b02c-45967a85db0e","Type":"ContainerStarted","Data":"cb4bc28e84a01d49ba1ca44c0689a5dd461f718757417930f64f0908b669c358"} Mar 19 12:17:45.820695 master-0 kubenswrapper[31830]: I0319 12:17:45.820602 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-c8c668d4-qqj8z" event={"ID":"6aa1f8f0-265e-4a58-b02c-45967a85db0e","Type":"ContainerStarted","Data":"6af4799488488520e5753b8cd82be2535ade6585fa51ac850a2e94cde6547666"} Mar 19 12:17:45.836708 master-0 kubenswrapper[31830]: I0319 12:17:45.836668 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq" event={"ID":"3d3b5c49-51a9-465a-b6e9-b0107612c311","Type":"ContainerStarted","Data":"dd47c8dd783d7e003127347fa4033a9642439d93a99610366ec502656957bb5c"} Mar 19 12:17:45.836893 master-0 kubenswrapper[31830]: I0319 12:17:45.836721 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq" event={"ID":"3d3b5c49-51a9-465a-b6e9-b0107612c311","Type":"ContainerStarted","Data":"bf2852748283e4b2d1a05785c5206e084ebc3493039d3a9a635350df73875ed6"} Mar 19 12:17:45.836893 master-0 kubenswrapper[31830]: I0319 12:17:45.836735 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq" event={"ID":"3d3b5c49-51a9-465a-b6e9-b0107612c311","Type":"ContainerStarted","Data":"142f349310a2057593f81e2922e63d6aeddf043fd5c9069198191fed9320225c"} Mar 19 12:17:45.841393 master-0 kubenswrapper[31830]: I0319 12:17:45.840617 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b87647974-5zv6r" event={"ID":"bc3b0ed8-8383-4d41-8b15-46cab419217f","Type":"ContainerStarted","Data":"90a33faa50134547d6c60b38c98a8611b25b4954f2680aab415a68b40923d1e6"} Mar 19 12:17:45.841393 master-0 kubenswrapper[31830]: I0319 12:17:45.840666 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b87647974-5zv6r" event={"ID":"bc3b0ed8-8383-4d41-8b15-46cab419217f","Type":"ContainerStarted","Data":"49ecaf020e7a505f51e846b428f11956754baa868ec994dbb1e60324401eb98f"} Mar 19 12:17:45.926078 master-0 kubenswrapper[31830]: I0319 12:17:45.925255 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-c8c668d4-qqj8z" podStartSLOduration=2.925234446 podStartE2EDuration="2.925234446s" podCreationTimestamp="2026-03-19 12:17:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:17:45.924017548 +0000 UTC m=+204.472978262" watchObservedRunningTime="2026-03-19 12:17:45.925234446 +0000 UTC m=+204.474195160" Mar 19 12:17:46.063952 master-0 kubenswrapper[31830]: I0319 12:17:46.062881 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5b87647974-5zv6r" podStartSLOduration=2.062845803 podStartE2EDuration="2.062845803s" podCreationTimestamp="2026-03-19 12:17:44 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:17:46.058155916 +0000 UTC m=+204.607116630" watchObservedRunningTime="2026-03-19 12:17:46.062845803 +0000 UTC m=+204.611806627" Mar 19 12:17:46.130145 master-0 kubenswrapper[31830]: I0319 12:17:46.130086 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 19 12:17:46.141041 master-0 kubenswrapper[31830]: W0319 12:17:46.140996 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6814e91_dba6_44c2_80a5_6ee9429a3643.slice/crio-b27cd031b08fd29320b94792050d31361fd6ab9a8026983d6567976812255e66 WatchSource:0}: Error finding container b27cd031b08fd29320b94792050d31361fd6ab9a8026983d6567976812255e66: Status 404 returned error can't find the container with id b27cd031b08fd29320b94792050d31361fd6ab9a8026983d6567976812255e66 Mar 19 12:17:46.864899 master-0 kubenswrapper[31830]: I0319 12:17:46.859544 31830 generic.go:334] "Generic (PLEG): container finished" podID="d6814e91-dba6-44c2-80a5-6ee9429a3643" containerID="da2031e310927c5219003e2cc091f39272b020c8bfffe76451cfe709fbc7eeba" exitCode=0 Mar 19 12:17:46.864899 master-0 kubenswrapper[31830]: I0319 12:17:46.859632 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d6814e91-dba6-44c2-80a5-6ee9429a3643","Type":"ContainerDied","Data":"da2031e310927c5219003e2cc091f39272b020c8bfffe76451cfe709fbc7eeba"} Mar 19 12:17:46.864899 master-0 kubenswrapper[31830]: I0319 12:17:46.859662 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d6814e91-dba6-44c2-80a5-6ee9429a3643","Type":"ContainerStarted","Data":"b27cd031b08fd29320b94792050d31361fd6ab9a8026983d6567976812255e66"} Mar 19 12:17:46.868532 master-0 kubenswrapper[31830]: I0319 12:17:46.868490 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq" event={"ID":"3d3b5c49-51a9-465a-b6e9-b0107612c311","Type":"ContainerStarted","Data":"cce91c82308dce19214a2929dc63866364a3cc7fccceeb70d0da86e25c6ff3e1"} Mar 19 12:17:47.879043 master-0 kubenswrapper[31830]: I0319 12:17:47.878980 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq" event={"ID":"3d3b5c49-51a9-465a-b6e9-b0107612c311","Type":"ContainerStarted","Data":"3c13e8d1a9e1d754e030f97777a0d058aa6d9d19090280b94ad07fe4d4f3e7e3"} Mar 19 12:17:47.879043 master-0 kubenswrapper[31830]: I0319 12:17:47.879028 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq" event={"ID":"3d3b5c49-51a9-465a-b6e9-b0107612c311","Type":"ContainerStarted","Data":"f6c3e441b919a50503ebfa9ad5027f32362694c7b26a40b97e25d3a27fbfb24d"} Mar 19 12:17:47.880102 master-0 kubenswrapper[31830]: I0319 12:17:47.880076 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq" Mar 19 12:17:47.910001 master-0 kubenswrapper[31830]: I0319 12:17:47.909922 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq" podStartSLOduration=3.031755064 podStartE2EDuration="7.909903644s" podCreationTimestamp="2026-03-19 12:17:40 +0000 UTC" firstStartedPulling="2026-03-19 12:17:41.766194251 +0000 UTC 
m=+200.315154955" lastFinishedPulling="2026-03-19 12:17:46.644342831 +0000 UTC m=+205.193303535" observedRunningTime="2026-03-19 12:17:47.904214976 +0000 UTC m=+206.453175700" watchObservedRunningTime="2026-03-19 12:17:47.909903644 +0000 UTC m=+206.458864358" Mar 19 12:17:50.905130 master-0 kubenswrapper[31830]: I0319 12:17:50.905013 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d6814e91-dba6-44c2-80a5-6ee9429a3643","Type":"ContainerStarted","Data":"b9380d4246c430aaea56ab831c36dddb16b0d7010886d310c4f92ddc0836d782"} Mar 19 12:17:50.905130 master-0 kubenswrapper[31830]: I0319 12:17:50.905057 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d6814e91-dba6-44c2-80a5-6ee9429a3643","Type":"ContainerStarted","Data":"7ef2f00fb06d801776a315e3bea91be34704b09efeb8915e61e3f6eaaed135fd"} Mar 19 12:17:50.905130 master-0 kubenswrapper[31830]: I0319 12:17:50.905068 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d6814e91-dba6-44c2-80a5-6ee9429a3643","Type":"ContainerStarted","Data":"ccbc9ef24382076f8314b6a16f7ee57c9aa44ab431db87ba9b57e73b54198b1b"} Mar 19 12:17:50.905130 master-0 kubenswrapper[31830]: I0319 12:17:50.905077 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d6814e91-dba6-44c2-80a5-6ee9429a3643","Type":"ContainerStarted","Data":"48a286ec1760af60108aadfca1889d59bf4a162a55c571ba082bb1b720bbfbab"} Mar 19 12:17:50.905130 master-0 kubenswrapper[31830]: I0319 12:17:50.905085 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d6814e91-dba6-44c2-80a5-6ee9429a3643","Type":"ContainerStarted","Data":"1a9a96e5819d07797a1e0b5f8d7bd4dfe3a5a010cc8f0cb4d019c24bc5e2410a"} Mar 19 12:17:50.905130 master-0 kubenswrapper[31830]: I0319 12:17:50.905094 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d6814e91-dba6-44c2-80a5-6ee9429a3643","Type":"ContainerStarted","Data":"4254e86274974336d90967f7495365e5291f945fb85440d6c3288105b4ae6a07"} Mar 19 12:17:50.939405 master-0 kubenswrapper[31830]: I0319 12:17:50.939338 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=2.810403544 podStartE2EDuration="5.93931676s" podCreationTimestamp="2026-03-19 12:17:45 +0000 UTC" firstStartedPulling="2026-03-19 12:17:46.865644752 +0000 UTC m=+205.414605476" lastFinishedPulling="2026-03-19 12:17:49.994557988 +0000 UTC m=+208.543518692" observedRunningTime="2026-03-19 12:17:50.935319084 +0000 UTC m=+209.484279798" watchObservedRunningTime="2026-03-19 12:17:50.93931676 +0000 UTC m=+209.488277464" Mar 19 12:17:51.308987 master-0 kubenswrapper[31830]: I0319 12:17:51.308924 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-5ff76c69fd-pt6vq" Mar 19 12:17:53.089951 master-0 kubenswrapper[31830]: I0319 12:17:53.089889 31830 patch_prober.go:28] interesting pod/console-695474f69-bz8b7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body= Mar 19 12:17:53.090539 master-0 kubenswrapper[31830]: I0319 12:17:53.089970 31830 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-console/console-695474f69-bz8b7" podUID="6db87e99-89b9-4f97-b6ca-b236cc27b901" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" Mar 19 12:17:54.412623 master-0 kubenswrapper[31830]: I0319 12:17:54.412562 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5b87647974-5zv6r" Mar 19 12:17:54.413199 master-0 kubenswrapper[31830]: I0319 12:17:54.412657 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5b87647974-5zv6r" Mar 19 12:17:54.414202 master-0 kubenswrapper[31830]: I0319 12:17:54.414163 31830 patch_prober.go:28] interesting pod/console-5b87647974-5zv6r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.100:8443/health\": dial tcp 10.128.0.100:8443: connect: connection refused" start-of-body= Mar 19 12:17:54.414282 master-0 kubenswrapper[31830]: I0319 12:17:54.414205 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5b87647974-5zv6r" podUID="bc3b0ed8-8383-4d41-8b15-46cab419217f" containerName="console" probeResult="failure" output="Get \"https://10.128.0.100:8443/health\": dial tcp 10.128.0.100:8443: connect: connection refused" Mar 19 12:17:55.632603 master-0 kubenswrapper[31830]: I0319 12:17:55.632536 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Mar 19 12:18:03.090064 master-0 kubenswrapper[31830]: I0319 12:18:03.089982 31830 patch_prober.go:28] interesting pod/console-695474f69-bz8b7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body= Mar 19 12:18:03.091218 master-0 kubenswrapper[31830]: I0319 12:18:03.090078 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-695474f69-bz8b7" podUID="6db87e99-89b9-4f97-b6ca-b236cc27b901" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" Mar 19 12:18:03.973028 master-0 kubenswrapper[31830]: I0319 12:18:03.972935 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-c8c668d4-qqj8z" Mar 19 12:18:03.973028 master-0 kubenswrapper[31830]: I0319 12:18:03.973004 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-c8c668d4-qqj8z" Mar 19 12:18:04.413956 master-0 kubenswrapper[31830]: I0319 12:18:04.413793 31830 patch_prober.go:28] interesting pod/console-5b87647974-5zv6r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.100:8443/health\": dial tcp 10.128.0.100:8443: connect: connection refused" start-of-body= Mar 19 12:18:04.414516 master-0 kubenswrapper[31830]: I0319 12:18:04.414487 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5b87647974-5zv6r" podUID="bc3b0ed8-8383-4d41-8b15-46cab419217f" containerName="console" probeResult="failure" output="Get \"https://10.128.0.100:8443/health\": dial tcp 10.128.0.100:8443: connect: connection refused" Mar 19 12:18:07.190712 master-0 kubenswrapper[31830]: I0319 12:18:07.190142 31830 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-console/networking-console-plugin-7c6b76c555-2qllh"] Mar 19 12:18:07.191324 master-0 kubenswrapper[31830]: I0319 12:18:07.190959 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c6b76c555-2qllh" Mar 19 12:18:07.194419 master-0 kubenswrapper[31830]: I0319 12:18:07.194009 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 19 12:18:07.194419 master-0 kubenswrapper[31830]: I0319 12:18:07.194048 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 19 12:18:07.211992 master-0 kubenswrapper[31830]: I0319 12:18:07.211940 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-7c6b76c555-2qllh"] Mar 19 12:18:07.246417 master-0 kubenswrapper[31830]: I0319 12:18:07.246364 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 19 12:18:07.258007 master-0 kubenswrapper[31830]: I0319 12:18:07.257966 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.259937 master-0 kubenswrapper[31830]: I0319 12:18:07.259895 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Mar 19 12:18:07.260080 master-0 kubenswrapper[31830]: I0319 12:18:07.260067 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Mar 19 12:18:07.260221 master-0 kubenswrapper[31830]: I0319 12:18:07.260193 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Mar 19 12:18:07.260417 master-0 kubenswrapper[31830]: I0319 12:18:07.260399 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Mar 19 12:18:07.260493 master-0 kubenswrapper[31830]: I0319 12:18:07.260459 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Mar 19 12:18:07.260572 master-0 kubenswrapper[31830]: I0319 12:18:07.260540 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Mar 19 12:18:07.261046 master-0 kubenswrapper[31830]: I0319 12:18:07.260991 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Mar 19 12:18:07.270339 master-0 kubenswrapper[31830]: I0319 12:18:07.270290 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Mar 19 12:18:07.280665 master-0 kubenswrapper[31830]: I0319 12:18:07.279150 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 19 12:18:07.320677 master-0 kubenswrapper[31830]: I0319 12:18:07.320584 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/27068591-a951-4c79-8a88-6c31210a50af-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-2qllh\" (UID: \"27068591-a951-4c79-8a88-6c31210a50af\") " 
pod="openshift-network-console/networking-console-plugin-7c6b76c555-2qllh" Mar 19 12:18:07.320971 master-0 kubenswrapper[31830]: I0319 12:18:07.320667 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/27068591-a951-4c79-8a88-6c31210a50af-nginx-conf\") pod \"networking-console-plugin-7c6b76c555-2qllh\" (UID: \"27068591-a951-4c79-8a88-6c31210a50af\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-2qllh" Mar 19 12:18:07.421666 master-0 kubenswrapper[31830]: I0319 12:18:07.421610 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3ae07d43-4069-4d70-9960-0fd6b158fa76-web-config\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.425493 master-0 kubenswrapper[31830]: I0319 12:18:07.422139 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/27068591-a951-4c79-8a88-6c31210a50af-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-2qllh\" (UID: \"27068591-a951-4c79-8a88-6c31210a50af\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-2qllh" Mar 19 12:18:07.425493 master-0 kubenswrapper[31830]: I0319 12:18:07.422218 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ae07d43-4069-4d70-9960-0fd6b158fa76-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.425493 master-0 kubenswrapper[31830]: I0319 12:18:07.422301 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3ae07d43-4069-4d70-9960-0fd6b158fa76-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.425493 master-0 kubenswrapper[31830]: I0319 12:18:07.422357 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3ae07d43-4069-4d70-9960-0fd6b158fa76-config-volume\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.425493 master-0 kubenswrapper[31830]: I0319 12:18:07.422384 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3ae07d43-4069-4d70-9960-0fd6b158fa76-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.425493 master-0 kubenswrapper[31830]: I0319 12:18:07.422406 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3ae07d43-4069-4d70-9960-0fd6b158fa76-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: 
\"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.425493 master-0 kubenswrapper[31830]: I0319 12:18:07.422474 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/27068591-a951-4c79-8a88-6c31210a50af-nginx-conf\") pod \"networking-console-plugin-7c6b76c555-2qllh\" (UID: \"27068591-a951-4c79-8a88-6c31210a50af\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-2qllh" Mar 19 12:18:07.425493 master-0 kubenswrapper[31830]: I0319 12:18:07.422499 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3ae07d43-4069-4d70-9960-0fd6b158fa76-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.425493 master-0 kubenswrapper[31830]: I0319 12:18:07.422533 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3ae07d43-4069-4d70-9960-0fd6b158fa76-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.425493 master-0 kubenswrapper[31830]: I0319 12:18:07.422560 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3ae07d43-4069-4d70-9960-0fd6b158fa76-config-out\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.425493 master-0 kubenswrapper[31830]: I0319 12:18:07.422582 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3ae07d43-4069-4d70-9960-0fd6b158fa76-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.425493 master-0 kubenswrapper[31830]: I0319 12:18:07.422636 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6xhz\" (UniqueName: \"kubernetes.io/projected/3ae07d43-4069-4d70-9960-0fd6b158fa76-kube-api-access-m6xhz\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.425493 master-0 kubenswrapper[31830]: I0319 12:18:07.422688 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3ae07d43-4069-4d70-9960-0fd6b158fa76-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.425493 master-0 kubenswrapper[31830]: E0319 12:18:07.422865 31830 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Mar 19 12:18:07.425493 master-0 kubenswrapper[31830]: E0319 12:18:07.422921 31830 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/27068591-a951-4c79-8a88-6c31210a50af-networking-console-plugin-cert podName:27068591-a951-4c79-8a88-6c31210a50af nodeName:}" failed. No retries permitted until 2026-03-19 12:18:07.922900658 +0000 UTC m=+226.471861362 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/27068591-a951-4c79-8a88-6c31210a50af-networking-console-plugin-cert") pod "networking-console-plugin-7c6b76c555-2qllh" (UID: "27068591-a951-4c79-8a88-6c31210a50af") : secret "networking-console-plugin-cert" not found Mar 19 12:18:07.425493 master-0 kubenswrapper[31830]: I0319 12:18:07.424103 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/27068591-a951-4c79-8a88-6c31210a50af-nginx-conf\") pod \"networking-console-plugin-7c6b76c555-2qllh\" (UID: \"27068591-a951-4c79-8a88-6c31210a50af\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-2qllh" Mar 19 12:18:07.524485 master-0 kubenswrapper[31830]: I0319 12:18:07.524448 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3ae07d43-4069-4d70-9960-0fd6b158fa76-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.524787 master-0 kubenswrapper[31830]: I0319 12:18:07.524773 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3ae07d43-4069-4d70-9960-0fd6b158fa76-web-config\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.524974 master-0 kubenswrapper[31830]: I0319 12:18:07.524955 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ae07d43-4069-4d70-9960-0fd6b158fa76-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.525109 master-0 kubenswrapper[31830]: I0319 12:18:07.525090 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3ae07d43-4069-4d70-9960-0fd6b158fa76-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.525240 master-0 kubenswrapper[31830]: I0319 12:18:07.525222 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3ae07d43-4069-4d70-9960-0fd6b158fa76-config-volume\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.525356 master-0 kubenswrapper[31830]: I0319 12:18:07.525335 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3ae07d43-4069-4d70-9960-0fd6b158fa76-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " 
pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.525587 master-0 kubenswrapper[31830]: I0319 12:18:07.525568 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3ae07d43-4069-4d70-9960-0fd6b158fa76-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.525718 master-0 kubenswrapper[31830]: I0319 12:18:07.525700 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3ae07d43-4069-4d70-9960-0fd6b158fa76-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.525850 master-0 kubenswrapper[31830]: I0319 12:18:07.525831 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3ae07d43-4069-4d70-9960-0fd6b158fa76-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.525975 master-0 kubenswrapper[31830]: I0319 12:18:07.525956 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3ae07d43-4069-4d70-9960-0fd6b158fa76-config-out\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.526084 master-0 kubenswrapper[31830]: I0319 12:18:07.526067 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3ae07d43-4069-4d70-9960-0fd6b158fa76-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.526205 master-0 kubenswrapper[31830]: I0319 12:18:07.526186 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6xhz\" (UniqueName: \"kubernetes.io/projected/3ae07d43-4069-4d70-9960-0fd6b158fa76-kube-api-access-m6xhz\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.526395 master-0 kubenswrapper[31830]: I0319 12:18:07.526353 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3ae07d43-4069-4d70-9960-0fd6b158fa76-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.526708 master-0 kubenswrapper[31830]: I0319 12:18:07.526678 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3ae07d43-4069-4d70-9960-0fd6b158fa76-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.527519 master-0 kubenswrapper[31830]: I0319 12:18:07.527478 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/3ae07d43-4069-4d70-9960-0fd6b158fa76-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.527905 master-0 kubenswrapper[31830]: I0319 12:18:07.527865 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3ae07d43-4069-4d70-9960-0fd6b158fa76-web-config\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.528411 master-0 kubenswrapper[31830]: I0319 12:18:07.528390 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3ae07d43-4069-4d70-9960-0fd6b158fa76-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.528662 master-0 kubenswrapper[31830]: I0319 12:18:07.528618 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3ae07d43-4069-4d70-9960-0fd6b158fa76-config-out\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.528954 master-0 kubenswrapper[31830]: I0319 12:18:07.528921 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3ae07d43-4069-4d70-9960-0fd6b158fa76-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.529094 master-0 kubenswrapper[31830]: I0319 12:18:07.529071 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3ae07d43-4069-4d70-9960-0fd6b158fa76-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.532255 master-0 kubenswrapper[31830]: I0319 12:18:07.532226 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3ae07d43-4069-4d70-9960-0fd6b158fa76-config-volume\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.533045 master-0 kubenswrapper[31830]: I0319 12:18:07.533018 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ae07d43-4069-4d70-9960-0fd6b158fa76-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.536050 master-0 kubenswrapper[31830]: I0319 12:18:07.536007 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3ae07d43-4069-4d70-9960-0fd6b158fa76-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.552079 master-0 kubenswrapper[31830]: I0319 12:18:07.549146 31830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6xhz\" (UniqueName: \"kubernetes.io/projected/3ae07d43-4069-4d70-9960-0fd6b158fa76-kube-api-access-m6xhz\") pod \"alertmanager-main-0\" (UID: \"3ae07d43-4069-4d70-9960-0fd6b158fa76\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.588175 master-0 kubenswrapper[31830]: I0319 12:18:07.588123 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 19 12:18:07.936099 master-0 kubenswrapper[31830]: I0319 12:18:07.935535 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/27068591-a951-4c79-8a88-6c31210a50af-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-2qllh\" (UID: \"27068591-a951-4c79-8a88-6c31210a50af\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-2qllh" Mar 19 12:18:07.939748 master-0 kubenswrapper[31830]: I0319 12:18:07.939696 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/27068591-a951-4c79-8a88-6c31210a50af-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-2qllh\" (UID: \"27068591-a951-4c79-8a88-6c31210a50af\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-2qllh" Mar 19 12:18:08.020267 master-0 kubenswrapper[31830]: I0319 12:18:08.020216 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 19 12:18:08.022103 master-0 kubenswrapper[31830]: W0319 12:18:08.022060 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ae07d43_4069_4d70_9960_0fd6b158fa76.slice/crio-5bec33b4cf01fc37e30f223ab638907ebc078b5e466060b254c031aeadf4904b WatchSource:0}: Error finding container 5bec33b4cf01fc37e30f223ab638907ebc078b5e466060b254c031aeadf4904b: Status 404 returned error can't find the container with id 5bec33b4cf01fc37e30f223ab638907ebc078b5e466060b254c031aeadf4904b Mar 19 12:18:08.111103 master-0 kubenswrapper[31830]: I0319 12:18:08.111035 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c6b76c555-2qllh" Mar 19 12:18:08.222300 master-0 kubenswrapper[31830]: I0319 12:18:08.222223 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-695474f69-bz8b7"] Mar 19 12:18:08.271607 master-0 kubenswrapper[31830]: I0319 12:18:08.270921 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-575589487f-9nhq4"] Mar 19 12:18:08.272363 master-0 kubenswrapper[31830]: I0319 12:18:08.272308 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-575589487f-9nhq4" Mar 19 12:18:08.287140 master-0 kubenswrapper[31830]: I0319 12:18:08.287074 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-575589487f-9nhq4"] Mar 19 12:18:08.342684 master-0 kubenswrapper[31830]: I0319 12:18:08.342620 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxd8l\" (UniqueName: \"kubernetes.io/projected/0a1dfc0b-250d-465f-a075-f088f5725873-kube-api-access-lxd8l\") pod \"console-575589487f-9nhq4\" (UID: \"0a1dfc0b-250d-465f-a075-f088f5725873\") " pod="openshift-console/console-575589487f-9nhq4" Mar 19 12:18:08.342684 master-0 kubenswrapper[31830]: I0319 12:18:08.342669 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a1dfc0b-250d-465f-a075-f088f5725873-console-serving-cert\") pod \"console-575589487f-9nhq4\" (UID: \"0a1dfc0b-250d-465f-a075-f088f5725873\") " pod="openshift-console/console-575589487f-9nhq4" Mar 19 12:18:08.342684 master-0 kubenswrapper[31830]: I0319 12:18:08.342689 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0a1dfc0b-250d-465f-a075-f088f5725873-service-ca\") pod \"console-575589487f-9nhq4\" (UID: \"0a1dfc0b-250d-465f-a075-f088f5725873\") " pod="openshift-console/console-575589487f-9nhq4" Mar 19 12:18:08.343111 master-0 kubenswrapper[31830]: I0319 12:18:08.342710 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0a1dfc0b-250d-465f-a075-f088f5725873-console-config\") pod \"console-575589487f-9nhq4\" (UID: \"0a1dfc0b-250d-465f-a075-f088f5725873\") " pod="openshift-console/console-575589487f-9nhq4" Mar 19 12:18:08.343111 master-0 kubenswrapper[31830]: I0319 12:18:08.342736 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0a1dfc0b-250d-465f-a075-f088f5725873-oauth-serving-cert\") pod \"console-575589487f-9nhq4\" (UID: \"0a1dfc0b-250d-465f-a075-f088f5725873\") " pod="openshift-console/console-575589487f-9nhq4" Mar 19 12:18:08.343111 master-0 kubenswrapper[31830]: I0319 12:18:08.342752 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a1dfc0b-250d-465f-a075-f088f5725873-trusted-ca-bundle\") pod \"console-575589487f-9nhq4\" (UID: \"0a1dfc0b-250d-465f-a075-f088f5725873\") " pod="openshift-console/console-575589487f-9nhq4" Mar 19 12:18:08.343111 master-0 kubenswrapper[31830]: I0319 12:18:08.342963 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0a1dfc0b-250d-465f-a075-f088f5725873-console-oauth-config\") pod \"console-575589487f-9nhq4\" (UID: \"0a1dfc0b-250d-465f-a075-f088f5725873\") " pod="openshift-console/console-575589487f-9nhq4" Mar 19 12:18:08.444414 master-0 kubenswrapper[31830]: I0319 12:18:08.444237 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxd8l\" (UniqueName: \"kubernetes.io/projected/0a1dfc0b-250d-465f-a075-f088f5725873-kube-api-access-lxd8l\") pod 
\"console-575589487f-9nhq4\" (UID: \"0a1dfc0b-250d-465f-a075-f088f5725873\") " pod="openshift-console/console-575589487f-9nhq4" Mar 19 12:18:08.444414 master-0 kubenswrapper[31830]: I0319 12:18:08.444407 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a1dfc0b-250d-465f-a075-f088f5725873-console-serving-cert\") pod \"console-575589487f-9nhq4\" (UID: \"0a1dfc0b-250d-465f-a075-f088f5725873\") " pod="openshift-console/console-575589487f-9nhq4" Mar 19 12:18:08.444653 master-0 kubenswrapper[31830]: I0319 12:18:08.444429 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0a1dfc0b-250d-465f-a075-f088f5725873-service-ca\") pod \"console-575589487f-9nhq4\" (UID: \"0a1dfc0b-250d-465f-a075-f088f5725873\") " pod="openshift-console/console-575589487f-9nhq4" Mar 19 12:18:08.444653 master-0 kubenswrapper[31830]: I0319 12:18:08.444452 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0a1dfc0b-250d-465f-a075-f088f5725873-console-config\") pod \"console-575589487f-9nhq4\" (UID: \"0a1dfc0b-250d-465f-a075-f088f5725873\") " pod="openshift-console/console-575589487f-9nhq4" Mar 19 12:18:08.444653 master-0 kubenswrapper[31830]: I0319 12:18:08.444476 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0a1dfc0b-250d-465f-a075-f088f5725873-oauth-serving-cert\") pod \"console-575589487f-9nhq4\" (UID: \"0a1dfc0b-250d-465f-a075-f088f5725873\") " pod="openshift-console/console-575589487f-9nhq4" Mar 19 12:18:08.444653 master-0 kubenswrapper[31830]: I0319 12:18:08.444493 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a1dfc0b-250d-465f-a075-f088f5725873-trusted-ca-bundle\") pod \"console-575589487f-9nhq4\" (UID: \"0a1dfc0b-250d-465f-a075-f088f5725873\") " pod="openshift-console/console-575589487f-9nhq4" Mar 19 12:18:08.444653 master-0 kubenswrapper[31830]: I0319 12:18:08.444523 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0a1dfc0b-250d-465f-a075-f088f5725873-console-oauth-config\") pod \"console-575589487f-9nhq4\" (UID: \"0a1dfc0b-250d-465f-a075-f088f5725873\") " pod="openshift-console/console-575589487f-9nhq4" Mar 19 12:18:08.445652 master-0 kubenswrapper[31830]: I0319 12:18:08.445624 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0a1dfc0b-250d-465f-a075-f088f5725873-service-ca\") pod \"console-575589487f-9nhq4\" (UID: \"0a1dfc0b-250d-465f-a075-f088f5725873\") " pod="openshift-console/console-575589487f-9nhq4" Mar 19 12:18:08.445821 master-0 kubenswrapper[31830]: I0319 12:18:08.445757 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0a1dfc0b-250d-465f-a075-f088f5725873-oauth-serving-cert\") pod \"console-575589487f-9nhq4\" (UID: \"0a1dfc0b-250d-465f-a075-f088f5725873\") " pod="openshift-console/console-575589487f-9nhq4" Mar 19 12:18:08.445821 master-0 kubenswrapper[31830]: I0319 12:18:08.445757 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/0a1dfc0b-250d-465f-a075-f088f5725873-console-config\") pod \"console-575589487f-9nhq4\" (UID: \"0a1dfc0b-250d-465f-a075-f088f5725873\") " pod="openshift-console/console-575589487f-9nhq4" Mar 19 12:18:08.446403 master-0 kubenswrapper[31830]: I0319 12:18:08.446369 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a1dfc0b-250d-465f-a075-f088f5725873-trusted-ca-bundle\") pod \"console-575589487f-9nhq4\" (UID: \"0a1dfc0b-250d-465f-a075-f088f5725873\") " pod="openshift-console/console-575589487f-9nhq4" Mar 19 12:18:08.448545 master-0 kubenswrapper[31830]: I0319 12:18:08.448348 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0a1dfc0b-250d-465f-a075-f088f5725873-console-oauth-config\") pod \"console-575589487f-9nhq4\" (UID: \"0a1dfc0b-250d-465f-a075-f088f5725873\") " pod="openshift-console/console-575589487f-9nhq4" Mar 19 12:18:08.448545 master-0 kubenswrapper[31830]: I0319 12:18:08.448500 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a1dfc0b-250d-465f-a075-f088f5725873-console-serving-cert\") pod \"console-575589487f-9nhq4\" (UID: \"0a1dfc0b-250d-465f-a075-f088f5725873\") " pod="openshift-console/console-575589487f-9nhq4" Mar 19 12:18:08.459272 master-0 kubenswrapper[31830]: I0319 12:18:08.459214 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxd8l\" (UniqueName: \"kubernetes.io/projected/0a1dfc0b-250d-465f-a075-f088f5725873-kube-api-access-lxd8l\") pod \"console-575589487f-9nhq4\" (UID: \"0a1dfc0b-250d-465f-a075-f088f5725873\") " pod="openshift-console/console-575589487f-9nhq4" Mar 19 12:18:08.532562 master-0 kubenswrapper[31830]: I0319 12:18:08.531566 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-7c6b76c555-2qllh"] Mar 19 12:18:08.532562 master-0 kubenswrapper[31830]: W0319 12:18:08.532246 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27068591_a951_4c79_8a88_6c31210a50af.slice/crio-e723367cac63ae49e5f0c540c1569aa70105ac9b158f61f06248fd5a3d1295a4 WatchSource:0}: Error finding container e723367cac63ae49e5f0c540c1569aa70105ac9b158f61f06248fd5a3d1295a4: Status 404 returned error can't find the container with id e723367cac63ae49e5f0c540c1569aa70105ac9b158f61f06248fd5a3d1295a4 Mar 19 12:18:08.597244 master-0 kubenswrapper[31830]: I0319 12:18:08.597170 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-575589487f-9nhq4" Mar 19 12:18:09.017786 master-0 kubenswrapper[31830]: I0319 12:18:09.017699 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-575589487f-9nhq4"] Mar 19 12:18:09.019314 master-0 kubenswrapper[31830]: W0319 12:18:09.019260 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a1dfc0b_250d_465f_a075_f088f5725873.slice/crio-0248a7172bf1cd2d7f2dc54cf87753389a1b846e5d72dd00c6cb6d15a27bb0b2 WatchSource:0}: Error finding container 0248a7172bf1cd2d7f2dc54cf87753389a1b846e5d72dd00c6cb6d15a27bb0b2: Status 404 returned error can't find the container with id 0248a7172bf1cd2d7f2dc54cf87753389a1b846e5d72dd00c6cb6d15a27bb0b2 Mar 19 12:18:09.029761 master-0 kubenswrapper[31830]: I0319 12:18:09.029707 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-7c6b76c555-2qllh" event={"ID":"27068591-a951-4c79-8a88-6c31210a50af","Type":"ContainerStarted","Data":"e723367cac63ae49e5f0c540c1569aa70105ac9b158f61f06248fd5a3d1295a4"} Mar 19 12:18:09.031642 master-0 kubenswrapper[31830]: I0319 12:18:09.031586 31830 generic.go:334] "Generic (PLEG): container finished" podID="3ae07d43-4069-4d70-9960-0fd6b158fa76" containerID="13b8210097c1519aeb790567a886b9be75e30d816b0ad04e362d252f15625021" exitCode=0 Mar 19 12:18:09.031718 master-0 kubenswrapper[31830]: I0319 12:18:09.031675 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3ae07d43-4069-4d70-9960-0fd6b158fa76","Type":"ContainerDied","Data":"13b8210097c1519aeb790567a886b9be75e30d816b0ad04e362d252f15625021"} Mar 19 12:18:09.031718 master-0 kubenswrapper[31830]: I0319 12:18:09.031699 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3ae07d43-4069-4d70-9960-0fd6b158fa76","Type":"ContainerStarted","Data":"5bec33b4cf01fc37e30f223ab638907ebc078b5e466060b254c031aeadf4904b"} Mar 19 12:18:09.033428 master-0 kubenswrapper[31830]: I0319 12:18:09.033394 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-575589487f-9nhq4" event={"ID":"0a1dfc0b-250d-465f-a075-f088f5725873","Type":"ContainerStarted","Data":"0248a7172bf1cd2d7f2dc54cf87753389a1b846e5d72dd00c6cb6d15a27bb0b2"} Mar 19 12:18:09.093419 master-0 kubenswrapper[31830]: I0319 12:18:09.093383 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-69f4fb98cb-qvvqh" podUID="be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26" containerName="console" containerID="cri-o://f2706c58316b52da324f26d06ba5df2d4a83d1d14fe0c595714914865890e8db" gracePeriod=15 Mar 19 12:18:09.482041 master-0 kubenswrapper[31830]: I0319 12:18:09.482002 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-69f4fb98cb-qvvqh_be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26/console/0.log" Mar 19 12:18:09.482945 master-0 kubenswrapper[31830]: I0319 12:18:09.482082 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-69f4fb98cb-qvvqh" Mar 19 12:18:09.663773 master-0 kubenswrapper[31830]: I0319 12:18:09.662647 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-console-config\") pod \"be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26\" (UID: \"be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26\") " Mar 19 12:18:09.663773 master-0 kubenswrapper[31830]: I0319 12:18:09.662692 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-service-ca\") pod \"be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26\" (UID: \"be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26\") " Mar 19 12:18:09.663773 master-0 kubenswrapper[31830]: I0319 12:18:09.662736 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-console-serving-cert\") pod \"be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26\" (UID: \"be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26\") " Mar 19 12:18:09.663773 master-0 kubenswrapper[31830]: I0319 12:18:09.662851 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-oauth-serving-cert\") pod \"be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26\" (UID: \"be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26\") " Mar 19 12:18:09.663773 master-0 kubenswrapper[31830]: I0319 12:18:09.662931 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-console-oauth-config\") pod \"be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26\" (UID: \"be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26\") " Mar 19 12:18:09.663773 master-0 kubenswrapper[31830]: I0319 12:18:09.663013 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwhrt\" (UniqueName: \"kubernetes.io/projected/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-kube-api-access-vwhrt\") pod \"be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26\" (UID: \"be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26\") " Mar 19 12:18:09.663773 master-0 kubenswrapper[31830]: I0319 12:18:09.663223 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-console-config" (OuterVolumeSpecName: "console-config") pod "be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26" (UID: "be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:18:09.663773 master-0 kubenswrapper[31830]: I0319 12:18:09.663683 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26" (UID: "be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:18:09.664388 master-0 kubenswrapper[31830]: I0319 12:18:09.664288 31830 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-console-config\") on node \"master-0\" DevicePath \"\"" Mar 19 12:18:09.664388 master-0 kubenswrapper[31830]: I0319 12:18:09.664327 31830 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 12:18:09.665096 master-0 kubenswrapper[31830]: I0319 12:18:09.665000 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-service-ca" (OuterVolumeSpecName: "service-ca") pod "be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26" (UID: "be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:18:09.666177 master-0 kubenswrapper[31830]: I0319 12:18:09.666024 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26" (UID: "be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:18:09.666619 master-0 kubenswrapper[31830]: I0319 12:18:09.666562 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26" (UID: "be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:18:09.667260 master-0 kubenswrapper[31830]: I0319 12:18:09.667187 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-kube-api-access-vwhrt" (OuterVolumeSpecName: "kube-api-access-vwhrt") pod "be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26" (UID: "be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26"). InnerVolumeSpecName "kube-api-access-vwhrt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:18:09.767319 master-0 kubenswrapper[31830]: I0319 12:18:09.767266 31830 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 19 12:18:09.767319 master-0 kubenswrapper[31830]: I0319 12:18:09.767303 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwhrt\" (UniqueName: \"kubernetes.io/projected/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-kube-api-access-vwhrt\") on node \"master-0\" DevicePath \"\"" Mar 19 12:18:09.767319 master-0 kubenswrapper[31830]: I0319 12:18:09.767340 31830 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 19 12:18:09.767665 master-0 kubenswrapper[31830]: I0319 12:18:09.767353 31830 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 12:18:10.046957 master-0 kubenswrapper[31830]: I0319 12:18:10.046893 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-575589487f-9nhq4" event={"ID":"0a1dfc0b-250d-465f-a075-f088f5725873","Type":"ContainerStarted","Data":"15e02e8bcdf411a7b55a0689ad33ce3a8e430d3a47cdd9b4d8ebfc49858aed75"} Mar 19 12:18:10.048780 master-0 kubenswrapper[31830]: I0319 12:18:10.048758 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-69f4fb98cb-qvvqh_be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26/console/0.log" Mar 19 12:18:10.048880 master-0 kubenswrapper[31830]: I0319 12:18:10.048811 31830 generic.go:334] "Generic (PLEG): container finished" podID="be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26" containerID="f2706c58316b52da324f26d06ba5df2d4a83d1d14fe0c595714914865890e8db" exitCode=2 Mar 19 12:18:10.048880 master-0 kubenswrapper[31830]: I0319 12:18:10.048840 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69f4fb98cb-qvvqh" event={"ID":"be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26","Type":"ContainerDied","Data":"f2706c58316b52da324f26d06ba5df2d4a83d1d14fe0c595714914865890e8db"} Mar 19 12:18:10.048880 master-0 kubenswrapper[31830]: I0319 12:18:10.048861 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69f4fb98cb-qvvqh" event={"ID":"be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26","Type":"ContainerDied","Data":"f7b663ffb2bb48e4ecf06f9105fe20f74da8a02ae5301fc423a27a455c6d9d33"} Mar 19 12:18:10.048880 master-0 kubenswrapper[31830]: I0319 12:18:10.048878 31830 scope.go:117] "RemoveContainer" containerID="f2706c58316b52da324f26d06ba5df2d4a83d1d14fe0c595714914865890e8db" Mar 19 12:18:10.049008 master-0 kubenswrapper[31830]: I0319 12:18:10.048970 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-69f4fb98cb-qvvqh" Mar 19 12:18:10.078430 master-0 kubenswrapper[31830]: I0319 12:18:10.078347 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-575589487f-9nhq4" podStartSLOduration=2.078326165 podStartE2EDuration="2.078326165s" podCreationTimestamp="2026-03-19 12:18:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:18:10.070894981 +0000 UTC m=+228.619855685" watchObservedRunningTime="2026-03-19 12:18:10.078326165 +0000 UTC m=+228.627286869" Mar 19 12:18:10.096510 master-0 kubenswrapper[31830]: I0319 12:18:10.096426 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-69f4fb98cb-qvvqh"] Mar 19 12:18:10.102475 master-0 kubenswrapper[31830]: I0319 12:18:10.102426 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-69f4fb98cb-qvvqh"] Mar 19 12:18:10.749998 master-0 kubenswrapper[31830]: I0319 12:18:10.749962 31830 scope.go:117] "RemoveContainer" containerID="f2706c58316b52da324f26d06ba5df2d4a83d1d14fe0c595714914865890e8db" Mar 19 12:18:10.750535 master-0 kubenswrapper[31830]: E0319 12:18:10.750502 31830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2706c58316b52da324f26d06ba5df2d4a83d1d14fe0c595714914865890e8db\": container with ID starting with f2706c58316b52da324f26d06ba5df2d4a83d1d14fe0c595714914865890e8db not found: ID does not exist" containerID="f2706c58316b52da324f26d06ba5df2d4a83d1d14fe0c595714914865890e8db" Mar 19 12:18:10.750667 master-0 kubenswrapper[31830]: I0319 12:18:10.750630 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2706c58316b52da324f26d06ba5df2d4a83d1d14fe0c595714914865890e8db"} err="failed to get container status \"f2706c58316b52da324f26d06ba5df2d4a83d1d14fe0c595714914865890e8db\": rpc error: code = NotFound desc = could not find container \"f2706c58316b52da324f26d06ba5df2d4a83d1d14fe0c595714914865890e8db\": container with ID starting with f2706c58316b52da324f26d06ba5df2d4a83d1d14fe0c595714914865890e8db not found: ID does not exist" Mar 19 12:18:11.060086 master-0 kubenswrapper[31830]: I0319 12:18:11.060034 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3ae07d43-4069-4d70-9960-0fd6b158fa76","Type":"ContainerStarted","Data":"e729e22a92780785ce98884c9577d7dddd9494db72b870f3a8bc6fdfc1b6df84"} Mar 19 12:18:11.063689 master-0 kubenswrapper[31830]: I0319 12:18:11.063640 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-7c6b76c555-2qllh" event={"ID":"27068591-a951-4c79-8a88-6c31210a50af","Type":"ContainerStarted","Data":"037e426b569ca1ac2a1ecd66620dd3c7dc0cb93a814ef6ac9ee8dac3b6ef1ae8"} Mar 19 12:18:11.083858 master-0 kubenswrapper[31830]: I0319 12:18:11.082660 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-7c6b76c555-2qllh" podStartSLOduration=1.8384273549999999 podStartE2EDuration="4.082635784s" podCreationTimestamp="2026-03-19 12:18:07 +0000 UTC" firstStartedPulling="2026-03-19 12:18:08.534217004 +0000 UTC m=+227.083177698" lastFinishedPulling="2026-03-19 12:18:10.778425433 +0000 UTC m=+229.327386127" observedRunningTime="2026-03-19 12:18:11.080765305 
+0000 UTC m=+229.629726019" watchObservedRunningTime="2026-03-19 12:18:11.082635784 +0000 UTC m=+229.631596488" Mar 19 12:18:11.687640 master-0 kubenswrapper[31830]: I0319 12:18:11.687559 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26" path="/var/lib/kubelet/pods/be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26/volumes" Mar 19 12:18:12.082363 master-0 kubenswrapper[31830]: I0319 12:18:12.082293 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3ae07d43-4069-4d70-9960-0fd6b158fa76","Type":"ContainerStarted","Data":"b573f902980387d587043547f9499cbcb9b90ae738652fe075bfb387ae7b2c51"} Mar 19 12:18:12.082363 master-0 kubenswrapper[31830]: I0319 12:18:12.082363 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3ae07d43-4069-4d70-9960-0fd6b158fa76","Type":"ContainerStarted","Data":"131cda342b67993d7d898a3e4978a11737a65ba72e48aa90e353095fa1d37391"} Mar 19 12:18:12.082996 master-0 kubenswrapper[31830]: I0319 12:18:12.082380 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3ae07d43-4069-4d70-9960-0fd6b158fa76","Type":"ContainerStarted","Data":"347b5cef4b92965544088782e7b697b24709744be1b0d5441336ef731fc70acf"} Mar 19 12:18:12.082996 master-0 kubenswrapper[31830]: I0319 12:18:12.082394 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3ae07d43-4069-4d70-9960-0fd6b158fa76","Type":"ContainerStarted","Data":"c2f93d5d1a93ea5622cb8892bcdfc8989f24c2ff2a0579f797b8da1bd43127d4"} Mar 19 12:18:12.082996 master-0 kubenswrapper[31830]: I0319 12:18:12.082406 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3ae07d43-4069-4d70-9960-0fd6b158fa76","Type":"ContainerStarted","Data":"eb889b42796975d56a2db4fb29fcbe4641738a675ec27a29a46c37fc84fa4a2b"} Mar 19 12:18:12.122270 master-0 kubenswrapper[31830]: I0319 12:18:12.122185 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=3.367803334 podStartE2EDuration="5.122169238s" podCreationTimestamp="2026-03-19 12:18:07 +0000 UTC" firstStartedPulling="2026-03-19 12:18:09.036675304 +0000 UTC m=+227.585636008" lastFinishedPulling="2026-03-19 12:18:10.791041208 +0000 UTC m=+229.340001912" observedRunningTime="2026-03-19 12:18:12.121066983 +0000 UTC m=+230.670027687" watchObservedRunningTime="2026-03-19 12:18:12.122169238 +0000 UTC m=+230.671129942" Mar 19 12:18:14.081826 master-0 kubenswrapper[31830]: I0319 12:18:14.081677 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv"] Mar 19 12:18:14.413155 master-0 kubenswrapper[31830]: I0319 12:18:14.413025 31830 patch_prober.go:28] interesting pod/console-5b87647974-5zv6r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.100:8443/health\": dial tcp 10.128.0.100:8443: connect: connection refused" start-of-body= Mar 19 12:18:14.413155 master-0 kubenswrapper[31830]: I0319 12:18:14.413093 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5b87647974-5zv6r" podUID="bc3b0ed8-8383-4d41-8b15-46cab419217f" containerName="console" probeResult="failure" output="Get \"https://10.128.0.100:8443/health\": dial tcp 
10.128.0.100:8443: connect: connection refused" Mar 19 12:18:18.507089 master-0 kubenswrapper[31830]: I0319 12:18:18.507017 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 19 12:18:18.507842 master-0 kubenswrapper[31830]: E0319 12:18:18.507436 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26" containerName="console" Mar 19 12:18:18.507842 master-0 kubenswrapper[31830]: I0319 12:18:18.507448 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26" containerName="console" Mar 19 12:18:18.507842 master-0 kubenswrapper[31830]: I0319 12:18:18.507685 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="be06cb2e-ccfa-47fe-aaa9-5dbc83a40a26" containerName="console" Mar 19 12:18:18.508454 master-0 kubenswrapper[31830]: I0319 12:18:18.508430 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 19 12:18:18.512107 master-0 kubenswrapper[31830]: I0319 12:18:18.512073 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-hhsz7" Mar 19 12:18:18.512423 master-0 kubenswrapper[31830]: I0319 12:18:18.512392 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 19 12:18:18.568852 master-0 kubenswrapper[31830]: I0319 12:18:18.568756 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 19 12:18:18.601608 master-0 kubenswrapper[31830]: I0319 12:18:18.597548 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-575589487f-9nhq4" Mar 19 12:18:18.601608 master-0 kubenswrapper[31830]: I0319 12:18:18.597619 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-575589487f-9nhq4" Mar 19 12:18:18.601608 master-0 kubenswrapper[31830]: I0319 12:18:18.598918 31830 patch_prober.go:28] interesting pod/console-575589487f-9nhq4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Mar 19 12:18:18.601608 master-0 kubenswrapper[31830]: I0319 12:18:18.599001 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-575589487f-9nhq4" podUID="0a1dfc0b-250d-465f-a075-f088f5725873" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Mar 19 12:18:18.637163 master-0 kubenswrapper[31830]: I0319 12:18:18.637079 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fad82d08-9bed-4000-8ade-6540ae9572aa-var-lock\") pod \"installer-5-master-0\" (UID: \"fad82d08-9bed-4000-8ade-6540ae9572aa\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 19 12:18:18.637665 master-0 kubenswrapper[31830]: I0319 12:18:18.637614 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fad82d08-9bed-4000-8ade-6540ae9572aa-kube-api-access\") pod \"installer-5-master-0\" (UID: \"fad82d08-9bed-4000-8ade-6540ae9572aa\") " 
pod="openshift-kube-apiserver/installer-5-master-0" Mar 19 12:18:18.638775 master-0 kubenswrapper[31830]: I0319 12:18:18.638737 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fad82d08-9bed-4000-8ade-6540ae9572aa-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"fad82d08-9bed-4000-8ade-6540ae9572aa\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 19 12:18:18.740626 master-0 kubenswrapper[31830]: I0319 12:18:18.740563 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fad82d08-9bed-4000-8ade-6540ae9572aa-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"fad82d08-9bed-4000-8ade-6540ae9572aa\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 19 12:18:18.740906 master-0 kubenswrapper[31830]: I0319 12:18:18.740661 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fad82d08-9bed-4000-8ade-6540ae9572aa-var-lock\") pod \"installer-5-master-0\" (UID: \"fad82d08-9bed-4000-8ade-6540ae9572aa\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 19 12:18:18.740906 master-0 kubenswrapper[31830]: I0319 12:18:18.740737 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fad82d08-9bed-4000-8ade-6540ae9572aa-var-lock\") pod \"installer-5-master-0\" (UID: \"fad82d08-9bed-4000-8ade-6540ae9572aa\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 19 12:18:18.740906 master-0 kubenswrapper[31830]: I0319 12:18:18.740782 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fad82d08-9bed-4000-8ade-6540ae9572aa-kube-api-access\") pod \"installer-5-master-0\" (UID: \"fad82d08-9bed-4000-8ade-6540ae9572aa\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 19 12:18:18.741213 master-0 kubenswrapper[31830]: I0319 12:18:18.740878 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fad82d08-9bed-4000-8ade-6540ae9572aa-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"fad82d08-9bed-4000-8ade-6540ae9572aa\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 19 12:18:18.760267 master-0 kubenswrapper[31830]: I0319 12:18:18.760149 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fad82d08-9bed-4000-8ade-6540ae9572aa-kube-api-access\") pod \"installer-5-master-0\" (UID: \"fad82d08-9bed-4000-8ade-6540ae9572aa\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 19 12:18:18.849620 master-0 kubenswrapper[31830]: I0319 12:18:18.849548 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 19 12:18:19.314078 master-0 kubenswrapper[31830]: I0319 12:18:19.314030 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 19 12:18:20.142594 master-0 kubenswrapper[31830]: I0319 12:18:20.142503 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"fad82d08-9bed-4000-8ade-6540ae9572aa","Type":"ContainerStarted","Data":"f11e1ab8076e7e1b6a74649f713d7819aba94f674200fc45abba2d1059d6751b"} Mar 19 12:18:20.142594 master-0 kubenswrapper[31830]: I0319 12:18:20.142596 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"fad82d08-9bed-4000-8ade-6540ae9572aa","Type":"ContainerStarted","Data":"5941e3cbe5b654a3d2631f8cbae275d2b61416a9d4db7d86cc05bd1921738a6a"} Mar 19 12:18:20.181427 master-0 kubenswrapper[31830]: I0319 12:18:20.181349 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-5-master-0" podStartSLOduration=2.181325073 podStartE2EDuration="2.181325073s" podCreationTimestamp="2026-03-19 12:18:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:18:20.179648701 +0000 UTC m=+238.728609405" watchObservedRunningTime="2026-03-19 12:18:20.181325073 +0000 UTC m=+238.730285787" Mar 19 12:18:24.003031 master-0 kubenswrapper[31830]: I0319 12:18:24.002913 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-c8c668d4-qqj8z" Mar 19 12:18:24.007513 master-0 kubenswrapper[31830]: I0319 12:18:24.007452 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-c8c668d4-qqj8z" Mar 19 12:18:24.413363 master-0 kubenswrapper[31830]: I0319 12:18:24.413120 31830 patch_prober.go:28] interesting pod/console-5b87647974-5zv6r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.100:8443/health\": dial tcp 10.128.0.100:8443: connect: connection refused" start-of-body= Mar 19 12:18:24.413363 master-0 kubenswrapper[31830]: I0319 12:18:24.413281 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5b87647974-5zv6r" podUID="bc3b0ed8-8383-4d41-8b15-46cab419217f" containerName="console" probeResult="failure" output="Get \"https://10.128.0.100:8443/health\": dial tcp 10.128.0.100:8443: connect: connection refused" Mar 19 12:18:28.598733 master-0 kubenswrapper[31830]: I0319 12:18:28.598638 31830 patch_prober.go:28] interesting pod/console-575589487f-9nhq4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Mar 19 12:18:28.599365 master-0 kubenswrapper[31830]: I0319 12:18:28.598750 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-575589487f-9nhq4" podUID="0a1dfc0b-250d-465f-a075-f088f5725873" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Mar 19 12:18:33.266994 master-0 kubenswrapper[31830]: I0319 12:18:33.266872 31830 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-console/console-695474f69-bz8b7" podUID="6db87e99-89b9-4f97-b6ca-b236cc27b901" containerName="console" containerID="cri-o://f8a329d8e4cd3c8277440511f6592f1a504a1d602403e775fa35cb53dafb9bf0" gracePeriod=15 Mar 19 12:18:33.855141 master-0 kubenswrapper[31830]: I0319 12:18:33.855091 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-695474f69-bz8b7_6db87e99-89b9-4f97-b6ca-b236cc27b901/console/0.log" Mar 19 12:18:33.855367 master-0 kubenswrapper[31830]: I0319 12:18:33.855157 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-695474f69-bz8b7" Mar 19 12:18:33.928476 master-0 kubenswrapper[31830]: I0319 12:18:33.928388 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6db87e99-89b9-4f97-b6ca-b236cc27b901-oauth-serving-cert\") pod \"6db87e99-89b9-4f97-b6ca-b236cc27b901\" (UID: \"6db87e99-89b9-4f97-b6ca-b236cc27b901\") " Mar 19 12:18:33.928476 master-0 kubenswrapper[31830]: I0319 12:18:33.928453 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6db87e99-89b9-4f97-b6ca-b236cc27b901-trusted-ca-bundle\") pod \"6db87e99-89b9-4f97-b6ca-b236cc27b901\" (UID: \"6db87e99-89b9-4f97-b6ca-b236cc27b901\") " Mar 19 12:18:33.928742 master-0 kubenswrapper[31830]: I0319 12:18:33.928544 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6db87e99-89b9-4f97-b6ca-b236cc27b901-console-serving-cert\") pod \"6db87e99-89b9-4f97-b6ca-b236cc27b901\" (UID: \"6db87e99-89b9-4f97-b6ca-b236cc27b901\") " Mar 19 12:18:33.928742 master-0 kubenswrapper[31830]: I0319 12:18:33.928581 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6db87e99-89b9-4f97-b6ca-b236cc27b901-service-ca\") pod \"6db87e99-89b9-4f97-b6ca-b236cc27b901\" (UID: \"6db87e99-89b9-4f97-b6ca-b236cc27b901\") " Mar 19 12:18:33.928742 master-0 kubenswrapper[31830]: I0319 12:18:33.928617 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6db87e99-89b9-4f97-b6ca-b236cc27b901-console-config\") pod \"6db87e99-89b9-4f97-b6ca-b236cc27b901\" (UID: \"6db87e99-89b9-4f97-b6ca-b236cc27b901\") " Mar 19 12:18:33.928742 master-0 kubenswrapper[31830]: I0319 12:18:33.928709 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkhsr\" (UniqueName: \"kubernetes.io/projected/6db87e99-89b9-4f97-b6ca-b236cc27b901-kube-api-access-zkhsr\") pod \"6db87e99-89b9-4f97-b6ca-b236cc27b901\" (UID: \"6db87e99-89b9-4f97-b6ca-b236cc27b901\") " Mar 19 12:18:33.928929 master-0 kubenswrapper[31830]: I0319 12:18:33.928754 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6db87e99-89b9-4f97-b6ca-b236cc27b901-console-oauth-config\") pod \"6db87e99-89b9-4f97-b6ca-b236cc27b901\" (UID: \"6db87e99-89b9-4f97-b6ca-b236cc27b901\") " Mar 19 12:18:33.929657 master-0 kubenswrapper[31830]: I0319 12:18:33.929614 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6db87e99-89b9-4f97-b6ca-b236cc27b901-oauth-serving-cert" (OuterVolumeSpecName: 
"oauth-serving-cert") pod "6db87e99-89b9-4f97-b6ca-b236cc27b901" (UID: "6db87e99-89b9-4f97-b6ca-b236cc27b901"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:18:33.929885 master-0 kubenswrapper[31830]: I0319 12:18:33.929822 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6db87e99-89b9-4f97-b6ca-b236cc27b901-console-config" (OuterVolumeSpecName: "console-config") pod "6db87e99-89b9-4f97-b6ca-b236cc27b901" (UID: "6db87e99-89b9-4f97-b6ca-b236cc27b901"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:18:33.930036 master-0 kubenswrapper[31830]: I0319 12:18:33.929890 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6db87e99-89b9-4f97-b6ca-b236cc27b901-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6db87e99-89b9-4f97-b6ca-b236cc27b901" (UID: "6db87e99-89b9-4f97-b6ca-b236cc27b901"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:18:33.930237 master-0 kubenswrapper[31830]: I0319 12:18:33.930214 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6db87e99-89b9-4f97-b6ca-b236cc27b901-service-ca" (OuterVolumeSpecName: "service-ca") pod "6db87e99-89b9-4f97-b6ca-b236cc27b901" (UID: "6db87e99-89b9-4f97-b6ca-b236cc27b901"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:18:33.931477 master-0 kubenswrapper[31830]: I0319 12:18:33.931425 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6db87e99-89b9-4f97-b6ca-b236cc27b901-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6db87e99-89b9-4f97-b6ca-b236cc27b901" (UID: "6db87e99-89b9-4f97-b6ca-b236cc27b901"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:18:33.931912 master-0 kubenswrapper[31830]: I0319 12:18:33.931869 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6db87e99-89b9-4f97-b6ca-b236cc27b901-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6db87e99-89b9-4f97-b6ca-b236cc27b901" (UID: "6db87e99-89b9-4f97-b6ca-b236cc27b901"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:18:33.932335 master-0 kubenswrapper[31830]: I0319 12:18:33.932287 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6db87e99-89b9-4f97-b6ca-b236cc27b901-kube-api-access-zkhsr" (OuterVolumeSpecName: "kube-api-access-zkhsr") pod "6db87e99-89b9-4f97-b6ca-b236cc27b901" (UID: "6db87e99-89b9-4f97-b6ca-b236cc27b901"). InnerVolumeSpecName "kube-api-access-zkhsr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:18:34.030512 master-0 kubenswrapper[31830]: I0319 12:18:34.030433 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkhsr\" (UniqueName: \"kubernetes.io/projected/6db87e99-89b9-4f97-b6ca-b236cc27b901-kube-api-access-zkhsr\") on node \"master-0\" DevicePath \"\"" Mar 19 12:18:34.030512 master-0 kubenswrapper[31830]: I0319 12:18:34.030476 31830 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6db87e99-89b9-4f97-b6ca-b236cc27b901-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 19 12:18:34.030512 master-0 kubenswrapper[31830]: I0319 12:18:34.030486 31830 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6db87e99-89b9-4f97-b6ca-b236cc27b901-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 12:18:34.030512 master-0 kubenswrapper[31830]: I0319 12:18:34.030494 31830 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6db87e99-89b9-4f97-b6ca-b236cc27b901-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:18:34.030512 master-0 kubenswrapper[31830]: I0319 12:18:34.030503 31830 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6db87e99-89b9-4f97-b6ca-b236cc27b901-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 12:18:34.030512 master-0 kubenswrapper[31830]: I0319 12:18:34.030513 31830 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6db87e99-89b9-4f97-b6ca-b236cc27b901-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 19 12:18:34.030512 master-0 kubenswrapper[31830]: I0319 12:18:34.030523 31830 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6db87e99-89b9-4f97-b6ca-b236cc27b901-console-config\") on node \"master-0\" DevicePath \"\"" Mar 19 12:18:34.384427 master-0 kubenswrapper[31830]: I0319 12:18:34.384350 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-695474f69-bz8b7_6db87e99-89b9-4f97-b6ca-b236cc27b901/console/0.log" Mar 19 12:18:34.385233 master-0 kubenswrapper[31830]: I0319 12:18:34.384437 31830 generic.go:334] "Generic (PLEG): container finished" podID="6db87e99-89b9-4f97-b6ca-b236cc27b901" containerID="f8a329d8e4cd3c8277440511f6592f1a504a1d602403e775fa35cb53dafb9bf0" exitCode=2 Mar 19 12:18:34.385233 master-0 kubenswrapper[31830]: I0319 12:18:34.384486 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-695474f69-bz8b7" event={"ID":"6db87e99-89b9-4f97-b6ca-b236cc27b901","Type":"ContainerDied","Data":"f8a329d8e4cd3c8277440511f6592f1a504a1d602403e775fa35cb53dafb9bf0"} Mar 19 12:18:34.385233 master-0 kubenswrapper[31830]: I0319 12:18:34.384506 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-695474f69-bz8b7" Mar 19 12:18:34.385233 master-0 kubenswrapper[31830]: I0319 12:18:34.384533 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-695474f69-bz8b7" event={"ID":"6db87e99-89b9-4f97-b6ca-b236cc27b901","Type":"ContainerDied","Data":"b87d7c8814265a3da987480e78bc686cde71c16189607387a5f22d78ca5c4660"} Mar 19 12:18:34.385233 master-0 kubenswrapper[31830]: I0319 12:18:34.384562 31830 scope.go:117] "RemoveContainer" containerID="f8a329d8e4cd3c8277440511f6592f1a504a1d602403e775fa35cb53dafb9bf0" Mar 19 12:18:34.401505 master-0 kubenswrapper[31830]: I0319 12:18:34.401459 31830 scope.go:117] "RemoveContainer" containerID="f8a329d8e4cd3c8277440511f6592f1a504a1d602403e775fa35cb53dafb9bf0" Mar 19 12:18:34.401920 master-0 kubenswrapper[31830]: E0319 12:18:34.401885 31830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8a329d8e4cd3c8277440511f6592f1a504a1d602403e775fa35cb53dafb9bf0\": container with ID starting with f8a329d8e4cd3c8277440511f6592f1a504a1d602403e775fa35cb53dafb9bf0 not found: ID does not exist" containerID="f8a329d8e4cd3c8277440511f6592f1a504a1d602403e775fa35cb53dafb9bf0" Mar 19 12:18:34.401970 master-0 kubenswrapper[31830]: I0319 12:18:34.401924 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8a329d8e4cd3c8277440511f6592f1a504a1d602403e775fa35cb53dafb9bf0"} err="failed to get container status \"f8a329d8e4cd3c8277440511f6592f1a504a1d602403e775fa35cb53dafb9bf0\": rpc error: code = NotFound desc = could not find container \"f8a329d8e4cd3c8277440511f6592f1a504a1d602403e775fa35cb53dafb9bf0\": container with ID starting with f8a329d8e4cd3c8277440511f6592f1a504a1d602403e775fa35cb53dafb9bf0 not found: ID does not exist" Mar 19 12:18:34.413650 master-0 kubenswrapper[31830]: I0319 12:18:34.413598 31830 patch_prober.go:28] interesting pod/console-5b87647974-5zv6r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.100:8443/health\": dial tcp 10.128.0.100:8443: connect: connection refused" start-of-body= Mar 19 12:18:34.413650 master-0 kubenswrapper[31830]: I0319 12:18:34.413637 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5b87647974-5zv6r" podUID="bc3b0ed8-8383-4d41-8b15-46cab419217f" containerName="console" probeResult="failure" output="Get \"https://10.128.0.100:8443/health\": dial tcp 10.128.0.100:8443: connect: connection refused" Mar 19 12:18:34.431857 master-0 kubenswrapper[31830]: I0319 12:18:34.431754 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-695474f69-bz8b7"] Mar 19 12:18:34.436418 master-0 kubenswrapper[31830]: I0319 12:18:34.436032 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-695474f69-bz8b7"] Mar 19 12:18:35.695092 master-0 kubenswrapper[31830]: I0319 12:18:35.694962 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6db87e99-89b9-4f97-b6ca-b236cc27b901" path="/var/lib/kubelet/pods/6db87e99-89b9-4f97-b6ca-b236cc27b901/volumes" Mar 19 12:18:36.892562 master-0 kubenswrapper[31830]: I0319 12:18:36.892509 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 19 12:18:36.893091 master-0 kubenswrapper[31830]: I0319 12:18:36.892706 31830 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-kube-apiserver/installer-5-master-0" podUID="fad82d08-9bed-4000-8ade-6540ae9572aa" containerName="installer" containerID="cri-o://f11e1ab8076e7e1b6a74649f713d7819aba94f674200fc45abba2d1059d6751b" gracePeriod=30 Mar 19 12:18:38.597853 master-0 kubenswrapper[31830]: I0319 12:18:38.597733 31830 patch_prober.go:28] interesting pod/console-575589487f-9nhq4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Mar 19 12:18:38.598589 master-0 kubenswrapper[31830]: I0319 12:18:38.597871 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-575589487f-9nhq4" podUID="0a1dfc0b-250d-465f-a075-f088f5725873" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Mar 19 12:18:39.120508 master-0 kubenswrapper[31830]: I0319 12:18:39.120414 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" podUID="e3f23f8a-0a1f-47e3-b40c-9503a88809f9" containerName="oauth-openshift" containerID="cri-o://03194f0146041465b10e82f31779a8b5c014fc551b793a1cf70c85b2c887a996" gracePeriod=15 Mar 19 12:18:39.434950 master-0 kubenswrapper[31830]: I0319 12:18:39.433894 31830 generic.go:334] "Generic (PLEG): container finished" podID="e3f23f8a-0a1f-47e3-b40c-9503a88809f9" containerID="03194f0146041465b10e82f31779a8b5c014fc551b793a1cf70c85b2c887a996" exitCode=0 Mar 19 12:18:39.434950 master-0 kubenswrapper[31830]: I0319 12:18:39.433943 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" event={"ID":"e3f23f8a-0a1f-47e3-b40c-9503a88809f9","Type":"ContainerDied","Data":"03194f0146041465b10e82f31779a8b5c014fc551b793a1cf70c85b2c887a996"} Mar 19 12:18:39.591821 master-0 kubenswrapper[31830]: I0319 12:18:39.591586 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:18:39.635669 master-0 kubenswrapper[31830]: I0319 12:18:39.635607 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-user-template-provider-selection\") pod \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " Mar 19 12:18:39.635669 master-0 kubenswrapper[31830]: I0319 12:18:39.635670 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-session\") pod \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " Mar 19 12:18:39.636464 master-0 kubenswrapper[31830]: I0319 12:18:39.635719 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-user-template-login\") pod \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " Mar 19 12:18:39.636464 master-0 kubenswrapper[31830]: I0319 12:18:39.635756 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-router-certs\") pod \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " Mar 19 12:18:39.636464 master-0 kubenswrapper[31830]: I0319 12:18:39.635853 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-service-ca\") pod \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " Mar 19 12:18:39.636464 master-0 kubenswrapper[31830]: I0319 12:18:39.635898 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-trusted-ca-bundle\") pod \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " Mar 19 12:18:39.636464 master-0 kubenswrapper[31830]: I0319 12:18:39.635936 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-audit-policies\") pod \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " Mar 19 12:18:39.636464 master-0 kubenswrapper[31830]: I0319 12:18:39.635970 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-cliconfig\") pod \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " Mar 19 12:18:39.636464 master-0 kubenswrapper[31830]: I0319 12:18:39.636009 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-user-template-error\") pod \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " Mar 19 12:18:39.636464 master-0 kubenswrapper[31830]: I0319 12:18:39.636075 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vl92\" (UniqueName: \"kubernetes.io/projected/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-kube-api-access-6vl92\") pod \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " Mar 19 12:18:39.636464 master-0 kubenswrapper[31830]: I0319 12:18:39.636120 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-ocp-branding-template\") pod \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " Mar 19 12:18:39.636464 master-0 kubenswrapper[31830]: I0319 12:18:39.636161 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-audit-dir\") pod \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " Mar 19 12:18:39.636464 master-0 kubenswrapper[31830]: I0319 12:18:39.636189 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-serving-cert\") pod \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\" (UID: \"e3f23f8a-0a1f-47e3-b40c-9503a88809f9\") " Mar 19 12:18:39.638264 master-0 kubenswrapper[31830]: I0319 12:18:39.638202 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "e3f23f8a-0a1f-47e3-b40c-9503a88809f9" (UID: "e3f23f8a-0a1f-47e3-b40c-9503a88809f9"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:18:39.639647 master-0 kubenswrapper[31830]: I0319 12:18:39.639607 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "e3f23f8a-0a1f-47e3-b40c-9503a88809f9" (UID: "e3f23f8a-0a1f-47e3-b40c-9503a88809f9"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:18:39.639719 master-0 kubenswrapper[31830]: I0319 12:18:39.639661 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e3f23f8a-0a1f-47e3-b40c-9503a88809f9" (UID: "e3f23f8a-0a1f-47e3-b40c-9503a88809f9"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:18:39.640160 master-0 kubenswrapper[31830]: I0319 12:18:39.640074 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "e3f23f8a-0a1f-47e3-b40c-9503a88809f9" (UID: "e3f23f8a-0a1f-47e3-b40c-9503a88809f9"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:18:39.641221 master-0 kubenswrapper[31830]: I0319 12:18:39.641181 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "e3f23f8a-0a1f-47e3-b40c-9503a88809f9" (UID: "e3f23f8a-0a1f-47e3-b40c-9503a88809f9"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:18:39.641293 master-0 kubenswrapper[31830]: I0319 12:18:39.641228 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "e3f23f8a-0a1f-47e3-b40c-9503a88809f9" (UID: "e3f23f8a-0a1f-47e3-b40c-9503a88809f9"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:18:39.641709 master-0 kubenswrapper[31830]: I0319 12:18:39.641667 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "e3f23f8a-0a1f-47e3-b40c-9503a88809f9" (UID: "e3f23f8a-0a1f-47e3-b40c-9503a88809f9"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:18:39.641769 master-0 kubenswrapper[31830]: I0319 12:18:39.641752 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "e3f23f8a-0a1f-47e3-b40c-9503a88809f9" (UID: "e3f23f8a-0a1f-47e3-b40c-9503a88809f9"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:18:39.642055 master-0 kubenswrapper[31830]: I0319 12:18:39.642016 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-kube-api-access-6vl92" (OuterVolumeSpecName: "kube-api-access-6vl92") pod "e3f23f8a-0a1f-47e3-b40c-9503a88809f9" (UID: "e3f23f8a-0a1f-47e3-b40c-9503a88809f9"). InnerVolumeSpecName "kube-api-access-6vl92". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:18:39.642102 master-0 kubenswrapper[31830]: I0319 12:18:39.642021 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "e3f23f8a-0a1f-47e3-b40c-9503a88809f9" (UID: "e3f23f8a-0a1f-47e3-b40c-9503a88809f9"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:18:39.642325 master-0 kubenswrapper[31830]: I0319 12:18:39.642288 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "e3f23f8a-0a1f-47e3-b40c-9503a88809f9" (UID: "e3f23f8a-0a1f-47e3-b40c-9503a88809f9"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:18:39.642521 master-0 kubenswrapper[31830]: I0319 12:18:39.642488 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-868d558fdf-npzgm"] Mar 19 12:18:39.642811 master-0 kubenswrapper[31830]: E0319 12:18:39.642776 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6db87e99-89b9-4f97-b6ca-b236cc27b901" containerName="console" Mar 19 12:18:39.642811 master-0 kubenswrapper[31830]: I0319 12:18:39.642792 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="6db87e99-89b9-4f97-b6ca-b236cc27b901" containerName="console" Mar 19 12:18:39.642897 master-0 kubenswrapper[31830]: E0319 12:18:39.642842 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f23f8a-0a1f-47e3-b40c-9503a88809f9" containerName="oauth-openshift" Mar 19 12:18:39.642897 master-0 kubenswrapper[31830]: I0319 12:18:39.642850 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f23f8a-0a1f-47e3-b40c-9503a88809f9" containerName="oauth-openshift" Mar 19 12:18:39.643009 master-0 kubenswrapper[31830]: I0319 12:18:39.642964 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="6db87e99-89b9-4f97-b6ca-b236cc27b901" containerName="console" Mar 19 12:18:39.643047 master-0 kubenswrapper[31830]: I0319 12:18:39.643015 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3f23f8a-0a1f-47e3-b40c-9503a88809f9" containerName="oauth-openshift" Mar 19 12:18:39.643487 master-0 kubenswrapper[31830]: I0319 12:18:39.643453 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.645435 master-0 kubenswrapper[31830]: I0319 12:18:39.645381 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "e3f23f8a-0a1f-47e3-b40c-9503a88809f9" (UID: "e3f23f8a-0a1f-47e3-b40c-9503a88809f9"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:18:39.652719 master-0 kubenswrapper[31830]: I0319 12:18:39.652663 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "e3f23f8a-0a1f-47e3-b40c-9503a88809f9" (UID: "e3f23f8a-0a1f-47e3-b40c-9503a88809f9"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:18:39.662863 master-0 kubenswrapper[31830]: I0319 12:18:39.662064 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-868d558fdf-npzgm"] Mar 19 12:18:39.737939 master-0 kubenswrapper[31830]: I0319 12:18:39.737908 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3bc9f2d2-5538-4448-842f-37acfc790ae0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.738225 master-0 kubenswrapper[31830]: I0319 12:18:39.737951 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3bc9f2d2-5538-4448-842f-37acfc790ae0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.738225 master-0 kubenswrapper[31830]: I0319 12:18:39.737993 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3bc9f2d2-5538-4448-842f-37acfc790ae0-audit-dir\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.738225 master-0 kubenswrapper[31830]: I0319 12:18:39.738156 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3bc9f2d2-5538-4448-842f-37acfc790ae0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.738415 master-0 kubenswrapper[31830]: I0319 12:18:39.738248 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3bc9f2d2-5538-4448-842f-37acfc790ae0-v4-0-config-user-template-login\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.738415 master-0 kubenswrapper[31830]: I0319 12:18:39.738333 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3bc9f2d2-5538-4448-842f-37acfc790ae0-audit-policies\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.738415 master-0 kubenswrapper[31830]: I0319 12:18:39.738393 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wck9\" (UniqueName: \"kubernetes.io/projected/3bc9f2d2-5538-4448-842f-37acfc790ae0-kube-api-access-9wck9\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.738546 master-0 
kubenswrapper[31830]: I0319 12:18:39.738460 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3bc9f2d2-5538-4448-842f-37acfc790ae0-v4-0-config-system-router-certs\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.738546 master-0 kubenswrapper[31830]: I0319 12:18:39.738496 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bc9f2d2-5538-4448-842f-37acfc790ae0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.738546 master-0 kubenswrapper[31830]: I0319 12:18:39.738546 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3bc9f2d2-5538-4448-842f-37acfc790ae0-v4-0-config-system-session\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.738762 master-0 kubenswrapper[31830]: I0319 12:18:39.738571 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3bc9f2d2-5538-4448-842f-37acfc790ae0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.738762 master-0 kubenswrapper[31830]: I0319 12:18:39.738608 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3bc9f2d2-5538-4448-842f-37acfc790ae0-v4-0-config-system-service-ca\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.738762 master-0 kubenswrapper[31830]: I0319 12:18:39.738626 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3bc9f2d2-5538-4448-842f-37acfc790ae0-v4-0-config-user-template-error\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.738762 master-0 kubenswrapper[31830]: I0319 12:18:39.738689 31830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\"" Mar 19 12:18:39.738762 master-0 kubenswrapper[31830]: I0319 12:18:39.738708 31830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Mar 19 12:18:39.738762 master-0 kubenswrapper[31830]: I0319 12:18:39.738752 
31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vl92\" (UniqueName: \"kubernetes.io/projected/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-kube-api-access-6vl92\") on node \"master-0\" DevicePath \"\"" Mar 19 12:18:39.738762 master-0 kubenswrapper[31830]: I0319 12:18:39.738769 31830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Mar 19 12:18:39.739050 master-0 kubenswrapper[31830]: I0319 12:18:39.738779 31830 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 12:18:39.739050 master-0 kubenswrapper[31830]: I0319 12:18:39.738789 31830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 12:18:39.739050 master-0 kubenswrapper[31830]: I0319 12:18:39.738841 31830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\"" Mar 19 12:18:39.739050 master-0 kubenswrapper[31830]: I0319 12:18:39.738853 31830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Mar 19 12:18:39.739050 master-0 kubenswrapper[31830]: I0319 12:18:39.738862 31830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\"" Mar 19 12:18:39.739050 master-0 kubenswrapper[31830]: I0319 12:18:39.738871 31830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Mar 19 12:18:39.739050 master-0 kubenswrapper[31830]: I0319 12:18:39.738881 31830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 19 12:18:39.739050 master-0 kubenswrapper[31830]: I0319 12:18:39.738891 31830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:18:39.739050 master-0 kubenswrapper[31830]: I0319 12:18:39.738899 31830 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e3f23f8a-0a1f-47e3-b40c-9503a88809f9-audit-policies\") on node \"master-0\" DevicePath \"\"" Mar 19 12:18:39.840614 master-0 kubenswrapper[31830]: I0319 12:18:39.840527 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3bc9f2d2-5538-4448-842f-37acfc790ae0-v4-0-config-system-service-ca\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.840614 master-0 kubenswrapper[31830]: I0319 12:18:39.840606 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3bc9f2d2-5538-4448-842f-37acfc790ae0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.840947 master-0 kubenswrapper[31830]: I0319 12:18:39.840646 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3bc9f2d2-5538-4448-842f-37acfc790ae0-v4-0-config-user-template-error\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.840947 master-0 kubenswrapper[31830]: I0319 12:18:39.840689 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3bc9f2d2-5538-4448-842f-37acfc790ae0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.840947 master-0 kubenswrapper[31830]: I0319 12:18:39.840735 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3bc9f2d2-5538-4448-842f-37acfc790ae0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.840947 master-0 kubenswrapper[31830]: I0319 12:18:39.840819 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3bc9f2d2-5538-4448-842f-37acfc790ae0-audit-dir\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.840947 master-0 kubenswrapper[31830]: I0319 12:18:39.840865 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3bc9f2d2-5538-4448-842f-37acfc790ae0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.840947 master-0 kubenswrapper[31830]: I0319 12:18:39.840919 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3bc9f2d2-5538-4448-842f-37acfc790ae0-v4-0-config-user-template-login\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.840947 master-0 kubenswrapper[31830]: 
I0319 12:18:39.840949 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3bc9f2d2-5538-4448-842f-37acfc790ae0-audit-policies\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.841276 master-0 kubenswrapper[31830]: I0319 12:18:39.841016 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wck9\" (UniqueName: \"kubernetes.io/projected/3bc9f2d2-5538-4448-842f-37acfc790ae0-kube-api-access-9wck9\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.841276 master-0 kubenswrapper[31830]: I0319 12:18:39.841081 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3bc9f2d2-5538-4448-842f-37acfc790ae0-v4-0-config-system-router-certs\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.841276 master-0 kubenswrapper[31830]: I0319 12:18:39.841132 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bc9f2d2-5538-4448-842f-37acfc790ae0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.841276 master-0 kubenswrapper[31830]: I0319 12:18:39.841168 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3bc9f2d2-5538-4448-842f-37acfc790ae0-v4-0-config-system-session\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.842180 master-0 kubenswrapper[31830]: I0319 12:18:39.842104 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3bc9f2d2-5538-4448-842f-37acfc790ae0-audit-dir\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.842477 master-0 kubenswrapper[31830]: I0319 12:18:39.842419 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3bc9f2d2-5538-4448-842f-37acfc790ae0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.843009 master-0 kubenswrapper[31830]: I0319 12:18:39.842955 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3bc9f2d2-5538-4448-842f-37acfc790ae0-v4-0-config-system-service-ca\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.843400 master-0 kubenswrapper[31830]: I0319 
12:18:39.843265 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bc9f2d2-5538-4448-842f-37acfc790ae0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.843890 master-0 kubenswrapper[31830]: I0319 12:18:39.843848 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3bc9f2d2-5538-4448-842f-37acfc790ae0-audit-policies\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.844721 master-0 kubenswrapper[31830]: I0319 12:18:39.844680 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3bc9f2d2-5538-4448-842f-37acfc790ae0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.845999 master-0 kubenswrapper[31830]: I0319 12:18:39.845961 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3bc9f2d2-5538-4448-842f-37acfc790ae0-v4-0-config-user-template-login\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.846210 master-0 kubenswrapper[31830]: I0319 12:18:39.846185 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3bc9f2d2-5538-4448-842f-37acfc790ae0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.846790 master-0 kubenswrapper[31830]: I0319 12:18:39.846739 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3bc9f2d2-5538-4448-842f-37acfc790ae0-v4-0-config-system-router-certs\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.846929 master-0 kubenswrapper[31830]: I0319 12:18:39.846882 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3bc9f2d2-5538-4448-842f-37acfc790ae0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.847873 master-0 kubenswrapper[31830]: I0319 12:18:39.847824 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3bc9f2d2-5538-4448-842f-37acfc790ae0-v4-0-config-user-template-error\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " 
pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.848058 master-0 kubenswrapper[31830]: I0319 12:18:39.848022 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3bc9f2d2-5538-4448-842f-37acfc790ae0-v4-0-config-system-session\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:39.864000 master-0 kubenswrapper[31830]: I0319 12:18:39.863957 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wck9\" (UniqueName: \"kubernetes.io/projected/3bc9f2d2-5538-4448-842f-37acfc790ae0-kube-api-access-9wck9\") pod \"oauth-openshift-868d558fdf-npzgm\" (UID: \"3bc9f2d2-5538-4448-842f-37acfc790ae0\") " pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:40.027617 master-0 kubenswrapper[31830]: I0319 12:18:40.027494 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:40.112920 master-0 kubenswrapper[31830]: I0319 12:18:40.107357 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Mar 19 12:18:40.112920 master-0 kubenswrapper[31830]: I0319 12:18:40.108942 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Mar 19 12:18:40.120503 master-0 kubenswrapper[31830]: I0319 12:18:40.118286 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Mar 19 12:18:40.144817 master-0 kubenswrapper[31830]: I0319 12:18:40.144741 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5136761d-3b51-4cf2-8689-88d0bfefd0b2-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"5136761d-3b51-4cf2-8689-88d0bfefd0b2\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 19 12:18:40.144817 master-0 kubenswrapper[31830]: I0319 12:18:40.144823 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5136761d-3b51-4cf2-8689-88d0bfefd0b2-kube-api-access\") pod \"installer-6-master-0\" (UID: \"5136761d-3b51-4cf2-8689-88d0bfefd0b2\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 19 12:18:40.145035 master-0 kubenswrapper[31830]: I0319 12:18:40.144900 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5136761d-3b51-4cf2-8689-88d0bfefd0b2-var-lock\") pod \"installer-6-master-0\" (UID: \"5136761d-3b51-4cf2-8689-88d0bfefd0b2\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 19 12:18:40.246277 master-0 kubenswrapper[31830]: I0319 12:18:40.246202 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5136761d-3b51-4cf2-8689-88d0bfefd0b2-kube-api-access\") pod \"installer-6-master-0\" (UID: \"5136761d-3b51-4cf2-8689-88d0bfefd0b2\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 19 12:18:40.248639 master-0 kubenswrapper[31830]: I0319 12:18:40.246561 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" 
(UniqueName: \"kubernetes.io/host-path/5136761d-3b51-4cf2-8689-88d0bfefd0b2-var-lock\") pod \"installer-6-master-0\" (UID: \"5136761d-3b51-4cf2-8689-88d0bfefd0b2\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 19 12:18:40.248639 master-0 kubenswrapper[31830]: I0319 12:18:40.246630 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5136761d-3b51-4cf2-8689-88d0bfefd0b2-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"5136761d-3b51-4cf2-8689-88d0bfefd0b2\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 19 12:18:40.248639 master-0 kubenswrapper[31830]: I0319 12:18:40.246691 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5136761d-3b51-4cf2-8689-88d0bfefd0b2-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"5136761d-3b51-4cf2-8689-88d0bfefd0b2\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 19 12:18:40.248639 master-0 kubenswrapper[31830]: I0319 12:18:40.246726 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5136761d-3b51-4cf2-8689-88d0bfefd0b2-var-lock\") pod \"installer-6-master-0\" (UID: \"5136761d-3b51-4cf2-8689-88d0bfefd0b2\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 19 12:18:40.262651 master-0 kubenswrapper[31830]: I0319 12:18:40.262603 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5136761d-3b51-4cf2-8689-88d0bfefd0b2-kube-api-access\") pod \"installer-6-master-0\" (UID: \"5136761d-3b51-4cf2-8689-88d0bfefd0b2\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 19 12:18:40.447109 master-0 kubenswrapper[31830]: I0319 12:18:40.446896 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" event={"ID":"e3f23f8a-0a1f-47e3-b40c-9503a88809f9","Type":"ContainerDied","Data":"04e8fd52ab7b08e929542c59ecc7a2b5d8f4db4474947829a16ae3c8c5f8b6fd"} Mar 19 12:18:40.447109 master-0 kubenswrapper[31830]: I0319 12:18:40.446964 31830 scope.go:117] "RemoveContainer" containerID="03194f0146041465b10e82f31779a8b5c014fc551b793a1cf70c85b2c887a996" Mar 19 12:18:40.447109 master-0 kubenswrapper[31830]: I0319 12:18:40.447019 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv" Mar 19 12:18:40.477745 master-0 kubenswrapper[31830]: I0319 12:18:40.477694 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Mar 19 12:18:40.491327 master-0 kubenswrapper[31830]: I0319 12:18:40.491258 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv"] Mar 19 12:18:40.501911 master-0 kubenswrapper[31830]: I0319 12:18:40.501832 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-f7b6b8b77-5dcqv"] Mar 19 12:18:40.512230 master-0 kubenswrapper[31830]: I0319 12:18:40.512179 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-868d558fdf-npzgm"] Mar 19 12:18:40.894745 master-0 kubenswrapper[31830]: I0319 12:18:40.894699 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Mar 19 12:18:40.897306 master-0 kubenswrapper[31830]: W0319 12:18:40.897268 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5136761d_3b51_4cf2_8689_88d0bfefd0b2.slice/crio-288fb28088d5954eb572e32989b87625c4cd224e8a20ff7b0b888d5972e06bc8 WatchSource:0}: Error finding container 288fb28088d5954eb572e32989b87625c4cd224e8a20ff7b0b888d5972e06bc8: Status 404 returned error can't find the container with id 288fb28088d5954eb572e32989b87625c4cd224e8a20ff7b0b888d5972e06bc8 Mar 19 12:18:41.455548 master-0 kubenswrapper[31830]: I0319 12:18:41.455404 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"5136761d-3b51-4cf2-8689-88d0bfefd0b2","Type":"ContainerStarted","Data":"b246bb9a28555bbdd0e4b4b104ba4e2ddb4462f8b7d4f97825fea6862561477d"} Mar 19 12:18:41.455548 master-0 kubenswrapper[31830]: I0319 12:18:41.455451 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"5136761d-3b51-4cf2-8689-88d0bfefd0b2","Type":"ContainerStarted","Data":"288fb28088d5954eb572e32989b87625c4cd224e8a20ff7b0b888d5972e06bc8"} Mar 19 12:18:41.457872 master-0 kubenswrapper[31830]: I0319 12:18:41.457767 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" event={"ID":"3bc9f2d2-5538-4448-842f-37acfc790ae0","Type":"ContainerStarted","Data":"4fc73461c17e9d6a04a1edff613569e72c6614bd61534b2a88e0b93f68bc6f38"} Mar 19 12:18:41.457872 master-0 kubenswrapper[31830]: I0319 12:18:41.457813 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" event={"ID":"3bc9f2d2-5538-4448-842f-37acfc790ae0","Type":"ContainerStarted","Data":"40c2e07be7cb043af025bd35e5db9abf64eb04f87c5e3c8db1a72906aea0cb61"} Mar 19 12:18:41.458486 master-0 kubenswrapper[31830]: I0319 12:18:41.458441 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:41.463434 master-0 kubenswrapper[31830]: I0319 12:18:41.463395 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" Mar 19 12:18:41.475535 master-0 kubenswrapper[31830]: I0319 12:18:41.475428 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-6-master-0" podStartSLOduration=1.475404391 podStartE2EDuration="1.475404391s" podCreationTimestamp="2026-03-19 12:18:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:18:41.473149391 +0000 UTC m=+260.022110115" watchObservedRunningTime="2026-03-19 12:18:41.475404391 +0000 UTC m=+260.024365105" Mar 19 12:18:41.537785 master-0 kubenswrapper[31830]: I0319 12:18:41.516325 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-868d558fdf-npzgm" podStartSLOduration=27.51630338 podStartE2EDuration="27.51630338s" podCreationTimestamp="2026-03-19 12:18:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:18:41.513086992 +0000 UTC m=+260.062047736" watchObservedRunningTime="2026-03-19 12:18:41.51630338 +0000 UTC m=+260.065264104" Mar 19 12:18:41.693969 master-0 kubenswrapper[31830]: I0319 12:18:41.693018 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3f23f8a-0a1f-47e3-b40c-9503a88809f9" path="/var/lib/kubelet/pods/e3f23f8a-0a1f-47e3-b40c-9503a88809f9/volumes" Mar 19 12:18:44.413612 master-0 kubenswrapper[31830]: I0319 12:18:44.413504 31830 patch_prober.go:28] interesting pod/console-5b87647974-5zv6r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.100:8443/health\": dial tcp 10.128.0.100:8443: connect: connection refused" start-of-body= Mar 19 12:18:44.413612 master-0 kubenswrapper[31830]: I0319 12:18:44.413577 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5b87647974-5zv6r" podUID="bc3b0ed8-8383-4d41-8b15-46cab419217f" containerName="console" probeResult="failure" output="Get \"https://10.128.0.100:8443/health\": dial tcp 10.128.0.100:8443: connect: connection refused" Mar 19 12:18:45.632381 master-0 kubenswrapper[31830]: I0319 12:18:45.632312 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Mar 19 12:18:45.661658 master-0 kubenswrapper[31830]: I0319 12:18:45.661608 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Mar 19 12:18:46.528453 master-0 kubenswrapper[31830]: I0319 12:18:46.528410 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Mar 19 12:18:48.598516 master-0 kubenswrapper[31830]: I0319 12:18:48.598429 31830 patch_prober.go:28] interesting pod/console-575589487f-9nhq4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Mar 19 12:18:48.599548 master-0 kubenswrapper[31830]: I0319 12:18:48.598528 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-575589487f-9nhq4" podUID="0a1dfc0b-250d-465f-a075-f088f5725873" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Mar 19 12:18:50.543357 master-0 kubenswrapper[31830]: I0319 12:18:50.543255 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_fad82d08-9bed-4000-8ade-6540ae9572aa/installer/0.log" Mar 19 12:18:50.543357 master-0 kubenswrapper[31830]: I0319 12:18:50.543309 31830 generic.go:334] "Generic (PLEG): container finished" podID="fad82d08-9bed-4000-8ade-6540ae9572aa" 
containerID="f11e1ab8076e7e1b6a74649f713d7819aba94f674200fc45abba2d1059d6751b" exitCode=1 Mar 19 12:18:50.543357 master-0 kubenswrapper[31830]: I0319 12:18:50.543343 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"fad82d08-9bed-4000-8ade-6540ae9572aa","Type":"ContainerDied","Data":"f11e1ab8076e7e1b6a74649f713d7819aba94f674200fc45abba2d1059d6751b"} Mar 19 12:18:50.690503 master-0 kubenswrapper[31830]: I0319 12:18:50.690437 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_fad82d08-9bed-4000-8ade-6540ae9572aa/installer/0.log" Mar 19 12:18:50.690730 master-0 kubenswrapper[31830]: I0319 12:18:50.690532 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 19 12:18:50.737051 master-0 kubenswrapper[31830]: I0319 12:18:50.731906 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fad82d08-9bed-4000-8ade-6540ae9572aa-var-lock\") pod \"fad82d08-9bed-4000-8ade-6540ae9572aa\" (UID: \"fad82d08-9bed-4000-8ade-6540ae9572aa\") " Mar 19 12:18:50.737051 master-0 kubenswrapper[31830]: I0319 12:18:50.732030 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fad82d08-9bed-4000-8ade-6540ae9572aa-kube-api-access\") pod \"fad82d08-9bed-4000-8ade-6540ae9572aa\" (UID: \"fad82d08-9bed-4000-8ade-6540ae9572aa\") " Mar 19 12:18:50.737051 master-0 kubenswrapper[31830]: I0319 12:18:50.732051 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fad82d08-9bed-4000-8ade-6540ae9572aa-kubelet-dir\") pod \"fad82d08-9bed-4000-8ade-6540ae9572aa\" (UID: \"fad82d08-9bed-4000-8ade-6540ae9572aa\") " Mar 19 12:18:50.737051 master-0 kubenswrapper[31830]: I0319 12:18:50.732573 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad82d08-9bed-4000-8ade-6540ae9572aa-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fad82d08-9bed-4000-8ade-6540ae9572aa" (UID: "fad82d08-9bed-4000-8ade-6540ae9572aa"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:18:50.737051 master-0 kubenswrapper[31830]: I0319 12:18:50.732615 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad82d08-9bed-4000-8ade-6540ae9572aa-var-lock" (OuterVolumeSpecName: "var-lock") pod "fad82d08-9bed-4000-8ade-6540ae9572aa" (UID: "fad82d08-9bed-4000-8ade-6540ae9572aa"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:18:50.737569 master-0 kubenswrapper[31830]: I0319 12:18:50.737115 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fad82d08-9bed-4000-8ade-6540ae9572aa-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fad82d08-9bed-4000-8ade-6540ae9572aa" (UID: "fad82d08-9bed-4000-8ade-6540ae9572aa"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:18:50.834030 master-0 kubenswrapper[31830]: I0319 12:18:50.833818 31830 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fad82d08-9bed-4000-8ade-6540ae9572aa-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 19 12:18:50.834030 master-0 kubenswrapper[31830]: I0319 12:18:50.833863 31830 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fad82d08-9bed-4000-8ade-6540ae9572aa-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 12:18:50.834030 master-0 kubenswrapper[31830]: I0319 12:18:50.833876 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fad82d08-9bed-4000-8ade-6540ae9572aa-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 19 12:18:51.554426 master-0 kubenswrapper[31830]: I0319 12:18:51.554364 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_fad82d08-9bed-4000-8ade-6540ae9572aa/installer/0.log" Mar 19 12:18:51.555458 master-0 kubenswrapper[31830]: I0319 12:18:51.554459 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"fad82d08-9bed-4000-8ade-6540ae9572aa","Type":"ContainerDied","Data":"5941e3cbe5b654a3d2631f8cbae275d2b61416a9d4db7d86cc05bd1921738a6a"} Mar 19 12:18:51.555458 master-0 kubenswrapper[31830]: I0319 12:18:51.554521 31830 scope.go:117] "RemoveContainer" containerID="f11e1ab8076e7e1b6a74649f713d7819aba94f674200fc45abba2d1059d6751b" Mar 19 12:18:51.555458 master-0 kubenswrapper[31830]: I0319 12:18:51.554578 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 19 12:18:51.621630 master-0 kubenswrapper[31830]: I0319 12:18:51.621578 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 19 12:18:51.628773 master-0 kubenswrapper[31830]: I0319 12:18:51.628720 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 19 12:18:51.686917 master-0 kubenswrapper[31830]: I0319 12:18:51.686853 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fad82d08-9bed-4000-8ade-6540ae9572aa" path="/var/lib/kubelet/pods/fad82d08-9bed-4000-8ade-6540ae9572aa/volumes" Mar 19 12:18:54.412910 master-0 kubenswrapper[31830]: I0319 12:18:54.412868 31830 patch_prober.go:28] interesting pod/console-5b87647974-5zv6r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.100:8443/health\": dial tcp 10.128.0.100:8443: connect: connection refused" start-of-body= Mar 19 12:18:54.413495 master-0 kubenswrapper[31830]: I0319 12:18:54.413464 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5b87647974-5zv6r" podUID="bc3b0ed8-8383-4d41-8b15-46cab419217f" containerName="console" probeResult="failure" output="Get \"https://10.128.0.100:8443/health\": dial tcp 10.128.0.100:8443: connect: connection refused" Mar 19 12:18:56.888318 master-0 kubenswrapper[31830]: I0319 12:18:56.888265 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5b87647974-5zv6r"] Mar 19 12:18:56.928458 master-0 kubenswrapper[31830]: I0319 12:18:56.928384 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-b5f5fdd67-r4lxc"] Mar 19 12:18:56.928729 master-0 kubenswrapper[31830]: E0319 12:18:56.928710 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad82d08-9bed-4000-8ade-6540ae9572aa" containerName="installer" Mar 19 12:18:56.928729 master-0 kubenswrapper[31830]: I0319 12:18:56.928728 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad82d08-9bed-4000-8ade-6540ae9572aa" containerName="installer" Mar 19 12:18:56.928926 master-0 kubenswrapper[31830]: I0319 12:18:56.928908 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad82d08-9bed-4000-8ade-6540ae9572aa" containerName="installer" Mar 19 12:18:56.929381 master-0 kubenswrapper[31830]: I0319 12:18:56.929359 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-b5f5fdd67-r4lxc" Mar 19 12:18:56.946792 master-0 kubenswrapper[31830]: I0319 12:18:56.946747 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-b5f5fdd67-r4lxc"] Mar 19 12:18:57.025014 master-0 kubenswrapper[31830]: I0319 12:18:57.024729 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8f224cab-e321-4e24-83bc-f99242f971b0-console-oauth-config\") pod \"console-b5f5fdd67-r4lxc\" (UID: \"8f224cab-e321-4e24-83bc-f99242f971b0\") " pod="openshift-console/console-b5f5fdd67-r4lxc" Mar 19 12:18:57.025014 master-0 kubenswrapper[31830]: I0319 12:18:57.024790 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8f224cab-e321-4e24-83bc-f99242f971b0-console-config\") pod \"console-b5f5fdd67-r4lxc\" (UID: \"8f224cab-e321-4e24-83bc-f99242f971b0\") " pod="openshift-console/console-b5f5fdd67-r4lxc" Mar 19 12:18:57.025014 master-0 kubenswrapper[31830]: I0319 12:18:57.024866 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f224cab-e321-4e24-83bc-f99242f971b0-console-serving-cert\") pod \"console-b5f5fdd67-r4lxc\" (UID: \"8f224cab-e321-4e24-83bc-f99242f971b0\") " pod="openshift-console/console-b5f5fdd67-r4lxc" Mar 19 12:18:57.025014 master-0 kubenswrapper[31830]: I0319 12:18:57.024922 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8f224cab-e321-4e24-83bc-f99242f971b0-service-ca\") pod \"console-b5f5fdd67-r4lxc\" (UID: \"8f224cab-e321-4e24-83bc-f99242f971b0\") " pod="openshift-console/console-b5f5fdd67-r4lxc" Mar 19 12:18:57.025014 master-0 kubenswrapper[31830]: I0319 12:18:57.024955 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8f224cab-e321-4e24-83bc-f99242f971b0-oauth-serving-cert\") pod \"console-b5f5fdd67-r4lxc\" (UID: \"8f224cab-e321-4e24-83bc-f99242f971b0\") " pod="openshift-console/console-b5f5fdd67-r4lxc" Mar 19 12:18:57.025014 master-0 kubenswrapper[31830]: I0319 12:18:57.024991 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f224cab-e321-4e24-83bc-f99242f971b0-trusted-ca-bundle\") pod \"console-b5f5fdd67-r4lxc\" (UID: \"8f224cab-e321-4e24-83bc-f99242f971b0\") " pod="openshift-console/console-b5f5fdd67-r4lxc" Mar 19 12:18:57.025014 master-0 kubenswrapper[31830]: I0319 12:18:57.025021 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsk9k\" (UniqueName: \"kubernetes.io/projected/8f224cab-e321-4e24-83bc-f99242f971b0-kube-api-access-rsk9k\") pod \"console-b5f5fdd67-r4lxc\" (UID: \"8f224cab-e321-4e24-83bc-f99242f971b0\") " pod="openshift-console/console-b5f5fdd67-r4lxc" Mar 19 12:18:57.126673 master-0 kubenswrapper[31830]: I0319 12:18:57.126564 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8f224cab-e321-4e24-83bc-f99242f971b0-console-oauth-config\") pod \"console-b5f5fdd67-r4lxc\" (UID: 
\"8f224cab-e321-4e24-83bc-f99242f971b0\") " pod="openshift-console/console-b5f5fdd67-r4lxc" Mar 19 12:18:57.126673 master-0 kubenswrapper[31830]: I0319 12:18:57.126626 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8f224cab-e321-4e24-83bc-f99242f971b0-console-config\") pod \"console-b5f5fdd67-r4lxc\" (UID: \"8f224cab-e321-4e24-83bc-f99242f971b0\") " pod="openshift-console/console-b5f5fdd67-r4lxc" Mar 19 12:18:57.127103 master-0 kubenswrapper[31830]: I0319 12:18:57.126844 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f224cab-e321-4e24-83bc-f99242f971b0-console-serving-cert\") pod \"console-b5f5fdd67-r4lxc\" (UID: \"8f224cab-e321-4e24-83bc-f99242f971b0\") " pod="openshift-console/console-b5f5fdd67-r4lxc" Mar 19 12:18:57.127103 master-0 kubenswrapper[31830]: I0319 12:18:57.126953 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8f224cab-e321-4e24-83bc-f99242f971b0-service-ca\") pod \"console-b5f5fdd67-r4lxc\" (UID: \"8f224cab-e321-4e24-83bc-f99242f971b0\") " pod="openshift-console/console-b5f5fdd67-r4lxc" Mar 19 12:18:57.127103 master-0 kubenswrapper[31830]: I0319 12:18:57.126986 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8f224cab-e321-4e24-83bc-f99242f971b0-oauth-serving-cert\") pod \"console-b5f5fdd67-r4lxc\" (UID: \"8f224cab-e321-4e24-83bc-f99242f971b0\") " pod="openshift-console/console-b5f5fdd67-r4lxc" Mar 19 12:18:57.127103 master-0 kubenswrapper[31830]: I0319 12:18:57.127026 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f224cab-e321-4e24-83bc-f99242f971b0-trusted-ca-bundle\") pod \"console-b5f5fdd67-r4lxc\" (UID: \"8f224cab-e321-4e24-83bc-f99242f971b0\") " pod="openshift-console/console-b5f5fdd67-r4lxc" Mar 19 12:18:57.127103 master-0 kubenswrapper[31830]: I0319 12:18:57.127080 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsk9k\" (UniqueName: \"kubernetes.io/projected/8f224cab-e321-4e24-83bc-f99242f971b0-kube-api-access-rsk9k\") pod \"console-b5f5fdd67-r4lxc\" (UID: \"8f224cab-e321-4e24-83bc-f99242f971b0\") " pod="openshift-console/console-b5f5fdd67-r4lxc" Mar 19 12:18:57.127656 master-0 kubenswrapper[31830]: I0319 12:18:57.127611 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8f224cab-e321-4e24-83bc-f99242f971b0-console-config\") pod \"console-b5f5fdd67-r4lxc\" (UID: \"8f224cab-e321-4e24-83bc-f99242f971b0\") " pod="openshift-console/console-b5f5fdd67-r4lxc" Mar 19 12:18:57.128175 master-0 kubenswrapper[31830]: I0319 12:18:57.128146 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f224cab-e321-4e24-83bc-f99242f971b0-trusted-ca-bundle\") pod \"console-b5f5fdd67-r4lxc\" (UID: \"8f224cab-e321-4e24-83bc-f99242f971b0\") " pod="openshift-console/console-b5f5fdd67-r4lxc" Mar 19 12:18:57.128175 master-0 kubenswrapper[31830]: I0319 12:18:57.128145 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/8f224cab-e321-4e24-83bc-f99242f971b0-service-ca\") pod \"console-b5f5fdd67-r4lxc\" (UID: \"8f224cab-e321-4e24-83bc-f99242f971b0\") " pod="openshift-console/console-b5f5fdd67-r4lxc" Mar 19 12:18:57.129833 master-0 kubenswrapper[31830]: I0319 12:18:57.129761 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f224cab-e321-4e24-83bc-f99242f971b0-console-serving-cert\") pod \"console-b5f5fdd67-r4lxc\" (UID: \"8f224cab-e321-4e24-83bc-f99242f971b0\") " pod="openshift-console/console-b5f5fdd67-r4lxc" Mar 19 12:18:57.130103 master-0 kubenswrapper[31830]: I0319 12:18:57.130064 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8f224cab-e321-4e24-83bc-f99242f971b0-console-oauth-config\") pod \"console-b5f5fdd67-r4lxc\" (UID: \"8f224cab-e321-4e24-83bc-f99242f971b0\") " pod="openshift-console/console-b5f5fdd67-r4lxc" Mar 19 12:18:57.133472 master-0 kubenswrapper[31830]: I0319 12:18:57.133356 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8f224cab-e321-4e24-83bc-f99242f971b0-oauth-serving-cert\") pod \"console-b5f5fdd67-r4lxc\" (UID: \"8f224cab-e321-4e24-83bc-f99242f971b0\") " pod="openshift-console/console-b5f5fdd67-r4lxc" Mar 19 12:18:57.144149 master-0 kubenswrapper[31830]: I0319 12:18:57.144017 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsk9k\" (UniqueName: \"kubernetes.io/projected/8f224cab-e321-4e24-83bc-f99242f971b0-kube-api-access-rsk9k\") pod \"console-b5f5fdd67-r4lxc\" (UID: \"8f224cab-e321-4e24-83bc-f99242f971b0\") " pod="openshift-console/console-b5f5fdd67-r4lxc" Mar 19 12:18:57.253192 master-0 kubenswrapper[31830]: I0319 12:18:57.252904 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-b5f5fdd67-r4lxc" Mar 19 12:18:57.675016 master-0 kubenswrapper[31830]: I0319 12:18:57.674962 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-b5f5fdd67-r4lxc"] Mar 19 12:18:58.598984 master-0 kubenswrapper[31830]: I0319 12:18:58.598933 31830 patch_prober.go:28] interesting pod/console-575589487f-9nhq4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Mar 19 12:18:58.599441 master-0 kubenswrapper[31830]: I0319 12:18:58.599004 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-575589487f-9nhq4" podUID="0a1dfc0b-250d-465f-a075-f088f5725873" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Mar 19 12:18:58.618005 master-0 kubenswrapper[31830]: I0319 12:18:58.617934 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-b5f5fdd67-r4lxc" event={"ID":"8f224cab-e321-4e24-83bc-f99242f971b0","Type":"ContainerStarted","Data":"6265d7dbb0a319c10109ed5dc5151dfca4a590c22cd594631e9826123ab8e603"} Mar 19 12:18:58.618005 master-0 kubenswrapper[31830]: I0319 12:18:58.617992 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-b5f5fdd67-r4lxc" event={"ID":"8f224cab-e321-4e24-83bc-f99242f971b0","Type":"ContainerStarted","Data":"1944f1b02e25c74d5e13b760781b624f047cb27669e749bab0f7f7f79cb67d59"} Mar 19 12:18:58.641384 master-0 kubenswrapper[31830]: I0319 12:18:58.641316 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-b5f5fdd67-r4lxc" podStartSLOduration=2.64129378 podStartE2EDuration="2.64129378s" podCreationTimestamp="2026-03-19 12:18:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:18:58.640860787 +0000 UTC m=+277.189821491" watchObservedRunningTime="2026-03-19 12:18:58.64129378 +0000 UTC m=+277.190254494" Mar 19 12:19:07.253307 master-0 kubenswrapper[31830]: I0319 12:19:07.253236 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-b5f5fdd67-r4lxc" Mar 19 12:19:07.253307 master-0 kubenswrapper[31830]: I0319 12:19:07.253307 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-b5f5fdd67-r4lxc" Mar 19 12:19:07.256269 master-0 kubenswrapper[31830]: I0319 12:19:07.256188 31830 patch_prober.go:28] interesting pod/console-b5f5fdd67-r4lxc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" start-of-body= Mar 19 12:19:07.256379 master-0 kubenswrapper[31830]: I0319 12:19:07.256265 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b5f5fdd67-r4lxc" podUID="8f224cab-e321-4e24-83bc-f99242f971b0" containerName="console" probeResult="failure" output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" Mar 19 12:19:08.598148 master-0 kubenswrapper[31830]: I0319 12:19:08.598076 31830 patch_prober.go:28] interesting pod/console-575589487f-9nhq4 container/console namespace/openshift-console: Startup probe status=failure 
output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Mar 19 12:19:08.598684 master-0 kubenswrapper[31830]: I0319 12:19:08.598146 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-575589487f-9nhq4" podUID="0a1dfc0b-250d-465f-a075-f088f5725873" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Mar 19 12:19:17.254242 master-0 kubenswrapper[31830]: I0319 12:19:17.254110 31830 patch_prober.go:28] interesting pod/console-b5f5fdd67-r4lxc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" start-of-body= Mar 19 12:19:17.254925 master-0 kubenswrapper[31830]: I0319 12:19:17.254196 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b5f5fdd67-r4lxc" podUID="8f224cab-e321-4e24-83bc-f99242f971b0" containerName="console" probeResult="failure" output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" Mar 19 12:19:18.598242 master-0 kubenswrapper[31830]: I0319 12:19:18.598163 31830 patch_prober.go:28] interesting pod/console-575589487f-9nhq4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Mar 19 12:19:18.598242 master-0 kubenswrapper[31830]: I0319 12:19:18.598238 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-575589487f-9nhq4" podUID="0a1dfc0b-250d-465f-a075-f088f5725873" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Mar 19 12:19:21.691983 master-0 kubenswrapper[31830]: I0319 12:19:21.691916 31830 kubelet.go:1505] "Image garbage collection succeeded" Mar 19 12:19:21.927090 master-0 kubenswrapper[31830]: I0319 12:19:21.927001 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5b87647974-5zv6r" podUID="bc3b0ed8-8383-4d41-8b15-46cab419217f" containerName="console" containerID="cri-o://90a33faa50134547d6c60b38c98a8611b25b4954f2680aab415a68b40923d1e6" gracePeriod=15 Mar 19 12:19:22.356989 master-0 kubenswrapper[31830]: I0319 12:19:22.356957 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5b87647974-5zv6r_bc3b0ed8-8383-4d41-8b15-46cab419217f/console/0.log" Mar 19 12:19:22.357182 master-0 kubenswrapper[31830]: I0319 12:19:22.357021 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5b87647974-5zv6r" Mar 19 12:19:22.521673 master-0 kubenswrapper[31830]: I0319 12:19:22.521613 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc3b0ed8-8383-4d41-8b15-46cab419217f-trusted-ca-bundle\") pod \"bc3b0ed8-8383-4d41-8b15-46cab419217f\" (UID: \"bc3b0ed8-8383-4d41-8b15-46cab419217f\") " Mar 19 12:19:22.521944 master-0 kubenswrapper[31830]: I0319 12:19:22.521742 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vs4f6\" (UniqueName: \"kubernetes.io/projected/bc3b0ed8-8383-4d41-8b15-46cab419217f-kube-api-access-vs4f6\") pod \"bc3b0ed8-8383-4d41-8b15-46cab419217f\" (UID: \"bc3b0ed8-8383-4d41-8b15-46cab419217f\") " Mar 19 12:19:22.521944 master-0 kubenswrapper[31830]: I0319 12:19:22.521844 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bc3b0ed8-8383-4d41-8b15-46cab419217f-console-oauth-config\") pod \"bc3b0ed8-8383-4d41-8b15-46cab419217f\" (UID: \"bc3b0ed8-8383-4d41-8b15-46cab419217f\") " Mar 19 12:19:22.521944 master-0 kubenswrapper[31830]: I0319 12:19:22.521904 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bc3b0ed8-8383-4d41-8b15-46cab419217f-console-config\") pod \"bc3b0ed8-8383-4d41-8b15-46cab419217f\" (UID: \"bc3b0ed8-8383-4d41-8b15-46cab419217f\") " Mar 19 12:19:22.522048 master-0 kubenswrapper[31830]: I0319 12:19:22.521990 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bc3b0ed8-8383-4d41-8b15-46cab419217f-oauth-serving-cert\") pod \"bc3b0ed8-8383-4d41-8b15-46cab419217f\" (UID: \"bc3b0ed8-8383-4d41-8b15-46cab419217f\") " Mar 19 12:19:22.522048 master-0 kubenswrapper[31830]: I0319 12:19:22.522031 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bc3b0ed8-8383-4d41-8b15-46cab419217f-service-ca\") pod \"bc3b0ed8-8383-4d41-8b15-46cab419217f\" (UID: \"bc3b0ed8-8383-4d41-8b15-46cab419217f\") " Mar 19 12:19:22.522115 master-0 kubenswrapper[31830]: I0319 12:19:22.522073 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bc3b0ed8-8383-4d41-8b15-46cab419217f-console-serving-cert\") pod \"bc3b0ed8-8383-4d41-8b15-46cab419217f\" (UID: \"bc3b0ed8-8383-4d41-8b15-46cab419217f\") " Mar 19 12:19:22.523781 master-0 kubenswrapper[31830]: I0319 12:19:22.523731 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc3b0ed8-8383-4d41-8b15-46cab419217f-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "bc3b0ed8-8383-4d41-8b15-46cab419217f" (UID: "bc3b0ed8-8383-4d41-8b15-46cab419217f"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:19:22.523781 master-0 kubenswrapper[31830]: I0319 12:19:22.523761 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc3b0ed8-8383-4d41-8b15-46cab419217f-console-config" (OuterVolumeSpecName: "console-config") pod "bc3b0ed8-8383-4d41-8b15-46cab419217f" (UID: "bc3b0ed8-8383-4d41-8b15-46cab419217f"). 
InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:19:22.523910 master-0 kubenswrapper[31830]: I0319 12:19:22.523885 31830 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bc3b0ed8-8383-4d41-8b15-46cab419217f-console-config\") on node \"master-0\" DevicePath \"\"" Mar 19 12:19:22.523940 master-0 kubenswrapper[31830]: I0319 12:19:22.523908 31830 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bc3b0ed8-8383-4d41-8b15-46cab419217f-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 12:19:22.524012 master-0 kubenswrapper[31830]: I0319 12:19:22.523929 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc3b0ed8-8383-4d41-8b15-46cab419217f-service-ca" (OuterVolumeSpecName: "service-ca") pod "bc3b0ed8-8383-4d41-8b15-46cab419217f" (UID: "bc3b0ed8-8383-4d41-8b15-46cab419217f"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:19:22.526172 master-0 kubenswrapper[31830]: I0319 12:19:22.526137 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc3b0ed8-8383-4d41-8b15-46cab419217f-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "bc3b0ed8-8383-4d41-8b15-46cab419217f" (UID: "bc3b0ed8-8383-4d41-8b15-46cab419217f"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:19:22.526172 master-0 kubenswrapper[31830]: I0319 12:19:22.526159 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc3b0ed8-8383-4d41-8b15-46cab419217f-kube-api-access-vs4f6" (OuterVolumeSpecName: "kube-api-access-vs4f6") pod "bc3b0ed8-8383-4d41-8b15-46cab419217f" (UID: "bc3b0ed8-8383-4d41-8b15-46cab419217f"). InnerVolumeSpecName "kube-api-access-vs4f6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:19:22.526949 master-0 kubenswrapper[31830]: I0319 12:19:22.526838 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc3b0ed8-8383-4d41-8b15-46cab419217f-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "bc3b0ed8-8383-4d41-8b15-46cab419217f" (UID: "bc3b0ed8-8383-4d41-8b15-46cab419217f"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:19:22.526949 master-0 kubenswrapper[31830]: I0319 12:19:22.526884 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc3b0ed8-8383-4d41-8b15-46cab419217f-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "bc3b0ed8-8383-4d41-8b15-46cab419217f" (UID: "bc3b0ed8-8383-4d41-8b15-46cab419217f"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:19:22.625161 master-0 kubenswrapper[31830]: I0319 12:19:22.625024 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vs4f6\" (UniqueName: \"kubernetes.io/projected/bc3b0ed8-8383-4d41-8b15-46cab419217f-kube-api-access-vs4f6\") on node \"master-0\" DevicePath \"\"" Mar 19 12:19:22.625161 master-0 kubenswrapper[31830]: I0319 12:19:22.625071 31830 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bc3b0ed8-8383-4d41-8b15-46cab419217f-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 19 12:19:22.625161 master-0 kubenswrapper[31830]: I0319 12:19:22.625081 31830 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bc3b0ed8-8383-4d41-8b15-46cab419217f-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 19 12:19:22.625161 master-0 kubenswrapper[31830]: I0319 12:19:22.625090 31830 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bc3b0ed8-8383-4d41-8b15-46cab419217f-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 12:19:22.625161 master-0 kubenswrapper[31830]: I0319 12:19:22.625098 31830 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc3b0ed8-8383-4d41-8b15-46cab419217f-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:19:22.822697 master-0 kubenswrapper[31830]: I0319 12:19:22.822666 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5b87647974-5zv6r_bc3b0ed8-8383-4d41-8b15-46cab419217f/console/0.log" Mar 19 12:19:22.824506 master-0 kubenswrapper[31830]: I0319 12:19:22.824470 31830 generic.go:334] "Generic (PLEG): container finished" podID="bc3b0ed8-8383-4d41-8b15-46cab419217f" containerID="90a33faa50134547d6c60b38c98a8611b25b4954f2680aab415a68b40923d1e6" exitCode=2 Mar 19 12:19:22.824627 master-0 kubenswrapper[31830]: I0319 12:19:22.824608 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b87647974-5zv6r" event={"ID":"bc3b0ed8-8383-4d41-8b15-46cab419217f","Type":"ContainerDied","Data":"90a33faa50134547d6c60b38c98a8611b25b4954f2680aab415a68b40923d1e6"} Mar 19 12:19:22.824710 master-0 kubenswrapper[31830]: I0319 12:19:22.824697 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b87647974-5zv6r" event={"ID":"bc3b0ed8-8383-4d41-8b15-46cab419217f","Type":"ContainerDied","Data":"49ecaf020e7a505f51e846b428f11956754baa868ec994dbb1e60324401eb98f"} Mar 19 12:19:22.824784 master-0 kubenswrapper[31830]: I0319 12:19:22.824773 31830 scope.go:117] "RemoveContainer" containerID="90a33faa50134547d6c60b38c98a8611b25b4954f2680aab415a68b40923d1e6" Mar 19 12:19:22.825130 master-0 kubenswrapper[31830]: I0319 12:19:22.825087 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5b87647974-5zv6r" Mar 19 12:19:22.849589 master-0 kubenswrapper[31830]: I0319 12:19:22.849534 31830 scope.go:117] "RemoveContainer" containerID="90a33faa50134547d6c60b38c98a8611b25b4954f2680aab415a68b40923d1e6" Mar 19 12:19:22.850375 master-0 kubenswrapper[31830]: E0319 12:19:22.850342 31830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90a33faa50134547d6c60b38c98a8611b25b4954f2680aab415a68b40923d1e6\": container with ID starting with 90a33faa50134547d6c60b38c98a8611b25b4954f2680aab415a68b40923d1e6 not found: ID does not exist" containerID="90a33faa50134547d6c60b38c98a8611b25b4954f2680aab415a68b40923d1e6" Mar 19 12:19:22.850481 master-0 kubenswrapper[31830]: I0319 12:19:22.850456 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90a33faa50134547d6c60b38c98a8611b25b4954f2680aab415a68b40923d1e6"} err="failed to get container status \"90a33faa50134547d6c60b38c98a8611b25b4954f2680aab415a68b40923d1e6\": rpc error: code = NotFound desc = could not find container \"90a33faa50134547d6c60b38c98a8611b25b4954f2680aab415a68b40923d1e6\": container with ID starting with 90a33faa50134547d6c60b38c98a8611b25b4954f2680aab415a68b40923d1e6 not found: ID does not exist" Mar 19 12:19:22.875036 master-0 kubenswrapper[31830]: I0319 12:19:22.874994 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5b87647974-5zv6r"] Mar 19 12:19:22.890268 master-0 kubenswrapper[31830]: I0319 12:19:22.890137 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5b87647974-5zv6r"] Mar 19 12:19:23.685596 master-0 kubenswrapper[31830]: I0319 12:19:23.685559 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc3b0ed8-8383-4d41-8b15-46cab419217f" path="/var/lib/kubelet/pods/bc3b0ed8-8383-4d41-8b15-46cab419217f/volumes" Mar 19 12:19:27.253856 master-0 kubenswrapper[31830]: I0319 12:19:27.253703 31830 patch_prober.go:28] interesting pod/console-b5f5fdd67-r4lxc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" start-of-body= Mar 19 12:19:27.253856 master-0 kubenswrapper[31830]: I0319 12:19:27.253820 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b5f5fdd67-r4lxc" podUID="8f224cab-e321-4e24-83bc-f99242f971b0" containerName="console" probeResult="failure" output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" Mar 19 12:19:28.598394 master-0 kubenswrapper[31830]: I0319 12:19:28.598295 31830 patch_prober.go:28] interesting pod/console-575589487f-9nhq4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Mar 19 12:19:28.598394 master-0 kubenswrapper[31830]: I0319 12:19:28.598380 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-575589487f-9nhq4" podUID="0a1dfc0b-250d-465f-a075-f088f5725873" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Mar 19 12:19:28.872352 master-0 kubenswrapper[31830]: I0319 12:19:28.872230 31830 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 19 12:19:28.872933 master-0 kubenswrapper[31830]: E0319 12:19:28.872905 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc3b0ed8-8383-4d41-8b15-46cab419217f" containerName="console" Mar 19 12:19:28.872933 master-0 kubenswrapper[31830]: I0319 12:19:28.872925 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc3b0ed8-8383-4d41-8b15-46cab419217f" containerName="console" Mar 19 12:19:28.873135 master-0 kubenswrapper[31830]: I0319 12:19:28.873106 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc3b0ed8-8383-4d41-8b15-46cab419217f" containerName="console" Mar 19 12:19:28.874409 master-0 kubenswrapper[31830]: I0319 12:19:28.874379 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:19:28.916956 master-0 kubenswrapper[31830]: I0319 12:19:28.916820 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:19:28.917210 master-0 kubenswrapper[31830]: I0319 12:19:28.916964 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:19:28.917210 master-0 kubenswrapper[31830]: I0319 12:19:28.917034 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:19:28.917210 master-0 kubenswrapper[31830]: I0319 12:19:28.917098 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:19:28.917210 master-0 kubenswrapper[31830]: I0319 12:19:28.917160 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:19:28.938395 master-0 kubenswrapper[31830]: I0319 12:19:28.938291 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 19 12:19:28.941507 master-0 kubenswrapper[31830]: I0319 12:19:28.938498 31830 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 19 12:19:28.941507 
master-0 kubenswrapper[31830]: I0319 12:19:28.938756 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver" containerID="cri-o://b2118998c1da7782e4346f69f0c3869f4e1ae1ec7cb5bc289a89a0d7d90426d9" gracePeriod=15 Mar 19 12:19:28.941507 master-0 kubenswrapper[31830]: I0319 12:19:28.938775 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-check-endpoints" containerID="cri-o://731a0a38d103540215a217f28bc229d759cbf1a86aead714b3ade475b7eca9b1" gracePeriod=15 Mar 19 12:19:28.941507 master-0 kubenswrapper[31830]: I0319 12:19:28.938846 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://f8e645437125672bdd10ef442dc191bdbb5b1aa4fc95d9600c7547a486e706a2" gracePeriod=15 Mar 19 12:19:28.941507 master-0 kubenswrapper[31830]: I0319 12:19:28.938847 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-syncer" containerID="cri-o://854780a5612a5e5e50c1280d682c36fa7794098294a6ebaedaccd9150dae588d" gracePeriod=15 Mar 19 12:19:28.941507 master-0 kubenswrapper[31830]: I0319 12:19:28.938880 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://fdac9b39a5f31dd5b3b9878e7394b85703ec8676c900ec32f992b5c4090bcde9" gracePeriod=15 Mar 19 12:19:28.941507 master-0 kubenswrapper[31830]: I0319 12:19:28.939847 31830 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 19 12:19:28.941507 master-0 kubenswrapper[31830]: E0319 12:19:28.940144 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-syncer" Mar 19 12:19:28.941507 master-0 kubenswrapper[31830]: I0319 12:19:28.940156 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-syncer" Mar 19 12:19:28.941507 master-0 kubenswrapper[31830]: E0319 12:19:28.940183 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-regeneration-controller" Mar 19 12:19:28.941507 master-0 kubenswrapper[31830]: I0319 12:19:28.940189 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-regeneration-controller" Mar 19 12:19:28.941507 master-0 kubenswrapper[31830]: E0319 12:19:28.940199 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-insecure-readyz" Mar 19 12:19:28.941507 master-0 kubenswrapper[31830]: I0319 12:19:28.940206 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-insecure-readyz" Mar 19 12:19:28.941507 master-0 
Mar 19 12:19:28.941507 master-0 kubenswrapper[31830]: I0319 12:19:28.939847 31830 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 19 12:19:28.941507 master-0 kubenswrapper[31830]: E0319 12:19:28.940144 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-syncer"
Mar 19 12:19:28.941507 master-0 kubenswrapper[31830]: I0319 12:19:28.940156 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-syncer"
Mar 19 12:19:28.941507 master-0 kubenswrapper[31830]: E0319 12:19:28.940183 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-regeneration-controller"
Mar 19 12:19:28.941507 master-0 kubenswrapper[31830]: I0319 12:19:28.940189 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-regeneration-controller"
Mar 19 12:19:28.941507 master-0 kubenswrapper[31830]: E0319 12:19:28.940199 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-insecure-readyz"
Mar 19 12:19:28.941507 master-0 kubenswrapper[31830]: I0319 12:19:28.940206 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-insecure-readyz"
Mar 19 12:19:28.941507 master-0 kubenswrapper[31830]: E0319 12:19:28.940217 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-check-endpoints"
Mar 19 12:19:28.941507 master-0 kubenswrapper[31830]: I0319 12:19:28.940223 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-check-endpoints"
Mar 19 12:19:28.941507 master-0 kubenswrapper[31830]: E0319 12:19:28.940238 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="setup"
Mar 19 12:19:28.941507 master-0 kubenswrapper[31830]: I0319 12:19:28.940243 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="setup"
Mar 19 12:19:28.941507 master-0 kubenswrapper[31830]: E0319 12:19:28.940252 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver"
Mar 19 12:19:28.941507 master-0 kubenswrapper[31830]: I0319 12:19:28.940258 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver"
Mar 19 12:19:28.941507 master-0 kubenswrapper[31830]: I0319 12:19:28.940368 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-syncer"
Mar 19 12:19:28.941507 master-0 kubenswrapper[31830]: I0319 12:19:28.940391 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-regeneration-controller"
Mar 19 12:19:28.941507 master-0 kubenswrapper[31830]: I0319 12:19:28.940407 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver"
Mar 19 12:19:28.941507 master-0 kubenswrapper[31830]: I0319 12:19:28.940417 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-insecure-readyz"
Mar 19 12:19:28.941507 master-0 kubenswrapper[31830]: I0319 12:19:28.940436 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-check-endpoints"
Mar 19 12:19:29.018219 master-0 kubenswrapper[31830]: I0319 12:19:29.018107 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 19 12:19:29.018219 master-0 kubenswrapper[31830]: I0319 12:19:29.018221 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 19 12:19:29.018453 master-0 kubenswrapper[31830]: I0319 12:19:29.018214 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 19 12:19:29.018453 master-0 kubenswrapper[31830]: I0319 12:19:29.018248 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 19 12:19:29.018453 master-0 kubenswrapper[31830]: I0319 12:19:29.018270 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 12:19:29.018453 master-0 kubenswrapper[31830]: I0319 12:19:29.018369 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 19 12:19:29.018683 master-0 kubenswrapper[31830]: I0319 12:19:29.018508 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 19 12:19:29.018683 master-0 kubenswrapper[31830]: I0319 12:19:29.018553 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 12:19:29.018783 master-0 kubenswrapper[31830]: I0319 12:19:29.018685 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 19 12:19:29.018903 master-0 kubenswrapper[31830]: I0319 12:19:29.018873 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 19 12:19:29.018965 master-0 kubenswrapper[31830]: I0319 12:19:29.018911 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 12:19:29.019042 master-0 kubenswrapper[31830]: I0319 12:19:29.019025 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 19 12:19:29.019090 master-0 kubenswrapper[31830]: I0319 12:19:29.019052 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 19 12:19:29.119558 master-0 kubenswrapper[31830]: I0319 12:19:29.119504 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 12:19:29.119714 master-0 kubenswrapper[31830]: I0319 12:19:29.119572 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 12:19:29.119714 master-0 kubenswrapper[31830]: I0319 12:19:29.119654 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 12:19:29.119907 master-0 kubenswrapper[31830]: I0319 12:19:29.119720 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 12:19:29.119907 master-0 kubenswrapper[31830]: I0319 12:19:29.119751 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 12:19:29.119907 master-0 kubenswrapper[31830]: I0319 12:19:29.119832 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 12:19:29.228737 master-0 kubenswrapper[31830]: I0319 12:19:29.228650 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 19 12:19:29.321658 master-0 kubenswrapper[31830]: W0319 12:19:29.321583 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebbfbf2b56df0323ba118d68bfdad8b9.slice/crio-a55d05277d95288ed326a7343575d1b6216bc953579a466548865be8d235ba55 WatchSource:0}: Error finding container a55d05277d95288ed326a7343575d1b6216bc953579a466548865be8d235ba55: Status 404 returned error can't find the container with id a55d05277d95288ed326a7343575d1b6216bc953579a466548865be8d235ba55
Mar 19 12:19:29.324379 master-0 kubenswrapper[31830]: E0319 12:19:29.324228 31830 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189e3d5c2c8122e7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:ebbfbf2b56df0323ba118d68bfdad8b9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 12:19:29.323500263 +0000 UTC m=+307.872460977,LastTimestamp:2026-03-19 12:19:29.323500263 +0000 UTC m=+307.872460977,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 19 12:19:29.897501 master-0 kubenswrapper[31830]: I0319 12:19:29.897346 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_7d5ce05b3d592e63f1f92202d52b9635/kube-apiserver-cert-syncer/0.log"
Mar 19 12:19:29.898547 master-0 kubenswrapper[31830]: I0319 12:19:29.898481 31830 generic.go:334] "Generic (PLEG): container finished" podID="7d5ce05b3d592e63f1f92202d52b9635" containerID="731a0a38d103540215a217f28bc229d759cbf1a86aead714b3ade475b7eca9b1" exitCode=0
Mar 19 12:19:29.898547 master-0 kubenswrapper[31830]: I0319 12:19:29.898528 31830 generic.go:334] "Generic (PLEG): container finished" podID="7d5ce05b3d592e63f1f92202d52b9635" containerID="f8e645437125672bdd10ef442dc191bdbb5b1aa4fc95d9600c7547a486e706a2" exitCode=0
Mar 19 12:19:29.898547 master-0 kubenswrapper[31830]: I0319 12:19:29.898548 31830 generic.go:334] "Generic (PLEG): container finished" podID="7d5ce05b3d592e63f1f92202d52b9635" containerID="fdac9b39a5f31dd5b3b9878e7394b85703ec8676c900ec32f992b5c4090bcde9" exitCode=0
Mar 19 12:19:29.898838 master-0 kubenswrapper[31830]: I0319 12:19:29.898563 31830 generic.go:334] "Generic (PLEG): container finished" podID="7d5ce05b3d592e63f1f92202d52b9635" containerID="854780a5612a5e5e50c1280d682c36fa7794098294a6ebaedaccd9150dae588d" exitCode=2
Mar 19 12:19:29.901091 master-0 kubenswrapper[31830]: I0319 12:19:29.901027 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"ebbfbf2b56df0323ba118d68bfdad8b9","Type":"ContainerStarted","Data":"2e22ab1d258cd361a42b261f341cbe7d0efe71185a8f2bbd65df6e1954fbcb33"}
Mar 19 12:19:29.901091 master-0 kubenswrapper[31830]: I0319 12:19:29.901078 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"ebbfbf2b56df0323ba118d68bfdad8b9","Type":"ContainerStarted","Data":"a55d05277d95288ed326a7343575d1b6216bc953579a466548865be8d235ba55"}
Mar 19 12:19:29.903426 master-0 kubenswrapper[31830]: I0319 12:19:29.903345 31830 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:19:29.905638 master-0 kubenswrapper[31830]: I0319 12:19:29.905594 31830 generic.go:334] "Generic (PLEG): container finished" podID="5136761d-3b51-4cf2-8689-88d0bfefd0b2" containerID="b246bb9a28555bbdd0e4b4b104ba4e2ddb4462f8b7d4f97825fea6862561477d" exitCode=0
Mar 19 12:19:29.905774 master-0 kubenswrapper[31830]: I0319 12:19:29.905653 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"5136761d-3b51-4cf2-8689-88d0bfefd0b2","Type":"ContainerDied","Data":"b246bb9a28555bbdd0e4b4b104ba4e2ddb4462f8b7d4f97825fea6862561477d"}
Mar 19 12:19:29.907076 master-0 kubenswrapper[31830]: I0319 12:19:29.907008 31830 status_manager.go:851] "Failed to get status for pod" podUID="5136761d-3b51-4cf2-8689-88d0bfefd0b2" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:19:29.907859 master-0 kubenswrapper[31830]: I0319 12:19:29.907790 31830 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:19:31.031814 master-0 kubenswrapper[31830]: E0319 12:19:31.031749 31830 webhook.go:269] Failed to make webhook authorizer request: Post "https://api-int.sno.openstack.lab:6443/apis/authorization.k8s.io/v1/subjectaccessreviews": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 19 12:19:31.031814 master-0 kubenswrapper[31830]: E0319 12:19:31.031815 31830 server.go:324] "Authorization error" err="Post \"https://api-int.sno.openstack.lab:6443/apis/authorization.k8s.io/v1/subjectaccessreviews\": dial tcp 192.168.32.10:6443: connect: connection refused" user="system:serviceaccount:openshift-monitoring:prometheus-k8s" verb="get" resource="nodes" subresource="metrics"
Mar 19 12:19:31.390041 master-0 kubenswrapper[31830]: I0319 12:19:31.389983 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0"
Mar 19 12:19:31.394276 master-0 kubenswrapper[31830]: I0319 12:19:31.394093 31830 status_manager.go:851] "Failed to get status for pod" podUID="5136761d-3b51-4cf2-8689-88d0bfefd0b2" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:19:31.394757 master-0 kubenswrapper[31830]: I0319 12:19:31.394708 31830 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:19:31.562946 master-0 kubenswrapper[31830]: I0319 12:19:31.562905 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5136761d-3b51-4cf2-8689-88d0bfefd0b2-kube-api-access\") pod \"5136761d-3b51-4cf2-8689-88d0bfefd0b2\" (UID: \"5136761d-3b51-4cf2-8689-88d0bfefd0b2\") "
Mar 19 12:19:31.563059 master-0 kubenswrapper[31830]: I0319 12:19:31.563023 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5136761d-3b51-4cf2-8689-88d0bfefd0b2-kubelet-dir\") pod \"5136761d-3b51-4cf2-8689-88d0bfefd0b2\" (UID: \"5136761d-3b51-4cf2-8689-88d0bfefd0b2\") "
Mar 19 12:19:31.563210 master-0 kubenswrapper[31830]: I0319 12:19:31.563161 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5136761d-3b51-4cf2-8689-88d0bfefd0b2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5136761d-3b51-4cf2-8689-88d0bfefd0b2" (UID: "5136761d-3b51-4cf2-8689-88d0bfefd0b2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 12:19:31.563287 master-0 kubenswrapper[31830]: I0319 12:19:31.563218 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5136761d-3b51-4cf2-8689-88d0bfefd0b2-var-lock" (OuterVolumeSpecName: "var-lock") pod "5136761d-3b51-4cf2-8689-88d0bfefd0b2" (UID: "5136761d-3b51-4cf2-8689-88d0bfefd0b2"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 12:19:31.563287 master-0 kubenswrapper[31830]: I0319 12:19:31.563181 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5136761d-3b51-4cf2-8689-88d0bfefd0b2-var-lock\") pod \"5136761d-3b51-4cf2-8689-88d0bfefd0b2\" (UID: \"5136761d-3b51-4cf2-8689-88d0bfefd0b2\") "
Mar 19 12:19:31.563559 master-0 kubenswrapper[31830]: I0319 12:19:31.563515 31830 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5136761d-3b51-4cf2-8689-88d0bfefd0b2-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 19 12:19:31.563559 master-0 kubenswrapper[31830]: I0319 12:19:31.563526 31830 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5136761d-3b51-4cf2-8689-88d0bfefd0b2-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 19 12:19:31.567538 master-0 kubenswrapper[31830]: I0319 12:19:31.567485 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5136761d-3b51-4cf2-8689-88d0bfefd0b2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5136761d-3b51-4cf2-8689-88d0bfefd0b2" (UID: "5136761d-3b51-4cf2-8689-88d0bfefd0b2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 12:19:31.638930 master-0 kubenswrapper[31830]: I0319 12:19:31.638873 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_7d5ce05b3d592e63f1f92202d52b9635/kube-apiserver-cert-syncer/0.log"
Mar 19 12:19:31.639560 master-0 kubenswrapper[31830]: I0319 12:19:31.639524 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 12:19:31.640520 master-0 kubenswrapper[31830]: I0319 12:19:31.640464 31830 status_manager.go:851] "Failed to get status for pod" podUID="7d5ce05b3d592e63f1f92202d52b9635" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:19:31.640981 master-0 kubenswrapper[31830]: I0319 12:19:31.640933 31830 status_manager.go:851] "Failed to get status for pod" podUID="5136761d-3b51-4cf2-8689-88d0bfefd0b2" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:19:31.641410 master-0 kubenswrapper[31830]: I0319 12:19:31.641362 31830 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:19:31.665297 master-0 kubenswrapper[31830]: I0319 12:19:31.665265 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5136761d-3b51-4cf2-8689-88d0bfefd0b2-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 19 12:19:31.682342 master-0 kubenswrapper[31830]: I0319 12:19:31.682229 31830 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:19:31.683212 master-0 kubenswrapper[31830]: I0319 12:19:31.683149 31830 status_manager.go:851] "Failed to get status for pod" podUID="7d5ce05b3d592e63f1f92202d52b9635" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:19:31.684014 master-0 kubenswrapper[31830]: I0319 12:19:31.683959 31830 status_manager.go:851] "Failed to get status for pod" podUID="5136761d-3b51-4cf2-8689-88d0bfefd0b2" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:19:31.767147 master-0 kubenswrapper[31830]: I0319 12:19:31.766878 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir\") pod \"7d5ce05b3d592e63f1f92202d52b9635\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") "
Mar 19 12:19:31.767413 master-0 kubenswrapper[31830]: I0319 12:19:31.767395 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir\") pod \"7d5ce05b3d592e63f1f92202d52b9635\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") "
Mar 19 12:19:31.767606 master-0 kubenswrapper[31830]: I0319 12:19:31.767150 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "7d5ce05b3d592e63f1f92202d52b9635" (UID: "7d5ce05b3d592e63f1f92202d52b9635"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 12:19:31.767671 master-0 kubenswrapper[31830]: I0319 12:19:31.767442 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "7d5ce05b3d592e63f1f92202d52b9635" (UID: "7d5ce05b3d592e63f1f92202d52b9635"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 12:19:31.767671 master-0 kubenswrapper[31830]: I0319 12:19:31.767588 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir\") pod \"7d5ce05b3d592e63f1f92202d52b9635\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") "
Mar 19 12:19:31.767857 master-0 kubenswrapper[31830]: I0319 12:19:31.767839 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "7d5ce05b3d592e63f1f92202d52b9635" (UID: "7d5ce05b3d592e63f1f92202d52b9635"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 12:19:31.768473 master-0 kubenswrapper[31830]: I0319 12:19:31.768443 31830 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir\") on node \"master-0\" DevicePath \"\""
Mar 19 12:19:31.768473 master-0 kubenswrapper[31830]: I0319 12:19:31.768469 31830 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 19 12:19:31.768594 master-0 kubenswrapper[31830]: I0319 12:19:31.768485 31830 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir\") on node \"master-0\" DevicePath \"\""
Mar 19 12:19:31.922960 master-0 kubenswrapper[31830]: I0319 12:19:31.922929 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_7d5ce05b3d592e63f1f92202d52b9635/kube-apiserver-cert-syncer/0.log"
Mar 19 12:19:31.924133 master-0 kubenswrapper[31830]: I0319 12:19:31.924039 31830 generic.go:334] "Generic (PLEG): container finished" podID="7d5ce05b3d592e63f1f92202d52b9635" containerID="b2118998c1da7782e4346f69f0c3869f4e1ae1ec7cb5bc289a89a0d7d90426d9" exitCode=0
Mar 19 12:19:31.924133 master-0 kubenswrapper[31830]: I0319 12:19:31.924113 31830 scope.go:117] "RemoveContainer" containerID="731a0a38d103540215a217f28bc229d759cbf1a86aead714b3ade475b7eca9b1"
Mar 19 12:19:31.924269 master-0 kubenswrapper[31830]: I0319 12:19:31.924195 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 12:19:31.925627 master-0 kubenswrapper[31830]: I0319 12:19:31.925403 31830 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:19:31.925627 master-0 kubenswrapper[31830]: I0319 12:19:31.925593 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"5136761d-3b51-4cf2-8689-88d0bfefd0b2","Type":"ContainerDied","Data":"288fb28088d5954eb572e32989b87625c4cd224e8a20ff7b0b888d5972e06bc8"}
Mar 19 12:19:31.925627 master-0 kubenswrapper[31830]: I0319 12:19:31.925618 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="288fb28088d5954eb572e32989b87625c4cd224e8a20ff7b0b888d5972e06bc8"
Mar 19 12:19:31.925755 master-0 kubenswrapper[31830]: I0319 12:19:31.925670 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0"
Mar 19 12:19:31.926772 master-0 kubenswrapper[31830]: I0319 12:19:31.926734 31830 status_manager.go:851] "Failed to get status for pod" podUID="7d5ce05b3d592e63f1f92202d52b9635" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:19:31.927379 master-0 kubenswrapper[31830]: I0319 12:19:31.927348 31830 status_manager.go:851] "Failed to get status for pod" podUID="5136761d-3b51-4cf2-8689-88d0bfefd0b2" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:19:31.930258 master-0 kubenswrapper[31830]: I0319 12:19:31.930218 31830 status_manager.go:851] "Failed to get status for pod" podUID="7d5ce05b3d592e63f1f92202d52b9635" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:19:31.930701 master-0 kubenswrapper[31830]: I0319 12:19:31.930667 31830 status_manager.go:851] "Failed to get status for pod" podUID="5136761d-3b51-4cf2-8689-88d0bfefd0b2" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:19:31.934539 master-0 kubenswrapper[31830]: I0319 12:19:31.934495 31830 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:19:31.940953 master-0 kubenswrapper[31830]: I0319 12:19:31.940931 31830 scope.go:117] "RemoveContainer" containerID="f8e645437125672bdd10ef442dc191bdbb5b1aa4fc95d9600c7547a486e706a2"
Mar 19 12:19:31.950637 master-0 kubenswrapper[31830]: I0319 12:19:31.950576 31830 status_manager.go:851] "Failed to get status for pod" podUID="7d5ce05b3d592e63f1f92202d52b9635" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:19:31.951372 master-0 kubenswrapper[31830]: I0319 12:19:31.951313 31830 status_manager.go:851] "Failed to get status for pod" podUID="5136761d-3b51-4cf2-8689-88d0bfefd0b2" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:19:31.951929 master-0 kubenswrapper[31830]: I0319 12:19:31.951883 31830 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:19:31.959186 master-0 kubenswrapper[31830]: I0319 12:19:31.959160 31830 scope.go:117] "RemoveContainer" containerID="fdac9b39a5f31dd5b3b9878e7394b85703ec8676c900ec32f992b5c4090bcde9"
Mar 19 12:19:31.973535 master-0 kubenswrapper[31830]: I0319 12:19:31.973516 31830 scope.go:117] "RemoveContainer" containerID="854780a5612a5e5e50c1280d682c36fa7794098294a6ebaedaccd9150dae588d"
Mar 19 12:19:31.986533 master-0 kubenswrapper[31830]: I0319 12:19:31.986493 31830 scope.go:117] "RemoveContainer" containerID="b2118998c1da7782e4346f69f0c3869f4e1ae1ec7cb5bc289a89a0d7d90426d9"
Mar 19 12:19:32.002618 master-0 kubenswrapper[31830]: I0319 12:19:32.002594 31830 scope.go:117] "RemoveContainer" containerID="718557be6f14b90fa10074703607c983ea0551f0bf0d37a2dbc9687c71e63b51"
Mar 19 12:19:32.016276 master-0 kubenswrapper[31830]: I0319 12:19:32.016255 31830 scope.go:117] "RemoveContainer" containerID="731a0a38d103540215a217f28bc229d759cbf1a86aead714b3ade475b7eca9b1"
Mar 19 12:19:32.016786 master-0 kubenswrapper[31830]: E0319 12:19:32.016744 31830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"731a0a38d103540215a217f28bc229d759cbf1a86aead714b3ade475b7eca9b1\": container with ID starting with 731a0a38d103540215a217f28bc229d759cbf1a86aead714b3ade475b7eca9b1 not found: ID does not exist" containerID="731a0a38d103540215a217f28bc229d759cbf1a86aead714b3ade475b7eca9b1"
Mar 19 12:19:32.016866 master-0 kubenswrapper[31830]: I0319 12:19:32.016835 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"731a0a38d103540215a217f28bc229d759cbf1a86aead714b3ade475b7eca9b1"} err="failed to get container status \"731a0a38d103540215a217f28bc229d759cbf1a86aead714b3ade475b7eca9b1\": rpc error: code = NotFound desc = could not find container \"731a0a38d103540215a217f28bc229d759cbf1a86aead714b3ade475b7eca9b1\": container with ID starting with 731a0a38d103540215a217f28bc229d759cbf1a86aead714b3ade475b7eca9b1 not found: ID does not exist"
Mar 19 12:19:32.016906 master-0 kubenswrapper[31830]: I0319 12:19:32.016870 31830 scope.go:117] "RemoveContainer" containerID="f8e645437125672bdd10ef442dc191bdbb5b1aa4fc95d9600c7547a486e706a2"
Mar 19 12:19:32.017267 master-0 kubenswrapper[31830]: E0319 12:19:32.017184 31830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8e645437125672bdd10ef442dc191bdbb5b1aa4fc95d9600c7547a486e706a2\": container with ID starting with f8e645437125672bdd10ef442dc191bdbb5b1aa4fc95d9600c7547a486e706a2 not found: ID does not exist" containerID="f8e645437125672bdd10ef442dc191bdbb5b1aa4fc95d9600c7547a486e706a2"
Mar 19 12:19:32.017321 master-0 kubenswrapper[31830]: I0319 12:19:32.017245 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8e645437125672bdd10ef442dc191bdbb5b1aa4fc95d9600c7547a486e706a2"} err="failed to get container status \"f8e645437125672bdd10ef442dc191bdbb5b1aa4fc95d9600c7547a486e706a2\": rpc error: code = NotFound desc = could not find container \"f8e645437125672bdd10ef442dc191bdbb5b1aa4fc95d9600c7547a486e706a2\": container with ID starting with f8e645437125672bdd10ef442dc191bdbb5b1aa4fc95d9600c7547a486e706a2 not found: ID does not exist"
Mar 19 12:19:32.017321 master-0 kubenswrapper[31830]: I0319 12:19:32.017295 31830 scope.go:117] "RemoveContainer" containerID="fdac9b39a5f31dd5b3b9878e7394b85703ec8676c900ec32f992b5c4090bcde9"
Mar 19 12:19:32.017699 master-0 kubenswrapper[31830]: E0319 12:19:32.017681 31830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdac9b39a5f31dd5b3b9878e7394b85703ec8676c900ec32f992b5c4090bcde9\": container with ID starting with fdac9b39a5f31dd5b3b9878e7394b85703ec8676c900ec32f992b5c4090bcde9 not found: ID does not exist" containerID="fdac9b39a5f31dd5b3b9878e7394b85703ec8676c900ec32f992b5c4090bcde9"
Mar 19 12:19:32.017816 master-0 kubenswrapper[31830]: I0319 12:19:32.017772 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdac9b39a5f31dd5b3b9878e7394b85703ec8676c900ec32f992b5c4090bcde9"} err="failed to get container status \"fdac9b39a5f31dd5b3b9878e7394b85703ec8676c900ec32f992b5c4090bcde9\": rpc error: code = NotFound desc = could not find container \"fdac9b39a5f31dd5b3b9878e7394b85703ec8676c900ec32f992b5c4090bcde9\": container with ID starting with fdac9b39a5f31dd5b3b9878e7394b85703ec8676c900ec32f992b5c4090bcde9 not found: ID does not exist"
Mar 19 12:19:32.017938 master-0 kubenswrapper[31830]: I0319 12:19:32.017918 31830 scope.go:117] "RemoveContainer" containerID="854780a5612a5e5e50c1280d682c36fa7794098294a6ebaedaccd9150dae588d"
Mar 19 12:19:32.018318 master-0 kubenswrapper[31830]: E0319 12:19:32.018292 31830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"854780a5612a5e5e50c1280d682c36fa7794098294a6ebaedaccd9150dae588d\": container with ID starting with 854780a5612a5e5e50c1280d682c36fa7794098294a6ebaedaccd9150dae588d not found: ID does not exist" containerID="854780a5612a5e5e50c1280d682c36fa7794098294a6ebaedaccd9150dae588d"
Mar 19 12:19:32.018380 master-0 kubenswrapper[31830]: I0319 12:19:32.018323 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"854780a5612a5e5e50c1280d682c36fa7794098294a6ebaedaccd9150dae588d"} err="failed to get container status \"854780a5612a5e5e50c1280d682c36fa7794098294a6ebaedaccd9150dae588d\": rpc error: code = NotFound desc = could not find container \"854780a5612a5e5e50c1280d682c36fa7794098294a6ebaedaccd9150dae588d\": container with ID starting with 854780a5612a5e5e50c1280d682c36fa7794098294a6ebaedaccd9150dae588d not found: ID does not exist"
Mar 19 12:19:32.018380 master-0 kubenswrapper[31830]: I0319 12:19:32.018344 31830 scope.go:117] "RemoveContainer" containerID="b2118998c1da7782e4346f69f0c3869f4e1ae1ec7cb5bc289a89a0d7d90426d9"
Mar 19 12:19:32.018783 master-0 kubenswrapper[31830]: E0319 12:19:32.018736 31830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2118998c1da7782e4346f69f0c3869f4e1ae1ec7cb5bc289a89a0d7d90426d9\": container with ID starting with b2118998c1da7782e4346f69f0c3869f4e1ae1ec7cb5bc289a89a0d7d90426d9 not found: ID does not exist" containerID="b2118998c1da7782e4346f69f0c3869f4e1ae1ec7cb5bc289a89a0d7d90426d9"
Mar 19 12:19:32.018925 master-0 kubenswrapper[31830]: I0319 12:19:32.018874 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2118998c1da7782e4346f69f0c3869f4e1ae1ec7cb5bc289a89a0d7d90426d9"} err="failed to get container status \"b2118998c1da7782e4346f69f0c3869f4e1ae1ec7cb5bc289a89a0d7d90426d9\": rpc error: code = NotFound desc = could not find container \"b2118998c1da7782e4346f69f0c3869f4e1ae1ec7cb5bc289a89a0d7d90426d9\": container with ID starting with b2118998c1da7782e4346f69f0c3869f4e1ae1ec7cb5bc289a89a0d7d90426d9 not found: ID does not exist"
Mar 19 12:19:32.018974 master-0 kubenswrapper[31830]: I0319 12:19:32.018922 31830 scope.go:117] "RemoveContainer" containerID="718557be6f14b90fa10074703607c983ea0551f0bf0d37a2dbc9687c71e63b51"
Mar 19 12:19:32.019218 master-0 kubenswrapper[31830]: E0319 12:19:32.019194 31830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"718557be6f14b90fa10074703607c983ea0551f0bf0d37a2dbc9687c71e63b51\": container with ID starting with 718557be6f14b90fa10074703607c983ea0551f0bf0d37a2dbc9687c71e63b51 not found: ID does not exist" containerID="718557be6f14b90fa10074703607c983ea0551f0bf0d37a2dbc9687c71e63b51"
Mar 19 12:19:32.019302 master-0 kubenswrapper[31830]: I0319 12:19:32.019246 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"718557be6f14b90fa10074703607c983ea0551f0bf0d37a2dbc9687c71e63b51"} err="failed to get container status \"718557be6f14b90fa10074703607c983ea0551f0bf0d37a2dbc9687c71e63b51\": rpc error: code = NotFound desc = could not find container \"718557be6f14b90fa10074703607c983ea0551f0bf0d37a2dbc9687c71e63b51\": container with ID starting with 718557be6f14b90fa10074703607c983ea0551f0bf0d37a2dbc9687c71e63b51 not found: ID does not exist"
Mar 19 12:19:32.856420 master-0 kubenswrapper[31830]: E0319 12:19:32.856289 31830 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189e3d5c2c8122e7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:ebbfbf2b56df0323ba118d68bfdad8b9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 12:19:29.323500263 +0000 UTC m=+307.872460977,LastTimestamp:2026-03-19 12:19:29.323500263 +0000 UTC m=+307.872460977,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 19 12:19:33.687294 master-0 kubenswrapper[31830]: I0319 12:19:33.687238 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d5ce05b3d592e63f1f92202d52b9635" path="/var/lib/kubelet/pods/7d5ce05b3d592e63f1f92202d52b9635/volumes"
Mar 19 12:19:34.855765 master-0 kubenswrapper[31830]: I0319 12:19:34.855664 31830 scope.go:117] "RemoveContainer" containerID="4dc6cd1098d9b181306d55e6f29d0f09a98838187ca958b399501163372876ca"
Mar 19 12:19:35.975447 master-0 kubenswrapper[31830]: E0319 12:19:35.975376 31830 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:19:35.976578 master-0 kubenswrapper[31830]: E0319 12:19:35.976487 31830 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:19:35.977669 master-0 kubenswrapper[31830]: E0319 12:19:35.977584 31830 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:19:35.978526 master-0 kubenswrapper[31830]: E0319 12:19:35.978455 31830 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:19:35.979755 master-0 kubenswrapper[31830]: E0319 12:19:35.979522 31830 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 19 12:19:35.979755 master-0 kubenswrapper[31830]: I0319 12:19:35.979583 31830 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Mar 19 12:19:35.980561 master-0 kubenswrapper[31830]: E0319 12:19:35.980485 31830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms"
Mar 19 12:19:36.182899 master-0 kubenswrapper[31830]: E0319 12:19:36.182757 31830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms"
Mar 19 12:19:36.584356 master-0 kubenswrapper[31830]: E0319 12:19:36.584245 31830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms"
Mar 19 12:19:37.254785 master-0 kubenswrapper[31830]: I0319 12:19:37.254661 31830 patch_prober.go:28] interesting pod/console-b5f5fdd67-r4lxc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" start-of-body=
Mar 19 12:19:37.255742 master-0 kubenswrapper[31830]: I0319 12:19:37.254771 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b5f5fdd67-r4lxc" podUID="8f224cab-e321-4e24-83bc-f99242f971b0" containerName="console" probeResult="failure" output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused"
Mar 19 12:19:37.385953 master-0 kubenswrapper[31830]: E0319 12:19:37.385877 31830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s"
Mar 19 12:19:37.580507 master-0 kubenswrapper[31830]: E0319 12:19:37.580362 31830 webhook.go:269] Failed to make webhook authorizer request: Post "https://api-int.sno.openstack.lab:6443/apis/authorization.k8s.io/v1/subjectaccessreviews": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 19 12:19:37.580507 master-0 kubenswrapper[31830]: E0319 12:19:37.580428 31830 server.go:324] "Authorization error" err="Post \"https://api-int.sno.openstack.lab:6443/apis/authorization.k8s.io/v1/subjectaccessreviews\": dial tcp 192.168.32.10:6443: connect: connection refused" user="system:serviceaccount:openshift-monitoring:prometheus-k8s" verb="get" resource="nodes" subresource="metrics"
Mar 19 12:19:38.598843 master-0 kubenswrapper[31830]: I0319 12:19:38.598698 31830 patch_prober.go:28] interesting pod/console-575589487f-9nhq4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body=
Mar 19 12:19:38.598843 master-0 kubenswrapper[31830]: I0319 12:19:38.598789 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-575589487f-9nhq4" podUID="0a1dfc0b-250d-465f-a075-f088f5725873" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused"
Mar 19 12:19:38.987980 master-0 kubenswrapper[31830]: E0319 12:19:38.987890 31830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s"
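The "Failed to update lease" burst, the fallback after "failed 5 attempts to update lease", and the retry intervals above (200ms, 400ms, 800ms, 1.6s, 3.2s) show the node-lease controller doubling its retry interval while the API server refuses connections. A small self-contained Go illustration of that schedule; ensureLease is a stand-in, not the kubelet's actual controller.go code:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // ensureLease stands in for the kubelet's lease sync; here it always
    // fails, as it does above while the API server is unreachable.
    func ensureLease() error {
        return errors.New("dial tcp 192.168.32.10:6443: connect: connection refused")
    }

    func main() {
        interval := 200 * time.Millisecond // first logged retry interval
        for attempt := 1; attempt <= 5; attempt++ {
            if err := ensureLease(); err != nil {
                fmt.Printf("attempt %d failed, will retry in %v (%v)\n", attempt, interval, err)
                interval *= 2 // 200ms -> 400ms -> 800ms -> 1.6s -> 3.2s, as logged
            }
        }
    }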
\"https://api-int.sno.openstack.lab:6443/apis/authorization.k8s.io/v1/subjectaccessreviews\": dial tcp 192.168.32.10:6443: connect: connection refused" user="system:serviceaccount:openshift-monitoring:prometheus-k8s" verb="get" resource="nodes" subresource="metrics" Mar 19 12:19:39.677865 master-0 kubenswrapper[31830]: I0319 12:19:39.677704 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:19:39.680066 master-0 kubenswrapper[31830]: I0319 12:19:39.679982 31830 status_manager.go:851] "Failed to get status for pod" podUID="5136761d-3b51-4cf2-8689-88d0bfefd0b2" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 12:19:39.685860 master-0 kubenswrapper[31830]: I0319 12:19:39.685763 31830 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 12:19:39.705173 master-0 kubenswrapper[31830]: I0319 12:19:39.705073 31830 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="d736f0ce-242a-43e5-aabf-b298e1959069" Mar 19 12:19:39.705173 master-0 kubenswrapper[31830]: I0319 12:19:39.705162 31830 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="d736f0ce-242a-43e5-aabf-b298e1959069" Mar 19 12:19:39.706347 master-0 kubenswrapper[31830]: E0319 12:19:39.706249 31830 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:19:39.707361 master-0 kubenswrapper[31830]: I0319 12:19:39.707325 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:19:39.742882 master-0 kubenswrapper[31830]: W0319 12:19:39.742773 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod274c4bebf95a655851b2cf276fe43ef7.slice/crio-a8b04d6862ddd3c64ff6d91f629caac91a196af40f0756b2a1e078f9d45dcf30 WatchSource:0}: Error finding container a8b04d6862ddd3c64ff6d91f629caac91a196af40f0756b2a1e078f9d45dcf30: Status 404 returned error can't find the container with id a8b04d6862ddd3c64ff6d91f629caac91a196af40f0756b2a1e078f9d45dcf30 Mar 19 12:19:39.998784 master-0 kubenswrapper[31830]: I0319 12:19:39.998734 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"a8b04d6862ddd3c64ff6d91f629caac91a196af40f0756b2a1e078f9d45dcf30"} Mar 19 12:19:41.008241 master-0 kubenswrapper[31830]: I0319 12:19:41.008179 31830 generic.go:334] "Generic (PLEG): container finished" podID="274c4bebf95a655851b2cf276fe43ef7" containerID="9f29aa743f5b6b0457fe5c6e6d130cda84014b646b258f9bdefe99b9275a7e13" exitCode=0 Mar 19 12:19:41.008776 master-0 kubenswrapper[31830]: I0319 12:19:41.008240 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerDied","Data":"9f29aa743f5b6b0457fe5c6e6d130cda84014b646b258f9bdefe99b9275a7e13"} Mar 19 12:19:41.008776 master-0 kubenswrapper[31830]: I0319 12:19:41.008543 31830 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="d736f0ce-242a-43e5-aabf-b298e1959069" Mar 19 12:19:41.008776 master-0 kubenswrapper[31830]: I0319 12:19:41.008562 31830 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="d736f0ce-242a-43e5-aabf-b298e1959069" Mar 19 12:19:41.009632 master-0 kubenswrapper[31830]: I0319 12:19:41.009537 31830 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 12:19:41.009632 master-0 kubenswrapper[31830]: E0319 12:19:41.009596 31830 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:19:41.010728 master-0 kubenswrapper[31830]: I0319 12:19:41.010676 31830 status_manager.go:851] "Failed to get status for pod" podUID="5136761d-3b51-4cf2-8689-88d0bfefd0b2" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 12:19:42.034373 master-0 kubenswrapper[31830]: I0319 12:19:42.034306 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"c6560474b103b5a22e014eeae8526d2b5dcc3105a4ef1b6cabc3403a87bca7f9"} Mar 19 12:19:42.034373 master-0 kubenswrapper[31830]: I0319 12:19:42.034373 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"68a8cc86b1c99afdcb68a0d000e9ad2170fa3a626b5b96fcca93836a93d2279c"} Mar 19 12:19:42.034953 master-0 kubenswrapper[31830]: I0319 12:19:42.034388 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"775afc55c1e4e4928e236eae642f35260b48198ec1c6a1e42912871631d63c97"} Mar 19 12:19:42.034953 master-0 kubenswrapper[31830]: I0319 12:19:42.034399 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"bd343aa7385d76e877e5c740bd23136a5be3fbe0345e6280b35b4da62edff53b"} Mar 19 12:19:43.047636 master-0 kubenswrapper[31830]: I0319 12:19:43.047579 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"d66b0db1824e54709855a700bba3ba037abc5630894273dc566335fb6eab1b11"} Mar 19 12:19:43.048725 master-0 kubenswrapper[31830]: I0319 12:19:43.048690 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:19:43.049088 master-0 kubenswrapper[31830]: I0319 12:19:43.049057 31830 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="d736f0ce-242a-43e5-aabf-b298e1959069" Mar 19 12:19:43.049252 master-0 kubenswrapper[31830]: I0319 12:19:43.049229 31830 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="d736f0ce-242a-43e5-aabf-b298e1959069" Mar 19 12:19:44.056162 master-0 kubenswrapper[31830]: I0319 12:19:44.056092 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_09672015532ae9d1d74ae4d426cd904b/kube-controller-manager/2.log" Mar 19 12:19:44.056949 master-0 kubenswrapper[31830]: I0319 12:19:44.056887 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_09672015532ae9d1d74ae4d426cd904b/kube-controller-manager/1.log" Mar 19 12:19:44.058020 master-0 kubenswrapper[31830]: I0319 12:19:44.057950 31830 generic.go:334] "Generic (PLEG): container finished" podID="09672015532ae9d1d74ae4d426cd904b" containerID="a2f2d3c455898f0dff08ce78d00fccc2ef15d161401b675e3b61d3fc312756c6" exitCode=1 Mar 19 12:19:44.058020 master-0 kubenswrapper[31830]: I0319 12:19:44.057995 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"09672015532ae9d1d74ae4d426cd904b","Type":"ContainerDied","Data":"a2f2d3c455898f0dff08ce78d00fccc2ef15d161401b675e3b61d3fc312756c6"} Mar 19 12:19:44.058149 master-0 kubenswrapper[31830]: I0319 12:19:44.058040 31830 scope.go:117] "RemoveContainer" containerID="e5fbf9965772e33dc6dad1627c0ebaa9bcbb080610a9ab8137ea4a6a55a96ec1" Mar 19 12:19:44.058815 master-0 kubenswrapper[31830]: I0319 
12:19:44.058754 31830 scope.go:117] "RemoveContainer" containerID="a2f2d3c455898f0dff08ce78d00fccc2ef15d161401b675e3b61d3fc312756c6" Mar 19 12:19:44.059618 master-0 kubenswrapper[31830]: E0319 12:19:44.059218 31830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(09672015532ae9d1d74ae4d426cd904b)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="09672015532ae9d1d74ae4d426cd904b" Mar 19 12:19:44.707733 master-0 kubenswrapper[31830]: I0319 12:19:44.707562 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:19:44.707733 master-0 kubenswrapper[31830]: I0319 12:19:44.707705 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:19:44.717106 master-0 kubenswrapper[31830]: I0319 12:19:44.717034 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:19:45.070855 master-0 kubenswrapper[31830]: I0319 12:19:45.070761 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_09672015532ae9d1d74ae4d426cd904b/kube-controller-manager/2.log" Mar 19 12:19:47.253459 master-0 kubenswrapper[31830]: I0319 12:19:47.253386 31830 patch_prober.go:28] interesting pod/console-b5f5fdd67-r4lxc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" start-of-body= Mar 19 12:19:47.253459 master-0 kubenswrapper[31830]: I0319 12:19:47.253451 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b5f5fdd67-r4lxc" podUID="8f224cab-e321-4e24-83bc-f99242f971b0" containerName="console" probeResult="failure" output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" Mar 19 12:19:47.722926 master-0 kubenswrapper[31830]: I0319 12:19:47.718295 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:19:47.722926 master-0 kubenswrapper[31830]: I0319 12:19:47.718936 31830 scope.go:117] "RemoveContainer" containerID="a2f2d3c455898f0dff08ce78d00fccc2ef15d161401b675e3b61d3fc312756c6" Mar 19 12:19:47.722926 master-0 kubenswrapper[31830]: E0319 12:19:47.719249 31830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(09672015532ae9d1d74ae4d426cd904b)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="09672015532ae9d1d74ae4d426cd904b" Mar 19 12:19:48.065860 master-0 kubenswrapper[31830]: I0319 12:19:48.065751 31830 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:19:48.091135 master-0 kubenswrapper[31830]: I0319 12:19:48.091028 31830 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
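The "back-off 20s restarting failed container" error above is the kubelet's CrashLoopBackOff: each crash of kube-controller-manager doubles the restart delay up to a cap. An illustrative Go sketch follows; the 10s base and 5m cap are the long-standing kubelet defaults and are assumptions here, not values read from this build:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Doubling restart delay with a cap. The second restart lands on
        // the logged "back-off 20s"; later entries in this log repeat the
        // same 20s back-off while the container keeps failing its probes.
        delay, maxDelay := 10*time.Second, 5*time.Minute
        for restart := 1; restart <= 6; restart++ {
            fmt.Printf("restart %d: back-off %v\n", restart, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }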
podUID="d736f0ce-242a-43e5-aabf-b298e1959069" Mar 19 12:19:48.091135 master-0 kubenswrapper[31830]: I0319 12:19:48.091066 31830 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="d736f0ce-242a-43e5-aabf-b298e1959069" Mar 19 12:19:48.095883 master-0 kubenswrapper[31830]: I0319 12:19:48.095853 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:19:48.100324 master-0 kubenswrapper[31830]: I0319 12:19:48.100284 31830 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="274c4bebf95a655851b2cf276fe43ef7" podUID="bdb90b96-3e3e-4206-bd9e-c2755cda8ca4" Mar 19 12:19:48.598189 master-0 kubenswrapper[31830]: I0319 12:19:48.598031 31830 patch_prober.go:28] interesting pod/console-575589487f-9nhq4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Mar 19 12:19:48.598189 master-0 kubenswrapper[31830]: I0319 12:19:48.598116 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-575589487f-9nhq4" podUID="0a1dfc0b-250d-465f-a075-f088f5725873" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Mar 19 12:19:48.780911 master-0 kubenswrapper[31830]: I0319 12:19:48.780860 31830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:19:48.781560 master-0 kubenswrapper[31830]: I0319 12:19:48.781534 31830 scope.go:117] "RemoveContainer" containerID="a2f2d3c455898f0dff08ce78d00fccc2ef15d161401b675e3b61d3fc312756c6" Mar 19 12:19:48.781890 master-0 kubenswrapper[31830]: E0319 12:19:48.781858 31830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(09672015532ae9d1d74ae4d426cd904b)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="09672015532ae9d1d74ae4d426cd904b" Mar 19 12:19:49.099984 master-0 kubenswrapper[31830]: I0319 12:19:49.099932 31830 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="d736f0ce-242a-43e5-aabf-b298e1959069" Mar 19 12:19:49.099984 master-0 kubenswrapper[31830]: I0319 12:19:49.099965 31830 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="d736f0ce-242a-43e5-aabf-b298e1959069" Mar 19 12:19:51.702462 master-0 kubenswrapper[31830]: I0319 12:19:51.702377 31830 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="274c4bebf95a655851b2cf276fe43ef7" podUID="bdb90b96-3e3e-4206-bd9e-c2755cda8ca4" Mar 19 12:19:52.381985 master-0 kubenswrapper[31830]: I0319 12:19:52.381915 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:19:52.382702 master-0 kubenswrapper[31830]: I0319 12:19:52.382670 31830 scope.go:117] "RemoveContainer" 
containerID="a2f2d3c455898f0dff08ce78d00fccc2ef15d161401b675e3b61d3fc312756c6" Mar 19 12:19:52.383127 master-0 kubenswrapper[31830]: E0319 12:19:52.383088 31830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(09672015532ae9d1d74ae4d426cd904b)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="09672015532ae9d1d74ae4d426cd904b" Mar 19 12:19:54.358516 master-0 kubenswrapper[31830]: I0319 12:19:54.358456 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 19 12:19:54.442823 master-0 kubenswrapper[31830]: I0319 12:19:54.442746 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 19 12:19:54.779333 master-0 kubenswrapper[31830]: I0319 12:19:54.779281 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 19 12:19:54.982901 master-0 kubenswrapper[31830]: I0319 12:19:54.982831 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-gfs2v" Mar 19 12:19:55.312242 master-0 kubenswrapper[31830]: I0319 12:19:55.312170 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-r8qg7" Mar 19 12:19:56.568690 master-0 kubenswrapper[31830]: I0319 12:19:56.568605 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 19 12:19:56.736211 master-0 kubenswrapper[31830]: I0319 12:19:56.736006 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 19 12:19:56.869347 master-0 kubenswrapper[31830]: I0319 12:19:56.869190 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 19 12:19:57.113312 master-0 kubenswrapper[31830]: I0319 12:19:57.113237 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 19 12:19:57.254224 master-0 kubenswrapper[31830]: I0319 12:19:57.254153 31830 patch_prober.go:28] interesting pod/console-b5f5fdd67-r4lxc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" start-of-body= Mar 19 12:19:57.254497 master-0 kubenswrapper[31830]: I0319 12:19:57.254236 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b5f5fdd67-r4lxc" podUID="8f224cab-e321-4e24-83bc-f99242f971b0" containerName="console" probeResult="failure" output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" Mar 19 12:19:57.641471 master-0 kubenswrapper[31830]: I0319 12:19:57.641267 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 19 12:19:57.886855 master-0 kubenswrapper[31830]: I0319 12:19:57.886788 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 19 12:19:57.927355 master-0 kubenswrapper[31830]: I0319 
12:19:57.927239 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 19 12:19:58.317106 master-0 kubenswrapper[31830]: I0319 12:19:58.317033 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 19 12:19:58.598923 master-0 kubenswrapper[31830]: I0319 12:19:58.598757 31830 patch_prober.go:28] interesting pod/console-575589487f-9nhq4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Mar 19 12:19:58.598923 master-0 kubenswrapper[31830]: I0319 12:19:58.598866 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-575589487f-9nhq4" podUID="0a1dfc0b-250d-465f-a075-f088f5725873" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Mar 19 12:19:58.708939 master-0 kubenswrapper[31830]: I0319 12:19:58.708880 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-sklzz" Mar 19 12:19:59.115850 master-0 kubenswrapper[31830]: I0319 12:19:59.115752 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-dr8qt" Mar 19 12:19:59.335886 master-0 kubenswrapper[31830]: I0319 12:19:59.335784 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 19 12:19:59.358930 master-0 kubenswrapper[31830]: I0319 12:19:59.358876 31830 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 19 12:19:59.513912 master-0 kubenswrapper[31830]: I0319 12:19:59.513855 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-bfonl27j4vul7" Mar 19 12:19:59.577210 master-0 kubenswrapper[31830]: I0319 12:19:59.577109 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 19 12:19:59.585878 master-0 kubenswrapper[31830]: I0319 12:19:59.585782 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 19 12:19:59.593875 master-0 kubenswrapper[31830]: I0319 12:19:59.593777 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 19 12:19:59.618365 master-0 kubenswrapper[31830]: I0319 12:19:59.618270 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 19 12:19:59.880295 master-0 kubenswrapper[31830]: I0319 12:19:59.880157 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Mar 19 12:19:59.963606 master-0 kubenswrapper[31830]: I0319 12:19:59.963555 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 19 12:20:00.041948 master-0 kubenswrapper[31830]: I0319 12:20:00.041894 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 19 12:20:00.042196 master-0 
kubenswrapper[31830]: I0319 12:20:00.042036 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 19 12:20:00.078103 master-0 kubenswrapper[31830]: I0319 12:20:00.078022 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 19 12:20:00.129211 master-0 kubenswrapper[31830]: I0319 12:20:00.129147 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-xfzn8" Mar 19 12:20:00.201151 master-0 kubenswrapper[31830]: I0319 12:20:00.201012 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-gz8pl" Mar 19 12:20:00.219510 master-0 kubenswrapper[31830]: I0319 12:20:00.219480 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 19 12:20:00.292371 master-0 kubenswrapper[31830]: I0319 12:20:00.292304 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Mar 19 12:20:00.340854 master-0 kubenswrapper[31830]: I0319 12:20:00.340763 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 19 12:20:00.542603 master-0 kubenswrapper[31830]: I0319 12:20:00.542545 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 19 12:20:00.615389 master-0 kubenswrapper[31830]: I0319 12:20:00.615354 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 19 12:20:00.689014 master-0 kubenswrapper[31830]: I0319 12:20:00.688934 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 19 12:20:00.731648 master-0 kubenswrapper[31830]: I0319 12:20:00.731582 31830 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 19 12:20:00.892487 master-0 kubenswrapper[31830]: I0319 12:20:00.892360 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 19 12:20:01.030041 master-0 kubenswrapper[31830]: I0319 12:20:01.029989 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-hw4t4" Mar 19 12:20:01.435893 master-0 kubenswrapper[31830]: I0319 12:20:01.435724 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 19 12:20:01.445808 master-0 kubenswrapper[31830]: I0319 12:20:01.445732 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 19 12:20:01.583667 master-0 kubenswrapper[31830]: I0319 12:20:01.583591 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 19 12:20:01.855474 master-0 kubenswrapper[31830]: I0319 12:20:01.855429 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-48w96" Mar 19 12:20:02.056449 master-0 kubenswrapper[31830]: I0319 12:20:02.056381 31830 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 19 12:20:02.100036 master-0 kubenswrapper[31830]: I0319 12:20:02.099982 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 19 12:20:02.224628 master-0 kubenswrapper[31830]: I0319 12:20:02.224542 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-svqv2" Mar 19 12:20:02.238010 master-0 kubenswrapper[31830]: I0319 12:20:02.237951 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 19 12:20:02.364195 master-0 kubenswrapper[31830]: I0319 12:20:02.364136 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 19 12:20:02.382876 master-0 kubenswrapper[31830]: I0319 12:20:02.382791 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 19 12:20:02.429863 master-0 kubenswrapper[31830]: I0319 12:20:02.429787 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 19 12:20:02.494331 master-0 kubenswrapper[31830]: I0319 12:20:02.494205 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 19 12:20:02.519919 master-0 kubenswrapper[31830]: I0319 12:20:02.519877 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-79l7s" Mar 19 12:20:02.685893 master-0 kubenswrapper[31830]: I0319 12:20:02.685855 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 19 12:20:02.722973 master-0 kubenswrapper[31830]: I0319 12:20:02.722939 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 19 12:20:02.766217 master-0 kubenswrapper[31830]: I0319 12:20:02.765872 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Mar 19 12:20:02.768624 master-0 kubenswrapper[31830]: I0319 12:20:02.768570 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 19 12:20:02.959495 master-0 kubenswrapper[31830]: I0319 12:20:02.959432 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Mar 19 12:20:02.962854 master-0 kubenswrapper[31830]: I0319 12:20:02.962825 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 19 12:20:03.033832 master-0 kubenswrapper[31830]: I0319 12:20:03.033694 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 19 12:20:03.128877 master-0 kubenswrapper[31830]: I0319 12:20:03.128767 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 19 12:20:03.189663 master-0 kubenswrapper[31830]: I0319 12:20:03.189604 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 19 12:20:03.300033 
master-0 kubenswrapper[31830]: I0319 12:20:03.299910 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-v8nqn" Mar 19 12:20:03.368402 master-0 kubenswrapper[31830]: I0319 12:20:03.367469 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 19 12:20:03.409683 master-0 kubenswrapper[31830]: I0319 12:20:03.409610 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 19 12:20:03.533792 master-0 kubenswrapper[31830]: I0319 12:20:03.532321 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 19 12:20:03.560544 master-0 kubenswrapper[31830]: I0319 12:20:03.560374 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-67sx5" Mar 19 12:20:03.584479 master-0 kubenswrapper[31830]: I0319 12:20:03.584409 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Mar 19 12:20:03.588835 master-0 kubenswrapper[31830]: I0319 12:20:03.588790 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 19 12:20:03.673183 master-0 kubenswrapper[31830]: I0319 12:20:03.673097 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-hjms6" Mar 19 12:20:03.677941 master-0 kubenswrapper[31830]: I0319 12:20:03.677876 31830 scope.go:117] "RemoveContainer" containerID="a2f2d3c455898f0dff08ce78d00fccc2ef15d161401b675e3b61d3fc312756c6" Mar 19 12:20:03.707880 master-0 kubenswrapper[31830]: I0319 12:20:03.707798 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 19 12:20:03.715755 master-0 kubenswrapper[31830]: I0319 12:20:03.715723 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-37tn0b2qg70ml" Mar 19 12:20:03.848916 master-0 kubenswrapper[31830]: I0319 12:20:03.847591 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-5xkbm" Mar 19 12:20:03.857609 master-0 kubenswrapper[31830]: I0319 12:20:03.857550 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 19 12:20:03.870926 master-0 kubenswrapper[31830]: I0319 12:20:03.870881 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 19 12:20:03.925683 master-0 kubenswrapper[31830]: I0319 12:20:03.925644 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 19 12:20:03.965858 master-0 kubenswrapper[31830]: I0319 12:20:03.965789 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 19 12:20:04.001968 master-0 kubenswrapper[31830]: I0319 12:20:04.001922 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 19 12:20:04.036661 master-0 kubenswrapper[31830]: I0319 12:20:04.036615 31830 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 19 12:20:04.057729 master-0 kubenswrapper[31830]: I0319 12:20:04.057681 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 19 12:20:04.091165 master-0 kubenswrapper[31830]: I0319 12:20:04.091089 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 19 12:20:04.094149 master-0 kubenswrapper[31830]: I0319 12:20:04.093963 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 19 12:20:04.102486 master-0 kubenswrapper[31830]: I0319 12:20:04.102361 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 19 12:20:04.119076 master-0 kubenswrapper[31830]: I0319 12:20:04.119015 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 19 12:20:04.219333 master-0 kubenswrapper[31830]: I0319 12:20:04.219278 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_09672015532ae9d1d74ae4d426cd904b/kube-controller-manager/2.log" Mar 19 12:20:04.220459 master-0 kubenswrapper[31830]: I0319 12:20:04.220265 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 19 12:20:04.220459 master-0 kubenswrapper[31830]: I0319 12:20:04.220383 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"09672015532ae9d1d74ae4d426cd904b","Type":"ContainerStarted","Data":"47fbcc830547b61bd29f055979e2109f1293c920ca05c188650fe3665f2e7c8f"} Mar 19 12:20:04.236659 master-0 kubenswrapper[31830]: I0319 12:20:04.236605 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 19 12:20:04.281263 master-0 kubenswrapper[31830]: I0319 12:20:04.281201 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 19 12:20:04.365609 master-0 kubenswrapper[31830]: I0319 12:20:04.365483 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-lcwzg" Mar 19 12:20:04.438536 master-0 kubenswrapper[31830]: I0319 12:20:04.438500 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Mar 19 12:20:04.499691 master-0 kubenswrapper[31830]: I0319 12:20:04.499636 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 19 12:20:04.603666 master-0 kubenswrapper[31830]: I0319 12:20:04.603633 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 19 12:20:04.665563 master-0 kubenswrapper[31830]: I0319 12:20:04.665152 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 19 12:20:04.678950 master-0 kubenswrapper[31830]: I0319 12:20:04.676346 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-6flh6" Mar 19 
12:20:04.703994 master-0 kubenswrapper[31830]: I0319 12:20:04.703893 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 19 12:20:04.723821 master-0 kubenswrapper[31830]: I0319 12:20:04.723726 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 19 12:20:04.749121 master-0 kubenswrapper[31830]: I0319 12:20:04.749042 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 19 12:20:04.762951 master-0 kubenswrapper[31830]: I0319 12:20:04.762896 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 19 12:20:04.773359 master-0 kubenswrapper[31830]: I0319 12:20:04.773286 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 19 12:20:04.789294 master-0 kubenswrapper[31830]: I0319 12:20:04.789233 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 19 12:20:04.851596 master-0 kubenswrapper[31830]: I0319 12:20:04.851508 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 19 12:20:04.859142 master-0 kubenswrapper[31830]: I0319 12:20:04.859092 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 19 12:20:04.944509 master-0 kubenswrapper[31830]: I0319 12:20:04.944389 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Mar 19 12:20:05.133993 master-0 kubenswrapper[31830]: I0319 12:20:05.133940 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 19 12:20:05.229079 master-0 kubenswrapper[31830]: I0319 12:20:05.229018 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 19 12:20:05.255400 master-0 kubenswrapper[31830]: I0319 12:20:05.255345 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 19 12:20:05.268510 master-0 kubenswrapper[31830]: I0319 12:20:05.268448 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 19 12:20:05.346950 master-0 kubenswrapper[31830]: I0319 12:20:05.346859 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 19 12:20:05.349707 master-0 kubenswrapper[31830]: I0319 12:20:05.349666 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 19 12:20:05.376088 master-0 kubenswrapper[31830]: I0319 12:20:05.376034 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Mar 19 12:20:05.394158 master-0 kubenswrapper[31830]: I0319 12:20:05.393428 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Mar 19 12:20:05.422546 master-0 kubenswrapper[31830]: I0319 12:20:05.422483 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Mar 19 12:20:05.439671 master-0 
kubenswrapper[31830]: I0319 12:20:05.439624 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 19 12:20:05.678249 master-0 kubenswrapper[31830]: I0319 12:20:05.678120 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 19 12:20:05.775109 master-0 kubenswrapper[31830]: I0319 12:20:05.775028 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 19 12:20:05.869926 master-0 kubenswrapper[31830]: I0319 12:20:05.869865 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 19 12:20:05.907691 master-0 kubenswrapper[31830]: I0319 12:20:05.907568 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 19 12:20:06.046334 master-0 kubenswrapper[31830]: I0319 12:20:06.046272 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Mar 19 12:20:06.164516 master-0 kubenswrapper[31830]: I0319 12:20:06.164449 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 19 12:20:06.290993 master-0 kubenswrapper[31830]: I0319 12:20:06.290946 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 19 12:20:06.318331 master-0 kubenswrapper[31830]: I0319 12:20:06.318212 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 19 12:20:06.328426 master-0 kubenswrapper[31830]: I0319 12:20:06.328387 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 19 12:20:06.411183 master-0 kubenswrapper[31830]: I0319 12:20:06.411144 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 19 12:20:06.443787 master-0 kubenswrapper[31830]: I0319 12:20:06.443755 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 19 12:20:06.523369 master-0 kubenswrapper[31830]: I0319 12:20:06.523315 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 19 12:20:06.637727 master-0 kubenswrapper[31830]: I0319 12:20:06.637581 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-mc2cj" Mar 19 12:20:06.638820 master-0 kubenswrapper[31830]: I0319 12:20:06.638771 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 19 12:20:06.749891 master-0 kubenswrapper[31830]: I0319 12:20:06.749835 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-77kwj" Mar 19 12:20:06.770017 master-0 kubenswrapper[31830]: I0319 12:20:06.769384 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 19 12:20:06.779947 master-0 kubenswrapper[31830]: I0319 12:20:06.779744 31830 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 19 12:20:06.800952 master-0 kubenswrapper[31830]: I0319 12:20:06.800590 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 19 12:20:06.852682 master-0 kubenswrapper[31830]: I0319 12:20:06.852600 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 19 12:20:06.879597 master-0 kubenswrapper[31830]: I0319 12:20:06.879417 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 19 12:20:06.970868 master-0 kubenswrapper[31830]: I0319 12:20:06.970786 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 19 12:20:06.984593 master-0 kubenswrapper[31830]: I0319 12:20:06.984548 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Mar 19 12:20:06.995835 master-0 kubenswrapper[31830]: I0319 12:20:06.995783 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 19 12:20:07.151457 master-0 kubenswrapper[31830]: I0319 12:20:07.151385 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 19 12:20:07.174760 master-0 kubenswrapper[31830]: I0319 12:20:07.174704 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 19 12:20:07.182769 master-0 kubenswrapper[31830]: I0319 12:20:07.182723 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 19 12:20:07.207678 master-0 kubenswrapper[31830]: I0319 12:20:07.207618 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Mar 19 12:20:07.207678 master-0 kubenswrapper[31830]: I0319 12:20:07.207618 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 19 12:20:07.254407 master-0 kubenswrapper[31830]: I0319 12:20:07.254299 31830 patch_prober.go:28] interesting pod/console-b5f5fdd67-r4lxc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" start-of-body= Mar 19 12:20:07.254407 master-0 kubenswrapper[31830]: I0319 12:20:07.254359 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b5f5fdd67-r4lxc" podUID="8f224cab-e321-4e24-83bc-f99242f971b0" containerName="console" probeResult="failure" output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" Mar 19 12:20:07.272853 master-0 kubenswrapper[31830]: I0319 12:20:07.272810 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 19 12:20:07.287695 master-0 kubenswrapper[31830]: I0319 12:20:07.287668 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-ww9m4" Mar 19 12:20:07.330548 master-0 kubenswrapper[31830]: I0319 12:20:07.330495 31830 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Mar 19 12:20:07.393108 master-0 kubenswrapper[31830]: I0319 12:20:07.393009 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 19 12:20:07.402136 master-0 kubenswrapper[31830]: I0319 12:20:07.402072 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 19 12:20:07.455525 master-0 kubenswrapper[31830]: I0319 12:20:07.455446 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 19 12:20:07.502085 master-0 kubenswrapper[31830]: I0319 12:20:07.502005 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 19 12:20:07.596739 master-0 kubenswrapper[31830]: I0319 12:20:07.596611 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 19 12:20:07.624651 master-0 kubenswrapper[31830]: I0319 12:20:07.624569 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 19 12:20:07.626017 master-0 kubenswrapper[31830]: I0319 12:20:07.625985 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 19 12:20:07.712716 master-0 kubenswrapper[31830]: I0319 12:20:07.712642 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-lcm2r" Mar 19 12:20:07.717363 master-0 kubenswrapper[31830]: I0319 12:20:07.717335 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:20:07.734201 master-0 kubenswrapper[31830]: I0319 12:20:07.734168 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 19 12:20:07.736440 master-0 kubenswrapper[31830]: I0319 12:20:07.736356 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 19 12:20:07.749066 master-0 kubenswrapper[31830]: I0319 12:20:07.749015 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 19 12:20:07.779079 master-0 kubenswrapper[31830]: I0319 12:20:07.778995 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Mar 19 12:20:07.790454 master-0 kubenswrapper[31830]: I0319 12:20:07.790417 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 19 12:20:07.853500 master-0 kubenswrapper[31830]: I0319 12:20:07.853389 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 19 12:20:07.901965 master-0 kubenswrapper[31830]: I0319 12:20:07.901905 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 19 12:20:07.968930 master-0 kubenswrapper[31830]: I0319 12:20:07.968840 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 19 12:20:08.016869 master-0 kubenswrapper[31830]: I0319 12:20:08.016819 31830 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 19 12:20:08.116957 master-0 kubenswrapper[31830]: I0319 12:20:08.116786 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Mar 19 12:20:08.123844 master-0 kubenswrapper[31830]: I0319 12:20:08.123786 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Mar 19 12:20:08.137557 master-0 kubenswrapper[31830]: I0319 12:20:08.137312 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 19 12:20:08.141998 master-0 kubenswrapper[31830]: I0319 12:20:08.141948 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 19 12:20:08.193200 master-0 kubenswrapper[31830]: I0319 12:20:08.193114 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 19 12:20:08.210271 master-0 kubenswrapper[31830]: I0319 12:20:08.210154 31830 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 19 12:20:08.212048 master-0 kubenswrapper[31830]: I0319 12:20:08.211984 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 19 12:20:08.220909 master-0 kubenswrapper[31830]: I0319 12:20:08.220764 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 19 12:20:08.223821 master-0 kubenswrapper[31830]: I0319 12:20:08.223753 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Mar 19 12:20:08.246427 master-0 kubenswrapper[31830]: I0319 12:20:08.246368 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 19 12:20:08.327559 master-0 kubenswrapper[31830]: I0319 12:20:08.327486 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 19 12:20:08.356770 master-0 kubenswrapper[31830]: I0319 12:20:08.356713 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 19 12:20:08.401318 master-0 kubenswrapper[31830]: I0319 12:20:08.401194 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 19 12:20:08.432317 master-0 kubenswrapper[31830]: I0319 12:20:08.432255 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 19 12:20:08.464083 master-0 kubenswrapper[31830]: I0319 12:20:08.464016 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 19 12:20:08.471614 master-0 kubenswrapper[31830]: I0319 12:20:08.467521 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 19 12:20:08.489674 master-0 kubenswrapper[31830]: I0319 12:20:08.489590 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 19 12:20:08.588747 master-0 kubenswrapper[31830]: I0319 12:20:08.588704 
31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 19 12:20:08.600486 master-0 kubenswrapper[31830]: I0319 12:20:08.600426 31830 patch_prober.go:28] interesting pod/console-575589487f-9nhq4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Mar 19 12:20:08.600711 master-0 kubenswrapper[31830]: I0319 12:20:08.600493 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-575589487f-9nhq4" podUID="0a1dfc0b-250d-465f-a075-f088f5725873" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Mar 19 12:20:08.625856 master-0 kubenswrapper[31830]: I0319 12:20:08.625515 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 19 12:20:08.642732 master-0 kubenswrapper[31830]: I0319 12:20:08.642682 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 19 12:20:08.679882 master-0 kubenswrapper[31830]: I0319 12:20:08.679763 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Mar 19 12:20:08.760918 master-0 kubenswrapper[31830]: I0319 12:20:08.760773 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 19 12:20:08.766472 master-0 kubenswrapper[31830]: I0319 12:20:08.766375 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 19 12:20:08.964685 master-0 kubenswrapper[31830]: I0319 12:20:08.964550 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 19 12:20:08.989299 master-0 kubenswrapper[31830]: I0319 12:20:08.989222 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 19 12:20:09.049926 master-0 kubenswrapper[31830]: I0319 12:20:09.049853 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 19 12:20:09.060781 master-0 kubenswrapper[31830]: I0319 12:20:09.060731 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 19 12:20:09.069411 master-0 kubenswrapper[31830]: I0319 12:20:09.069381 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 19 12:20:09.069640 master-0 kubenswrapper[31830]: I0319 12:20:09.069402 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 19 12:20:09.140789 master-0 kubenswrapper[31830]: I0319 12:20:09.140749 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 19 12:20:09.154299 master-0 kubenswrapper[31830]: I0319 12:20:09.154261 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 19 12:20:09.197256 master-0 kubenswrapper[31830]: I0319 12:20:09.197197 31830 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 19 12:20:09.256901 master-0 kubenswrapper[31830]: I0319 12:20:09.256841 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-h5t8s" Mar 19 12:20:09.258632 master-0 kubenswrapper[31830]: I0319 12:20:09.258600 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 19 12:20:09.300814 master-0 kubenswrapper[31830]: I0319 12:20:09.300742 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 19 12:20:09.308367 master-0 kubenswrapper[31830]: I0319 12:20:09.308329 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 19 12:20:09.322909 master-0 kubenswrapper[31830]: I0319 12:20:09.322859 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 19 12:20:09.351200 master-0 kubenswrapper[31830]: I0319 12:20:09.351154 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 19 12:20:09.570140 master-0 kubenswrapper[31830]: I0319 12:20:09.570014 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 19 12:20:09.607107 master-0 kubenswrapper[31830]: I0319 12:20:09.607048 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 19 12:20:09.715538 master-0 kubenswrapper[31830]: I0319 12:20:09.715436 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Mar 19 12:20:09.751126 master-0 kubenswrapper[31830]: I0319 12:20:09.751059 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 19 12:20:09.819510 master-0 kubenswrapper[31830]: I0319 12:20:09.819462 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 19 12:20:09.825975 master-0 kubenswrapper[31830]: I0319 12:20:09.825384 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 19 12:20:09.925342 master-0 kubenswrapper[31830]: I0319 12:20:09.925307 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-mzg7v" Mar 19 12:20:09.958140 master-0 kubenswrapper[31830]: I0319 12:20:09.958095 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 19 12:20:09.959169 master-0 kubenswrapper[31830]: I0319 12:20:09.959146 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 19 12:20:09.959347 master-0 kubenswrapper[31830]: I0319 12:20:09.959313 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-cmchf" Mar 19 12:20:10.182699 master-0 kubenswrapper[31830]: I0319 12:20:10.182561 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 19 12:20:10.236561 master-0 kubenswrapper[31830]: I0319 12:20:10.236505 31830 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 19 12:20:10.305754 master-0 kubenswrapper[31830]: I0319 12:20:10.305688 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 19 12:20:10.351883 master-0 kubenswrapper[31830]: I0319 12:20:10.351820 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 19 12:20:10.379844 master-0 kubenswrapper[31830]: I0319 12:20:10.379755 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 19 12:20:10.422700 master-0 kubenswrapper[31830]: I0319 12:20:10.422637 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 19 12:20:10.488232 master-0 kubenswrapper[31830]: I0319 12:20:10.488172 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 19 12:20:10.531316 master-0 kubenswrapper[31830]: I0319 12:20:10.531233 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-aiar26mnr5utb" Mar 19 12:20:10.544370 master-0 kubenswrapper[31830]: I0319 12:20:10.544300 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 19 12:20:10.573865 master-0 kubenswrapper[31830]: I0319 12:20:10.573757 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 19 12:20:10.607160 master-0 kubenswrapper[31830]: I0319 12:20:10.607102 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 19 12:20:10.652948 master-0 kubenswrapper[31830]: I0319 12:20:10.652909 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 19 12:20:10.714211 master-0 kubenswrapper[31830]: I0319 12:20:10.714148 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 19 12:20:10.810366 master-0 kubenswrapper[31830]: I0319 12:20:10.810240 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 19 12:20:10.878002 master-0 kubenswrapper[31830]: I0319 12:20:10.877962 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 19 12:20:11.035236 master-0 kubenswrapper[31830]: I0319 12:20:11.035153 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 19 12:20:11.169551 master-0 kubenswrapper[31830]: I0319 12:20:11.169375 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 19 12:20:11.215519 master-0 kubenswrapper[31830]: I0319 12:20:11.215467 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 19 12:20:11.321768 master-0 kubenswrapper[31830]: I0319 12:20:11.320557 31830 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 19 12:20:11.330165 master-0 kubenswrapper[31830]: I0319 12:20:11.330129 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 19 12:20:11.387902 master-0 kubenswrapper[31830]: I0319 12:20:11.387650 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 19 12:20:11.388851 master-0 kubenswrapper[31830]: I0319 12:20:11.388816 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 19 12:20:11.480742 master-0 kubenswrapper[31830]: I0319 12:20:11.480680 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 19 12:20:11.489343 master-0 kubenswrapper[31830]: I0319 12:20:11.489285 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-xbkxv" Mar 19 12:20:11.490536 master-0 kubenswrapper[31830]: I0319 12:20:11.490477 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 19 12:20:11.516358 master-0 kubenswrapper[31830]: I0319 12:20:11.516261 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 19 12:20:11.535105 master-0 kubenswrapper[31830]: I0319 12:20:11.535020 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 19 12:20:11.546327 master-0 kubenswrapper[31830]: I0319 12:20:11.546244 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 19 12:20:11.608068 master-0 kubenswrapper[31830]: I0319 12:20:11.608011 31830 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 19 12:20:11.614942 master-0 kubenswrapper[31830]: I0319 12:20:11.614851 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=43.614830463 podStartE2EDuration="43.614830463s" podCreationTimestamp="2026-03-19 12:19:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:19:47.873871334 +0000 UTC m=+326.422832038" watchObservedRunningTime="2026-03-19 12:20:11.614830463 +0000 UTC m=+350.163791187" Mar 19 12:20:11.615947 master-0 kubenswrapper[31830]: I0319 12:20:11.615909 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 19 12:20:11.616080 master-0 kubenswrapper[31830]: I0319 12:20:11.615962 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 19 12:20:11.616441 master-0 kubenswrapper[31830]: I0319 12:20:11.616403 31830 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="d736f0ce-242a-43e5-aabf-b298e1959069" Mar 19 12:20:11.616441 master-0 kubenswrapper[31830]: I0319 12:20:11.616437 31830 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="d736f0ce-242a-43e5-aabf-b298e1959069" Mar 19 12:20:11.623394 master-0 kubenswrapper[31830]: I0319 12:20:11.623343 
31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 12:20:11.645921 master-0 kubenswrapper[31830]: I0319 12:20:11.643712 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 19 12:20:11.659870 master-0 kubenswrapper[31830]: I0319 12:20:11.657309 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=23.657283861 podStartE2EDuration="23.657283861s" podCreationTimestamp="2026-03-19 12:19:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:20:11.644084518 +0000 UTC m=+350.193045322" watchObservedRunningTime="2026-03-19 12:20:11.657283861 +0000 UTC m=+350.206244565" Mar 19 12:20:11.693473 master-0 kubenswrapper[31830]: I0319 12:20:11.693419 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Mar 19 12:20:11.720235 master-0 kubenswrapper[31830]: I0319 12:20:11.720161 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 19 12:20:11.739885 master-0 kubenswrapper[31830]: I0319 12:20:11.739738 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 19 12:20:11.816342 master-0 kubenswrapper[31830]: I0319 12:20:11.816264 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 19 12:20:11.845144 master-0 kubenswrapper[31830]: I0319 12:20:11.845096 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Mar 19 12:20:11.857383 master-0 kubenswrapper[31830]: I0319 12:20:11.857314 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 19 12:20:11.940756 master-0 kubenswrapper[31830]: I0319 12:20:11.940679 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Mar 19 12:20:12.005991 master-0 kubenswrapper[31830]: I0319 12:20:12.005788 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Mar 19 12:20:12.009518 master-0 kubenswrapper[31830]: I0319 12:20:12.009485 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 19 12:20:12.068252 master-0 kubenswrapper[31830]: I0319 12:20:12.068157 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Mar 19 12:20:12.201663 master-0 kubenswrapper[31830]: I0319 12:20:12.201602 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 19 12:20:12.297567 master-0 kubenswrapper[31830]: I0319 12:20:12.297346 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 19 12:20:12.345956 master-0 kubenswrapper[31830]: I0319 12:20:12.344959 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 19 12:20:12.353949 master-0 
kubenswrapper[31830]: I0319 12:20:12.353918 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 19 12:20:12.360889 master-0 kubenswrapper[31830]: I0319 12:20:12.360856 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 19 12:20:12.373955 master-0 kubenswrapper[31830]: I0319 12:20:12.373920 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 19 12:20:12.381917 master-0 kubenswrapper[31830]: I0319 12:20:12.381858 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:20:12.383463 master-0 kubenswrapper[31830]: I0319 12:20:12.383301 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 19 12:20:12.390924 master-0 kubenswrapper[31830]: I0319 12:20:12.390879 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:20:12.433562 master-0 kubenswrapper[31830]: I0319 12:20:12.433523 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 19 12:20:12.437924 master-0 kubenswrapper[31830]: I0319 12:20:12.437856 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-shkfs" Mar 19 12:20:12.479207 master-0 kubenswrapper[31830]: I0319 12:20:12.479154 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 19 12:20:12.584367 master-0 kubenswrapper[31830]: I0319 12:20:12.584258 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 19 12:20:12.621978 master-0 kubenswrapper[31830]: I0319 12:20:12.621925 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Mar 19 12:20:12.636485 master-0 kubenswrapper[31830]: I0319 12:20:12.636449 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 19 12:20:12.687429 master-0 kubenswrapper[31830]: I0319 12:20:12.687341 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 19 12:20:12.704279 master-0 kubenswrapper[31830]: I0319 12:20:12.704033 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 19 12:20:12.720710 master-0 kubenswrapper[31830]: I0319 12:20:12.720641 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 19 12:20:12.734622 master-0 kubenswrapper[31830]: I0319 12:20:12.734555 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 19 12:20:12.836047 master-0 kubenswrapper[31830]: I0319 12:20:12.835362 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 19 12:20:12.865710 master-0 kubenswrapper[31830]: I0319 12:20:12.865616 31830 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 19 12:20:12.901404 master-0 kubenswrapper[31830]: I0319 12:20:12.901324 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Mar 19 12:20:12.962531 master-0 kubenswrapper[31830]: I0319 12:20:12.962473 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 19 12:20:12.965166 master-0 kubenswrapper[31830]: I0319 12:20:12.965118 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 19 12:20:13.012042 master-0 kubenswrapper[31830]: I0319 12:20:13.008988 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 19 12:20:13.034819 master-0 kubenswrapper[31830]: I0319 12:20:13.034770 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-9wk7c" Mar 19 12:20:13.101217 master-0 kubenswrapper[31830]: I0319 12:20:13.101098 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 19 12:20:13.135624 master-0 kubenswrapper[31830]: I0319 12:20:13.135563 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4kjzz" Mar 19 12:20:13.196717 master-0 kubenswrapper[31830]: I0319 12:20:13.196630 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 19 12:20:13.252334 master-0 kubenswrapper[31830]: I0319 12:20:13.252241 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 19 12:20:13.296300 master-0 kubenswrapper[31830]: I0319 12:20:13.296217 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:20:13.394347 master-0 kubenswrapper[31830]: I0319 12:20:13.394257 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 19 12:20:13.426726 master-0 kubenswrapper[31830]: I0319 12:20:13.426641 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 19 12:20:13.431407 master-0 kubenswrapper[31830]: I0319 12:20:13.431365 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 19 12:20:13.511402 master-0 kubenswrapper[31830]: I0319 12:20:13.511314 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-pcp8m" Mar 19 12:20:13.547157 master-0 kubenswrapper[31830]: I0319 12:20:13.547122 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 19 12:20:13.880355 master-0 kubenswrapper[31830]: I0319 12:20:13.880238 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 19 12:20:13.938949 master-0 kubenswrapper[31830]: I0319 12:20:13.938879 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 19 12:20:14.021558 master-0 kubenswrapper[31830]: 
I0319 12:20:14.021312 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 19 12:20:14.089125 master-0 kubenswrapper[31830]: I0319 12:20:14.089065 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 19 12:20:14.203923 master-0 kubenswrapper[31830]: I0319 12:20:14.203845 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 19 12:20:14.298154 master-0 kubenswrapper[31830]: I0319 12:20:14.298065 31830 generic.go:334] "Generic (PLEG): container finished" podID="6db3fcbe-0dbf-464f-944b-62427173c8d3" containerID="eeacdb60f8da61f85096f789c56cd94fccc18791a62d95df61660195a985a6a0" exitCode=0 Mar 19 12:20:14.299494 master-0 kubenswrapper[31830]: I0319 12:20:14.299443 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-86889676f6-phlgd" event={"ID":"6db3fcbe-0dbf-464f-944b-62427173c8d3","Type":"ContainerDied","Data":"eeacdb60f8da61f85096f789c56cd94fccc18791a62d95df61660195a985a6a0"} Mar 19 12:20:14.402165 master-0 kubenswrapper[31830]: I0319 12:20:14.402109 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 19 12:20:14.432941 master-0 kubenswrapper[31830]: I0319 12:20:14.432873 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 19 12:20:14.459305 master-0 kubenswrapper[31830]: I0319 12:20:14.459271 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 19 12:20:14.470949 master-0 kubenswrapper[31830]: I0319 12:20:14.470925 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 19 12:20:14.496467 master-0 kubenswrapper[31830]: I0319 12:20:14.496436 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 19 12:20:14.524752 master-0 kubenswrapper[31830]: I0319 12:20:14.524677 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-86889676f6-phlgd" Mar 19 12:20:14.580872 master-0 kubenswrapper[31830]: I0319 12:20:14.580408 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/6db3fcbe-0dbf-464f-944b-62427173c8d3-metrics-server-audit-profiles\") pod \"6db3fcbe-0dbf-464f-944b-62427173c8d3\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " Mar 19 12:20:14.580872 master-0 kubenswrapper[31830]: I0319 12:20:14.580576 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6db3fcbe-0dbf-464f-944b-62427173c8d3-configmap-kubelet-serving-ca-bundle\") pod \"6db3fcbe-0dbf-464f-944b-62427173c8d3\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " Mar 19 12:20:14.580872 master-0 kubenswrapper[31830]: I0319 12:20:14.580639 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lllml\" (UniqueName: \"kubernetes.io/projected/6db3fcbe-0dbf-464f-944b-62427173c8d3-kube-api-access-lllml\") pod \"6db3fcbe-0dbf-464f-944b-62427173c8d3\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " Mar 19 12:20:14.581350 master-0 kubenswrapper[31830]: I0319 12:20:14.581101 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-secret-metrics-client-certs\") pod \"6db3fcbe-0dbf-464f-944b-62427173c8d3\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " Mar 19 12:20:14.581350 master-0 kubenswrapper[31830]: I0319 12:20:14.581137 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-secret-metrics-server-tls\") pod \"6db3fcbe-0dbf-464f-944b-62427173c8d3\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " Mar 19 12:20:14.581350 master-0 kubenswrapper[31830]: I0319 12:20:14.581329 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/6db3fcbe-0dbf-464f-944b-62427173c8d3-audit-log\") pod \"6db3fcbe-0dbf-464f-944b-62427173c8d3\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " Mar 19 12:20:14.581476 master-0 kubenswrapper[31830]: I0319 12:20:14.581427 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-client-ca-bundle\") pod \"6db3fcbe-0dbf-464f-944b-62427173c8d3\" (UID: \"6db3fcbe-0dbf-464f-944b-62427173c8d3\") " Mar 19 12:20:14.581866 master-0 kubenswrapper[31830]: I0319 12:20:14.581789 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6db3fcbe-0dbf-464f-944b-62427173c8d3-metrics-server-audit-profiles" (OuterVolumeSpecName: "metrics-server-audit-profiles") pod "6db3fcbe-0dbf-464f-944b-62427173c8d3" (UID: "6db3fcbe-0dbf-464f-944b-62427173c8d3"). InnerVolumeSpecName "metrics-server-audit-profiles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:20:14.581931 master-0 kubenswrapper[31830]: I0319 12:20:14.581847 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6db3fcbe-0dbf-464f-944b-62427173c8d3-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "6db3fcbe-0dbf-464f-944b-62427173c8d3" (UID: "6db3fcbe-0dbf-464f-944b-62427173c8d3"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:20:14.582552 master-0 kubenswrapper[31830]: I0319 12:20:14.582506 31830 reconciler_common.go:293] "Volume detached for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/6db3fcbe-0dbf-464f-944b-62427173c8d3-metrics-server-audit-profiles\") on node \"master-0\" DevicePath \"\"" Mar 19 12:20:14.582552 master-0 kubenswrapper[31830]: I0319 12:20:14.582540 31830 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6db3fcbe-0dbf-464f-944b-62427173c8d3-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:20:14.582696 master-0 kubenswrapper[31830]: I0319 12:20:14.582573 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6db3fcbe-0dbf-464f-944b-62427173c8d3-audit-log" (OuterVolumeSpecName: "audit-log") pod "6db3fcbe-0dbf-464f-944b-62427173c8d3" (UID: "6db3fcbe-0dbf-464f-944b-62427173c8d3"). InnerVolumeSpecName "audit-log". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 12:20:14.585420 master-0 kubenswrapper[31830]: I0319 12:20:14.585379 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "6db3fcbe-0dbf-464f-944b-62427173c8d3" (UID: "6db3fcbe-0dbf-464f-944b-62427173c8d3"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:20:14.585754 master-0 kubenswrapper[31830]: I0319 12:20:14.585720 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-client-ca-bundle" (OuterVolumeSpecName: "client-ca-bundle") pod "6db3fcbe-0dbf-464f-944b-62427173c8d3" (UID: "6db3fcbe-0dbf-464f-944b-62427173c8d3"). InnerVolumeSpecName "client-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:20:14.586033 master-0 kubenswrapper[31830]: I0319 12:20:14.585996 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-secret-metrics-server-tls" (OuterVolumeSpecName: "secret-metrics-server-tls") pod "6db3fcbe-0dbf-464f-944b-62427173c8d3" (UID: "6db3fcbe-0dbf-464f-944b-62427173c8d3"). InnerVolumeSpecName "secret-metrics-server-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:20:14.587071 master-0 kubenswrapper[31830]: I0319 12:20:14.587019 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6db3fcbe-0dbf-464f-944b-62427173c8d3-kube-api-access-lllml" (OuterVolumeSpecName: "kube-api-access-lllml") pod "6db3fcbe-0dbf-464f-944b-62427173c8d3" (UID: "6db3fcbe-0dbf-464f-944b-62427173c8d3"). InnerVolumeSpecName "kube-api-access-lllml". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:20:14.683910 master-0 kubenswrapper[31830]: I0319 12:20:14.683850 31830 reconciler_common.go:293] "Volume detached for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/6db3fcbe-0dbf-464f-944b-62427173c8d3-audit-log\") on node \"master-0\" DevicePath \"\"" Mar 19 12:20:14.683910 master-0 kubenswrapper[31830]: I0319 12:20:14.683896 31830 reconciler_common.go:293] "Volume detached for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-client-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:20:14.683910 master-0 kubenswrapper[31830]: I0319 12:20:14.683911 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lllml\" (UniqueName: \"kubernetes.io/projected/6db3fcbe-0dbf-464f-944b-62427173c8d3-kube-api-access-lllml\") on node \"master-0\" DevicePath \"\"" Mar 19 12:20:14.684249 master-0 kubenswrapper[31830]: I0319 12:20:14.683923 31830 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\"" Mar 19 12:20:14.684249 master-0 kubenswrapper[31830]: I0319 12:20:14.683937 31830 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/6db3fcbe-0dbf-464f-944b-62427173c8d3-secret-metrics-server-tls\") on node \"master-0\" DevicePath \"\"" Mar 19 12:20:14.723653 master-0 kubenswrapper[31830]: I0319 12:20:14.723247 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 19 12:20:14.770886 master-0 kubenswrapper[31830]: I0319 12:20:14.770845 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Mar 19 12:20:14.786843 master-0 kubenswrapper[31830]: I0319 12:20:14.786734 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 19 12:20:14.856882 master-0 kubenswrapper[31830]: I0319 12:20:14.856757 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 19 12:20:15.087636 master-0 kubenswrapper[31830]: I0319 12:20:15.087556 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 19 12:20:15.186195 master-0 kubenswrapper[31830]: I0319 12:20:15.186045 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-djzws" Mar 19 12:20:15.200570 master-0 kubenswrapper[31830]: I0319 12:20:15.200482 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config" Mar 19 12:20:15.205645 master-0 kubenswrapper[31830]: I0319 12:20:15.205600 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 19 12:20:15.218092 master-0 kubenswrapper[31830]: I0319 12:20:15.218061 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 19 12:20:15.271050 master-0 kubenswrapper[31830]: I0319 12:20:15.270985 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 19 12:20:15.313456 
master-0 kubenswrapper[31830]: I0319 12:20:15.313394 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-86889676f6-phlgd" event={"ID":"6db3fcbe-0dbf-464f-944b-62427173c8d3","Type":"ContainerDied","Data":"be807ecce9aec0f7633eaae2ed5203cb82f342ed739dc26f098d55766e987b78"} Mar 19 12:20:15.313456 master-0 kubenswrapper[31830]: I0319 12:20:15.313460 31830 scope.go:117] "RemoveContainer" containerID="eeacdb60f8da61f85096f789c56cd94fccc18791a62d95df61660195a985a6a0" Mar 19 12:20:15.313742 master-0 kubenswrapper[31830]: I0319 12:20:15.313497 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-86889676f6-phlgd" Mar 19 12:20:15.363620 master-0 kubenswrapper[31830]: I0319 12:20:15.363560 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-86889676f6-phlgd"] Mar 19 12:20:15.370232 master-0 kubenswrapper[31830]: I0319 12:20:15.370164 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/metrics-server-86889676f6-phlgd"] Mar 19 12:20:15.454790 master-0 kubenswrapper[31830]: I0319 12:20:15.454636 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 19 12:20:15.515060 master-0 kubenswrapper[31830]: I0319 12:20:15.514977 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 19 12:20:15.539228 master-0 kubenswrapper[31830]: I0319 12:20:15.539160 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 19 12:20:15.623690 master-0 kubenswrapper[31830]: I0319 12:20:15.623614 31830 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 19 12:20:15.686522 master-0 kubenswrapper[31830]: I0319 12:20:15.686439 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6db3fcbe-0dbf-464f-944b-62427173c8d3" path="/var/lib/kubelet/pods/6db3fcbe-0dbf-464f-944b-62427173c8d3/volumes" Mar 19 12:20:15.703750 master-0 kubenswrapper[31830]: I0319 12:20:15.703700 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 19 12:20:15.792496 master-0 kubenswrapper[31830]: I0319 12:20:15.792416 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 19 12:20:15.804324 master-0 kubenswrapper[31830]: I0319 12:20:15.804270 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 19 12:20:15.879573 master-0 kubenswrapper[31830]: I0319 12:20:15.879515 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 19 12:20:15.901304 master-0 kubenswrapper[31830]: I0319 12:20:15.901140 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 19 12:20:15.937727 master-0 kubenswrapper[31830]: I0319 12:20:15.937669 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 19 12:20:16.072803 master-0 kubenswrapper[31830]: I0319 12:20:16.072479 31830 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 19 12:20:16.148994 master-0 kubenswrapper[31830]: I0319 12:20:16.148884 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 19 12:20:16.380711 master-0 kubenswrapper[31830]: I0319 12:20:16.380571 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 19 12:20:17.254197 master-0 kubenswrapper[31830]: I0319 12:20:17.254135 31830 patch_prober.go:28] interesting pod/console-b5f5fdd67-r4lxc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" start-of-body= Mar 19 12:20:17.254889 master-0 kubenswrapper[31830]: I0319 12:20:17.254200 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b5f5fdd67-r4lxc" podUID="8f224cab-e321-4e24-83bc-f99242f971b0" containerName="console" probeResult="failure" output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" Mar 19 12:20:17.270739 master-0 kubenswrapper[31830]: I0319 12:20:17.270666 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 19 12:20:17.498562 master-0 kubenswrapper[31830]: I0319 12:20:17.498505 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 19 12:20:17.793213 master-0 kubenswrapper[31830]: I0319 12:20:17.793159 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 19 12:20:18.301108 master-0 kubenswrapper[31830]: I0319 12:20:18.301060 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Mar 19 12:20:18.599170 master-0 kubenswrapper[31830]: I0319 12:20:18.598982 31830 patch_prober.go:28] interesting pod/console-575589487f-9nhq4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Mar 19 12:20:18.599170 master-0 kubenswrapper[31830]: I0319 12:20:18.599062 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-575589487f-9nhq4" podUID="0a1dfc0b-250d-465f-a075-f088f5725873" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Mar 19 12:20:21.746546 master-0 kubenswrapper[31830]: I0319 12:20:21.746489 31830 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 19 12:20:21.747100 master-0 kubenswrapper[31830]: I0319 12:20:21.746686 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" containerName="startup-monitor" containerID="cri-o://2e22ab1d258cd361a42b261f341cbe7d0efe71185a8f2bbd65df6e1954fbcb33" gracePeriod=5 Mar 19 12:20:26.909781 master-0 kubenswrapper[31830]: I0319 12:20:26.909716 31830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_ebbfbf2b56df0323ba118d68bfdad8b9/startup-monitor/0.log" Mar 19 12:20:26.910373 master-0 kubenswrapper[31830]: I0319 12:20:26.909873 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:20:26.929706 master-0 kubenswrapper[31830]: E0319 12:20:26.929649 31830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebbfbf2b56df0323ba118d68bfdad8b9.slice/crio-2e22ab1d258cd361a42b261f341cbe7d0efe71185a8f2bbd65df6e1954fbcb33.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebbfbf2b56df0323ba118d68bfdad8b9.slice/crio-conmon-2e22ab1d258cd361a42b261f341cbe7d0efe71185a8f2bbd65df6e1954fbcb33.scope\": RecentStats: unable to find data in memory cache]" Mar 19 12:20:27.072445 master-0 kubenswrapper[31830]: I0319 12:20:27.072256 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests\") pod \"ebbfbf2b56df0323ba118d68bfdad8b9\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " Mar 19 12:20:27.072844 master-0 kubenswrapper[31830]: I0319 12:20:27.072784 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir\") pod \"ebbfbf2b56df0323ba118d68bfdad8b9\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " Mar 19 12:20:27.073067 master-0 kubenswrapper[31830]: I0319 12:20:27.073041 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock\") pod \"ebbfbf2b56df0323ba118d68bfdad8b9\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " Mar 19 12:20:27.073271 master-0 kubenswrapper[31830]: I0319 12:20:27.073246 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir\") pod \"ebbfbf2b56df0323ba118d68bfdad8b9\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " Mar 19 12:20:27.073470 master-0 kubenswrapper[31830]: I0319 12:20:27.073443 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log\") pod \"ebbfbf2b56df0323ba118d68bfdad8b9\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " Mar 19 12:20:27.073717 master-0 kubenswrapper[31830]: I0319 12:20:27.072403 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests" (OuterVolumeSpecName: "manifests") pod "ebbfbf2b56df0323ba118d68bfdad8b9" (UID: "ebbfbf2b56df0323ba118d68bfdad8b9"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:20:27.073843 master-0 kubenswrapper[31830]: I0319 12:20:27.073133 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock" (OuterVolumeSpecName: "var-lock") pod "ebbfbf2b56df0323ba118d68bfdad8b9" (UID: "ebbfbf2b56df0323ba118d68bfdad8b9"). 
InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:20:27.073843 master-0 kubenswrapper[31830]: I0319 12:20:27.073310 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "ebbfbf2b56df0323ba118d68bfdad8b9" (UID: "ebbfbf2b56df0323ba118d68bfdad8b9"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:20:27.073843 master-0 kubenswrapper[31830]: I0319 12:20:27.073545 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log" (OuterVolumeSpecName: "var-log") pod "ebbfbf2b56df0323ba118d68bfdad8b9" (UID: "ebbfbf2b56df0323ba118d68bfdad8b9"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:20:27.074561 master-0 kubenswrapper[31830]: I0319 12:20:27.074531 31830 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests\") on node \"master-0\" DevicePath \"\"" Mar 19 12:20:27.074705 master-0 kubenswrapper[31830]: I0319 12:20:27.074683 31830 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 19 12:20:27.074899 master-0 kubenswrapper[31830]: I0319 12:20:27.074873 31830 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 12:20:27.075041 master-0 kubenswrapper[31830]: I0319 12:20:27.075020 31830 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log\") on node \"master-0\" DevicePath \"\"" Mar 19 12:20:27.078181 master-0 kubenswrapper[31830]: I0319 12:20:27.078096 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "ebbfbf2b56df0323ba118d68bfdad8b9" (UID: "ebbfbf2b56df0323ba118d68bfdad8b9"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:20:27.176936 master-0 kubenswrapper[31830]: I0319 12:20:27.176864 31830 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 12:20:27.258261 master-0 kubenswrapper[31830]: I0319 12:20:27.258117 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-b5f5fdd67-r4lxc" Mar 19 12:20:27.262386 master-0 kubenswrapper[31830]: I0319 12:20:27.262324 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-b5f5fdd67-r4lxc" Mar 19 12:20:27.414380 master-0 kubenswrapper[31830]: I0319 12:20:27.414231 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-575589487f-9nhq4"] Mar 19 12:20:27.431056 master-0 kubenswrapper[31830]: I0319 12:20:27.431005 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_ebbfbf2b56df0323ba118d68bfdad8b9/startup-monitor/0.log" Mar 19 12:20:27.431268 master-0 kubenswrapper[31830]: I0319 12:20:27.431079 31830 generic.go:334] "Generic (PLEG): container finished" podID="ebbfbf2b56df0323ba118d68bfdad8b9" containerID="2e22ab1d258cd361a42b261f341cbe7d0efe71185a8f2bbd65df6e1954fbcb33" exitCode=137 Mar 19 12:20:27.431268 master-0 kubenswrapper[31830]: I0319 12:20:27.431220 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 12:20:27.431268 master-0 kubenswrapper[31830]: I0319 12:20:27.431224 31830 scope.go:117] "RemoveContainer" containerID="2e22ab1d258cd361a42b261f341cbe7d0efe71185a8f2bbd65df6e1954fbcb33" Mar 19 12:20:27.451346 master-0 kubenswrapper[31830]: I0319 12:20:27.451280 31830 scope.go:117] "RemoveContainer" containerID="2e22ab1d258cd361a42b261f341cbe7d0efe71185a8f2bbd65df6e1954fbcb33" Mar 19 12:20:27.451771 master-0 kubenswrapper[31830]: E0319 12:20:27.451729 31830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e22ab1d258cd361a42b261f341cbe7d0efe71185a8f2bbd65df6e1954fbcb33\": container with ID starting with 2e22ab1d258cd361a42b261f341cbe7d0efe71185a8f2bbd65df6e1954fbcb33 not found: ID does not exist" containerID="2e22ab1d258cd361a42b261f341cbe7d0efe71185a8f2bbd65df6e1954fbcb33" Mar 19 12:20:27.451851 master-0 kubenswrapper[31830]: I0319 12:20:27.451772 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e22ab1d258cd361a42b261f341cbe7d0efe71185a8f2bbd65df6e1954fbcb33"} err="failed to get container status \"2e22ab1d258cd361a42b261f341cbe7d0efe71185a8f2bbd65df6e1954fbcb33\": rpc error: code = NotFound desc = could not find container \"2e22ab1d258cd361a42b261f341cbe7d0efe71185a8f2bbd65df6e1954fbcb33\": container with ID starting with 2e22ab1d258cd361a42b261f341cbe7d0efe71185a8f2bbd65df6e1954fbcb33 not found: ID does not exist" Mar 19 12:20:27.695552 master-0 kubenswrapper[31830]: I0319 12:20:27.695321 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" path="/var/lib/kubelet/pods/ebbfbf2b56df0323ba118d68bfdad8b9/volumes" Mar 19 12:20:27.696117 master-0 kubenswrapper[31830]: I0319 12:20:27.696056 31830 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="" Mar 19 12:20:27.716471 master-0 kubenswrapper[31830]: I0319 12:20:27.716384 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 19 12:20:27.716471 master-0 kubenswrapper[31830]: I0319 12:20:27.716442 31830 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="61d6eea7-e053-427f-835f-d307ca9bf036" Mar 19 12:20:27.726093 master-0 kubenswrapper[31830]: I0319 12:20:27.726022 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 19 12:20:27.726093 master-0 kubenswrapper[31830]: I0319 12:20:27.726085 31830 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="61d6eea7-e053-427f-835f-d307ca9bf036" Mar 19 12:20:41.556112 master-0 kubenswrapper[31830]: I0319 12:20:41.556041 31830 generic.go:334] "Generic (PLEG): container finished" podID="b0f5939c-48b1-4d6c-9712-9128a78d603b" containerID="b9abe9cab7461378d1a9d129c7d55c4ae34a94e8d47d80f7732236c8c95d320b" exitCode=0 Mar 19 12:20:41.556112 master-0 kubenswrapper[31830]: I0319 12:20:41.556105 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" event={"ID":"b0f5939c-48b1-4d6c-9712-9128a78d603b","Type":"ContainerDied","Data":"b9abe9cab7461378d1a9d129c7d55c4ae34a94e8d47d80f7732236c8c95d320b"} Mar 19 12:20:41.556940 master-0 kubenswrapper[31830]: I0319 12:20:41.556145 31830 scope.go:117] "RemoveContainer" containerID="3cb3f801dd00591244b19b3ad51ca78e956ed275b4329bac7bcfc1f2f8080cd6" Mar 19 12:20:41.556940 master-0 kubenswrapper[31830]: I0319 12:20:41.556657 31830 scope.go:117] "RemoveContainer" containerID="b9abe9cab7461378d1a9d129c7d55c4ae34a94e8d47d80f7732236c8c95d320b" Mar 19 12:20:41.951361 master-0 kubenswrapper[31830]: I0319 12:20:41.951279 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 12:20:42.566916 master-0 kubenswrapper[31830]: I0319 12:20:42.566835 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" event={"ID":"b0f5939c-48b1-4d6c-9712-9128a78d603b","Type":"ContainerStarted","Data":"830c194a72633242b7db28de53d3866e2e9d7510de74e0843ede7186aedef4f5"} Mar 19 12:20:42.567478 master-0 kubenswrapper[31830]: I0319 12:20:42.567419 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 12:20:42.569779 master-0 kubenswrapper[31830]: I0319 12:20:42.569750 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-pr7gk" Mar 19 12:20:52.450212 master-0 kubenswrapper[31830]: I0319 12:20:52.450156 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-575589487f-9nhq4" podUID="0a1dfc0b-250d-465f-a075-f088f5725873" containerName="console" containerID="cri-o://15e02e8bcdf411a7b55a0689ad33ce3a8e430d3a47cdd9b4d8ebfc49858aed75" gracePeriod=15 Mar 19 12:20:52.677313 master-0 kubenswrapper[31830]: I0319 12:20:52.677268 31830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-575589487f-9nhq4_0a1dfc0b-250d-465f-a075-f088f5725873/console/0.log" Mar 19 12:20:52.677521 master-0 kubenswrapper[31830]: I0319 12:20:52.677317 31830 generic.go:334] "Generic (PLEG): container finished" podID="0a1dfc0b-250d-465f-a075-f088f5725873" containerID="15e02e8bcdf411a7b55a0689ad33ce3a8e430d3a47cdd9b4d8ebfc49858aed75" exitCode=2 Mar 19 12:20:52.677521 master-0 kubenswrapper[31830]: I0319 12:20:52.677345 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-575589487f-9nhq4" event={"ID":"0a1dfc0b-250d-465f-a075-f088f5725873","Type":"ContainerDied","Data":"15e02e8bcdf411a7b55a0689ad33ce3a8e430d3a47cdd9b4d8ebfc49858aed75"} Mar 19 12:20:52.918394 master-0 kubenswrapper[31830]: I0319 12:20:52.918366 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-575589487f-9nhq4_0a1dfc0b-250d-465f-a075-f088f5725873/console/0.log" Mar 19 12:20:52.918653 master-0 kubenswrapper[31830]: I0319 12:20:52.918638 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-575589487f-9nhq4" Mar 19 12:20:53.100924 master-0 kubenswrapper[31830]: I0319 12:20:53.100853 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0a1dfc0b-250d-465f-a075-f088f5725873-service-ca\") pod \"0a1dfc0b-250d-465f-a075-f088f5725873\" (UID: \"0a1dfc0b-250d-465f-a075-f088f5725873\") " Mar 19 12:20:53.101289 master-0 kubenswrapper[31830]: I0319 12:20:53.100958 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0a1dfc0b-250d-465f-a075-f088f5725873-oauth-serving-cert\") pod \"0a1dfc0b-250d-465f-a075-f088f5725873\" (UID: \"0a1dfc0b-250d-465f-a075-f088f5725873\") " Mar 19 12:20:53.101289 master-0 kubenswrapper[31830]: I0319 12:20:53.101053 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0a1dfc0b-250d-465f-a075-f088f5725873-console-oauth-config\") pod \"0a1dfc0b-250d-465f-a075-f088f5725873\" (UID: \"0a1dfc0b-250d-465f-a075-f088f5725873\") " Mar 19 12:20:53.101289 master-0 kubenswrapper[31830]: I0319 12:20:53.101160 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a1dfc0b-250d-465f-a075-f088f5725873-trusted-ca-bundle\") pod \"0a1dfc0b-250d-465f-a075-f088f5725873\" (UID: \"0a1dfc0b-250d-465f-a075-f088f5725873\") " Mar 19 12:20:53.101289 master-0 kubenswrapper[31830]: I0319 12:20:53.101251 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a1dfc0b-250d-465f-a075-f088f5725873-console-serving-cert\") pod \"0a1dfc0b-250d-465f-a075-f088f5725873\" (UID: \"0a1dfc0b-250d-465f-a075-f088f5725873\") " Mar 19 12:20:53.101289 master-0 kubenswrapper[31830]: I0319 12:20:53.101288 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxd8l\" (UniqueName: \"kubernetes.io/projected/0a1dfc0b-250d-465f-a075-f088f5725873-kube-api-access-lxd8l\") pod \"0a1dfc0b-250d-465f-a075-f088f5725873\" (UID: \"0a1dfc0b-250d-465f-a075-f088f5725873\") " Mar 19 12:20:53.101693 master-0 kubenswrapper[31830]: I0319 12:20:53.101320 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0a1dfc0b-250d-465f-a075-f088f5725873-console-config\") pod \"0a1dfc0b-250d-465f-a075-f088f5725873\" (UID: \"0a1dfc0b-250d-465f-a075-f088f5725873\") " Mar 19 12:20:53.101872 master-0 kubenswrapper[31830]: I0319 12:20:53.101824 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a1dfc0b-250d-465f-a075-f088f5725873-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "0a1dfc0b-250d-465f-a075-f088f5725873" (UID: "0a1dfc0b-250d-465f-a075-f088f5725873"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:20:53.102053 master-0 kubenswrapper[31830]: I0319 12:20:53.101950 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a1dfc0b-250d-465f-a075-f088f5725873-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "0a1dfc0b-250d-465f-a075-f088f5725873" (UID: "0a1dfc0b-250d-465f-a075-f088f5725873"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:20:53.102135 master-0 kubenswrapper[31830]: I0319 12:20:53.102093 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a1dfc0b-250d-465f-a075-f088f5725873-service-ca" (OuterVolumeSpecName: "service-ca") pod "0a1dfc0b-250d-465f-a075-f088f5725873" (UID: "0a1dfc0b-250d-465f-a075-f088f5725873"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:20:53.102387 master-0 kubenswrapper[31830]: I0319 12:20:53.102316 31830 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a1dfc0b-250d-465f-a075-f088f5725873-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:20:53.102387 master-0 kubenswrapper[31830]: I0319 12:20:53.102346 31830 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0a1dfc0b-250d-465f-a075-f088f5725873-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 19 12:20:53.102387 master-0 kubenswrapper[31830]: I0319 12:20:53.102356 31830 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0a1dfc0b-250d-465f-a075-f088f5725873-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 12:20:53.103034 master-0 kubenswrapper[31830]: I0319 12:20:53.102977 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a1dfc0b-250d-465f-a075-f088f5725873-console-config" (OuterVolumeSpecName: "console-config") pod "0a1dfc0b-250d-465f-a075-f088f5725873" (UID: "0a1dfc0b-250d-465f-a075-f088f5725873"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:20:53.104750 master-0 kubenswrapper[31830]: I0319 12:20:53.104706 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a1dfc0b-250d-465f-a075-f088f5725873-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "0a1dfc0b-250d-465f-a075-f088f5725873" (UID: "0a1dfc0b-250d-465f-a075-f088f5725873"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:20:53.104965 master-0 kubenswrapper[31830]: I0319 12:20:53.104845 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a1dfc0b-250d-465f-a075-f088f5725873-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "0a1dfc0b-250d-465f-a075-f088f5725873" (UID: "0a1dfc0b-250d-465f-a075-f088f5725873"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:20:53.106037 master-0 kubenswrapper[31830]: I0319 12:20:53.105986 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a1dfc0b-250d-465f-a075-f088f5725873-kube-api-access-lxd8l" (OuterVolumeSpecName: "kube-api-access-lxd8l") pod "0a1dfc0b-250d-465f-a075-f088f5725873" (UID: "0a1dfc0b-250d-465f-a075-f088f5725873"). InnerVolumeSpecName "kube-api-access-lxd8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:20:53.203490 master-0 kubenswrapper[31830]: I0319 12:20:53.203317 31830 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a1dfc0b-250d-465f-a075-f088f5725873-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 12:20:53.203490 master-0 kubenswrapper[31830]: I0319 12:20:53.203360 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxd8l\" (UniqueName: \"kubernetes.io/projected/0a1dfc0b-250d-465f-a075-f088f5725873-kube-api-access-lxd8l\") on node \"master-0\" DevicePath \"\"" Mar 19 12:20:53.203490 master-0 kubenswrapper[31830]: I0319 12:20:53.203375 31830 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0a1dfc0b-250d-465f-a075-f088f5725873-console-config\") on node \"master-0\" DevicePath \"\"" Mar 19 12:20:53.203490 master-0 kubenswrapper[31830]: I0319 12:20:53.203385 31830 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0a1dfc0b-250d-465f-a075-f088f5725873-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 19 12:20:53.685051 master-0 kubenswrapper[31830]: I0319 12:20:53.685003 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-575589487f-9nhq4_0a1dfc0b-250d-465f-a075-f088f5725873/console/0.log" Mar 19 12:20:53.686290 master-0 kubenswrapper[31830]: I0319 12:20:53.685082 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-575589487f-9nhq4" Mar 19 12:20:53.688310 master-0 kubenswrapper[31830]: I0319 12:20:53.688224 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-575589487f-9nhq4" event={"ID":"0a1dfc0b-250d-465f-a075-f088f5725873","Type":"ContainerDied","Data":"0248a7172bf1cd2d7f2dc54cf87753389a1b846e5d72dd00c6cb6d15a27bb0b2"} Mar 19 12:20:53.688310 master-0 kubenswrapper[31830]: I0319 12:20:53.688276 31830 scope.go:117] "RemoveContainer" containerID="15e02e8bcdf411a7b55a0689ad33ce3a8e430d3a47cdd9b4d8ebfc49858aed75" Mar 19 12:20:53.731398 master-0 kubenswrapper[31830]: I0319 12:20:53.731310 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-575589487f-9nhq4"] Mar 19 12:20:53.737967 master-0 kubenswrapper[31830]: I0319 12:20:53.737917 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-575589487f-9nhq4"] Mar 19 12:20:55.687437 master-0 kubenswrapper[31830]: I0319 12:20:55.687370 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a1dfc0b-250d-465f-a075-f088f5725873" path="/var/lib/kubelet/pods/0a1dfc0b-250d-465f-a075-f088f5725873/volumes" Mar 19 12:20:56.807172 master-0 kubenswrapper[31830]: I0319 12:20:56.807111 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-845bb9776f-9p49g"] Mar 19 12:20:56.807778 master-0 kubenswrapper[31830]: E0319 12:20:56.807486 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6db3fcbe-0dbf-464f-944b-62427173c8d3" containerName="metrics-server" Mar 19 12:20:56.807778 master-0 kubenswrapper[31830]: I0319 12:20:56.807506 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="6db3fcbe-0dbf-464f-944b-62427173c8d3" containerName="metrics-server" Mar 19 12:20:56.807778 master-0 kubenswrapper[31830]: E0319 12:20:56.807533 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5136761d-3b51-4cf2-8689-88d0bfefd0b2" containerName="installer" Mar 19 12:20:56.807778 master-0 kubenswrapper[31830]: I0319 12:20:56.807545 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="5136761d-3b51-4cf2-8689-88d0bfefd0b2" containerName="installer" Mar 19 12:20:56.807778 master-0 kubenswrapper[31830]: E0319 12:20:56.807564 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a1dfc0b-250d-465f-a075-f088f5725873" containerName="console" Mar 19 12:20:56.807778 master-0 kubenswrapper[31830]: I0319 12:20:56.807578 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a1dfc0b-250d-465f-a075-f088f5725873" containerName="console" Mar 19 12:20:56.807778 master-0 kubenswrapper[31830]: E0319 12:20:56.807596 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" containerName="startup-monitor" Mar 19 12:20:56.807778 master-0 kubenswrapper[31830]: I0319 12:20:56.807606 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" containerName="startup-monitor" Mar 19 12:20:56.808255 master-0 kubenswrapper[31830]: I0319 12:20:56.807836 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a1dfc0b-250d-465f-a075-f088f5725873" containerName="console" Mar 19 12:20:56.808255 master-0 kubenswrapper[31830]: I0319 12:20:56.807880 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" containerName="startup-monitor" Mar 19 12:20:56.808255 master-0 kubenswrapper[31830]: I0319 
Mar 19 12:20:56.808255 master-0 kubenswrapper[31830]: I0319 12:20:56.807921 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="6db3fcbe-0dbf-464f-944b-62427173c8d3" containerName="metrics-server"
Mar 19 12:20:56.808664 master-0 kubenswrapper[31830]: I0319 12:20:56.808578 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-845bb9776f-9p49g"
Mar 19 12:20:56.820938 master-0 kubenswrapper[31830]: I0319 12:20:56.820880 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-845bb9776f-9p49g"]
Mar 19 12:20:56.957999 master-0 kubenswrapper[31830]: I0319 12:20:56.957922 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8168e523-f491-4c1d-9588-ae2963e93927-console-config\") pod \"console-845bb9776f-9p49g\" (UID: \"8168e523-f491-4c1d-9588-ae2963e93927\") " pod="openshift-console/console-845bb9776f-9p49g"
Mar 19 12:20:56.957999 master-0 kubenswrapper[31830]: I0319 12:20:56.957981 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8168e523-f491-4c1d-9588-ae2963e93927-console-oauth-config\") pod \"console-845bb9776f-9p49g\" (UID: \"8168e523-f491-4c1d-9588-ae2963e93927\") " pod="openshift-console/console-845bb9776f-9p49g"
Mar 19 12:20:56.957999 master-0 kubenswrapper[31830]: I0319 12:20:56.958012 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8168e523-f491-4c1d-9588-ae2963e93927-service-ca\") pod \"console-845bb9776f-9p49g\" (UID: \"8168e523-f491-4c1d-9588-ae2963e93927\") " pod="openshift-console/console-845bb9776f-9p49g"
Mar 19 12:20:56.958434 master-0 kubenswrapper[31830]: I0319 12:20:56.958209 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8168e523-f491-4c1d-9588-ae2963e93927-console-serving-cert\") pod \"console-845bb9776f-9p49g\" (UID: \"8168e523-f491-4c1d-9588-ae2963e93927\") " pod="openshift-console/console-845bb9776f-9p49g"
Mar 19 12:20:56.958434 master-0 kubenswrapper[31830]: I0319 12:20:56.958281 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjbvj\" (UniqueName: \"kubernetes.io/projected/8168e523-f491-4c1d-9588-ae2963e93927-kube-api-access-pjbvj\") pod \"console-845bb9776f-9p49g\" (UID: \"8168e523-f491-4c1d-9588-ae2963e93927\") " pod="openshift-console/console-845bb9776f-9p49g"
Mar 19 12:20:56.958434 master-0 kubenswrapper[31830]: I0319 12:20:56.958379 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8168e523-f491-4c1d-9588-ae2963e93927-oauth-serving-cert\") pod \"console-845bb9776f-9p49g\" (UID: \"8168e523-f491-4c1d-9588-ae2963e93927\") " pod="openshift-console/console-845bb9776f-9p49g"
Mar 19 12:20:56.958434 master-0 kubenswrapper[31830]: I0319 12:20:56.958421 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8168e523-f491-4c1d-9588-ae2963e93927-trusted-ca-bundle\") pod \"console-845bb9776f-9p49g\" (UID: \"8168e523-f491-4c1d-9588-ae2963e93927\") " pod="openshift-console/console-845bb9776f-9p49g"
Mar 19 12:20:57.059566 master-0 kubenswrapper[31830]: I0319 12:20:57.059393 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjbvj\" (UniqueName: \"kubernetes.io/projected/8168e523-f491-4c1d-9588-ae2963e93927-kube-api-access-pjbvj\") pod \"console-845bb9776f-9p49g\" (UID: \"8168e523-f491-4c1d-9588-ae2963e93927\") " pod="openshift-console/console-845bb9776f-9p49g"
Mar 19 12:20:57.059566 master-0 kubenswrapper[31830]: I0319 12:20:57.059470 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8168e523-f491-4c1d-9588-ae2963e93927-oauth-serving-cert\") pod \"console-845bb9776f-9p49g\" (UID: \"8168e523-f491-4c1d-9588-ae2963e93927\") " pod="openshift-console/console-845bb9776f-9p49g"
Mar 19 12:20:57.060187 master-0 kubenswrapper[31830]: I0319 12:20:57.059636 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8168e523-f491-4c1d-9588-ae2963e93927-trusted-ca-bundle\") pod \"console-845bb9776f-9p49g\" (UID: \"8168e523-f491-4c1d-9588-ae2963e93927\") " pod="openshift-console/console-845bb9776f-9p49g"
Mar 19 12:20:57.060187 master-0 kubenswrapper[31830]: I0319 12:20:57.059903 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8168e523-f491-4c1d-9588-ae2963e93927-console-config\") pod \"console-845bb9776f-9p49g\" (UID: \"8168e523-f491-4c1d-9588-ae2963e93927\") " pod="openshift-console/console-845bb9776f-9p49g"
Mar 19 12:20:57.060187 master-0 kubenswrapper[31830]: I0319 12:20:57.059944 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8168e523-f491-4c1d-9588-ae2963e93927-console-oauth-config\") pod \"console-845bb9776f-9p49g\" (UID: \"8168e523-f491-4c1d-9588-ae2963e93927\") " pod="openshift-console/console-845bb9776f-9p49g"
Mar 19 12:20:57.060187 master-0 kubenswrapper[31830]: I0319 12:20:57.060007 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8168e523-f491-4c1d-9588-ae2963e93927-service-ca\") pod \"console-845bb9776f-9p49g\" (UID: \"8168e523-f491-4c1d-9588-ae2963e93927\") " pod="openshift-console/console-845bb9776f-9p49g"
Mar 19 12:20:57.060187 master-0 kubenswrapper[31830]: I0319 12:20:57.060123 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8168e523-f491-4c1d-9588-ae2963e93927-console-serving-cert\") pod \"console-845bb9776f-9p49g\" (UID: \"8168e523-f491-4c1d-9588-ae2963e93927\") " pod="openshift-console/console-845bb9776f-9p49g"
Mar 19 12:20:57.060880 master-0 kubenswrapper[31830]: I0319 12:20:57.060839 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8168e523-f491-4c1d-9588-ae2963e93927-trusted-ca-bundle\") pod \"console-845bb9776f-9p49g\" (UID: \"8168e523-f491-4c1d-9588-ae2963e93927\") " pod="openshift-console/console-845bb9776f-9p49g"
Mar 19 12:20:57.060978 master-0 kubenswrapper[31830]: I0319 12:20:57.060920 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8168e523-f491-4c1d-9588-ae2963e93927-oauth-serving-cert\") pod \"console-845bb9776f-9p49g\" (UID: \"8168e523-f491-4c1d-9588-ae2963e93927\") " pod="openshift-console/console-845bb9776f-9p49g"
Mar 19 12:20:57.061873 master-0 kubenswrapper[31830]: I0319 12:20:57.061521 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8168e523-f491-4c1d-9588-ae2963e93927-service-ca\") pod \"console-845bb9776f-9p49g\" (UID: \"8168e523-f491-4c1d-9588-ae2963e93927\") " pod="openshift-console/console-845bb9776f-9p49g"
Mar 19 12:20:57.062522 master-0 kubenswrapper[31830]: I0319 12:20:57.062491 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8168e523-f491-4c1d-9588-ae2963e93927-console-config\") pod \"console-845bb9776f-9p49g\" (UID: \"8168e523-f491-4c1d-9588-ae2963e93927\") " pod="openshift-console/console-845bb9776f-9p49g"
Mar 19 12:20:57.065483 master-0 kubenswrapper[31830]: I0319 12:20:57.065436 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8168e523-f491-4c1d-9588-ae2963e93927-console-serving-cert\") pod \"console-845bb9776f-9p49g\" (UID: \"8168e523-f491-4c1d-9588-ae2963e93927\") " pod="openshift-console/console-845bb9776f-9p49g"
Mar 19 12:20:57.066879 master-0 kubenswrapper[31830]: I0319 12:20:57.066836 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8168e523-f491-4c1d-9588-ae2963e93927-console-oauth-config\") pod \"console-845bb9776f-9p49g\" (UID: \"8168e523-f491-4c1d-9588-ae2963e93927\") " pod="openshift-console/console-845bb9776f-9p49g"
Mar 19 12:20:57.077467 master-0 kubenswrapper[31830]: I0319 12:20:57.077422 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjbvj\" (UniqueName: \"kubernetes.io/projected/8168e523-f491-4c1d-9588-ae2963e93927-kube-api-access-pjbvj\") pod \"console-845bb9776f-9p49g\" (UID: \"8168e523-f491-4c1d-9588-ae2963e93927\") " pod="openshift-console/console-845bb9776f-9p49g"
Mar 19 12:20:57.128825 master-0 kubenswrapper[31830]: I0319 12:20:57.128743 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-845bb9776f-9p49g"
Mar 19 12:20:57.551734 master-0 kubenswrapper[31830]: I0319 12:20:57.551691 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-845bb9776f-9p49g"]
Mar 19 12:20:57.555928 master-0 kubenswrapper[31830]: W0319 12:20:57.555858 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8168e523_f491_4c1d_9588_ae2963e93927.slice/crio-407c9f4b9a56ecc1169e1b0477f4da5e663759480e30a4c3ad0776841eb3d82f WatchSource:0}: Error finding container 407c9f4b9a56ecc1169e1b0477f4da5e663759480e30a4c3ad0776841eb3d82f: Status 404 returned error can't find the container with id 407c9f4b9a56ecc1169e1b0477f4da5e663759480e30a4c3ad0776841eb3d82f
Mar 19 12:20:57.735348 master-0 kubenswrapper[31830]: I0319 12:20:57.735288 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-845bb9776f-9p49g" event={"ID":"8168e523-f491-4c1d-9588-ae2963e93927","Type":"ContainerStarted","Data":"3cef942edc4c49ce47897eb1e611d525880e69e956d225cbab8ebfa47c4c4e55"}
Mar 19 12:20:57.735348 master-0 kubenswrapper[31830]: I0319 12:20:57.735344 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-845bb9776f-9p49g" event={"ID":"8168e523-f491-4c1d-9588-ae2963e93927","Type":"ContainerStarted","Data":"407c9f4b9a56ecc1169e1b0477f4da5e663759480e30a4c3ad0776841eb3d82f"}
Mar 19 12:20:57.764051 master-0 kubenswrapper[31830]: I0319 12:20:57.763964 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-845bb9776f-9p49g" podStartSLOduration=1.763931002 podStartE2EDuration="1.763931002s" podCreationTimestamp="2026-03-19 12:20:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:20:57.761259811 +0000 UTC m=+396.310220525" watchObservedRunningTime="2026-03-19 12:20:57.763931002 +0000 UTC m=+396.312891706"
Mar 19 12:21:07.129343 master-0 kubenswrapper[31830]: I0319 12:21:07.129266 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-845bb9776f-9p49g"
Mar 19 12:21:07.129343 master-0 kubenswrapper[31830]: I0319 12:21:07.129331 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-845bb9776f-9p49g"
Mar 19 12:21:07.134416 master-0 kubenswrapper[31830]: I0319 12:21:07.134374 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-845bb9776f-9p49g"
Mar 19 12:21:07.815289 master-0 kubenswrapper[31830]: I0319 12:21:07.815217 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-845bb9776f-9p49g"
Mar 19 12:21:07.876519 master-0 kubenswrapper[31830]: I0319 12:21:07.875645 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-b5f5fdd67-r4lxc"]
Mar 19 12:21:32.919893 master-0 kubenswrapper[31830]: I0319 12:21:32.919709 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-b5f5fdd67-r4lxc" podUID="8f224cab-e321-4e24-83bc-f99242f971b0" containerName="console" containerID="cri-o://6265d7dbb0a319c10109ed5dc5151dfca4a590c22cd594631e9826123ab8e603" gracePeriod=15
Mar 19 12:21:33.057088 master-0 kubenswrapper[31830]: I0319 12:21:33.057018 31830 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-console_console-b5f5fdd67-r4lxc_8f224cab-e321-4e24-83bc-f99242f971b0/console/0.log" Mar 19 12:21:33.057088 master-0 kubenswrapper[31830]: I0319 12:21:33.057077 31830 generic.go:334] "Generic (PLEG): container finished" podID="8f224cab-e321-4e24-83bc-f99242f971b0" containerID="6265d7dbb0a319c10109ed5dc5151dfca4a590c22cd594631e9826123ab8e603" exitCode=2 Mar 19 12:21:33.057088 master-0 kubenswrapper[31830]: I0319 12:21:33.057109 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-b5f5fdd67-r4lxc" event={"ID":"8f224cab-e321-4e24-83bc-f99242f971b0","Type":"ContainerDied","Data":"6265d7dbb0a319c10109ed5dc5151dfca4a590c22cd594631e9826123ab8e603"} Mar 19 12:21:33.370517 master-0 kubenswrapper[31830]: I0319 12:21:33.370462 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-b5f5fdd67-r4lxc_8f224cab-e321-4e24-83bc-f99242f971b0/console/0.log" Mar 19 12:21:33.370731 master-0 kubenswrapper[31830]: I0319 12:21:33.370537 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-b5f5fdd67-r4lxc" Mar 19 12:21:33.527786 master-0 kubenswrapper[31830]: I0319 12:21:33.527684 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f224cab-e321-4e24-83bc-f99242f971b0-console-serving-cert\") pod \"8f224cab-e321-4e24-83bc-f99242f971b0\" (UID: \"8f224cab-e321-4e24-83bc-f99242f971b0\") " Mar 19 12:21:33.528021 master-0 kubenswrapper[31830]: I0319 12:21:33.527928 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8f224cab-e321-4e24-83bc-f99242f971b0-service-ca\") pod \"8f224cab-e321-4e24-83bc-f99242f971b0\" (UID: \"8f224cab-e321-4e24-83bc-f99242f971b0\") " Mar 19 12:21:33.528021 master-0 kubenswrapper[31830]: I0319 12:21:33.527966 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f224cab-e321-4e24-83bc-f99242f971b0-trusted-ca-bundle\") pod \"8f224cab-e321-4e24-83bc-f99242f971b0\" (UID: \"8f224cab-e321-4e24-83bc-f99242f971b0\") " Mar 19 12:21:33.528021 master-0 kubenswrapper[31830]: I0319 12:21:33.527996 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsk9k\" (UniqueName: \"kubernetes.io/projected/8f224cab-e321-4e24-83bc-f99242f971b0-kube-api-access-rsk9k\") pod \"8f224cab-e321-4e24-83bc-f99242f971b0\" (UID: \"8f224cab-e321-4e24-83bc-f99242f971b0\") " Mar 19 12:21:33.528159 master-0 kubenswrapper[31830]: I0319 12:21:33.528027 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8f224cab-e321-4e24-83bc-f99242f971b0-console-oauth-config\") pod \"8f224cab-e321-4e24-83bc-f99242f971b0\" (UID: \"8f224cab-e321-4e24-83bc-f99242f971b0\") " Mar 19 12:21:33.528159 master-0 kubenswrapper[31830]: I0319 12:21:33.528049 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8f224cab-e321-4e24-83bc-f99242f971b0-oauth-serving-cert\") pod \"8f224cab-e321-4e24-83bc-f99242f971b0\" (UID: \"8f224cab-e321-4e24-83bc-f99242f971b0\") " Mar 19 12:21:33.528159 master-0 kubenswrapper[31830]: I0319 12:21:33.528072 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8f224cab-e321-4e24-83bc-f99242f971b0-console-config\") pod \"8f224cab-e321-4e24-83bc-f99242f971b0\" (UID: \"8f224cab-e321-4e24-83bc-f99242f971b0\") " Mar 19 12:21:33.528769 master-0 kubenswrapper[31830]: I0319 12:21:33.528580 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f224cab-e321-4e24-83bc-f99242f971b0-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "8f224cab-e321-4e24-83bc-f99242f971b0" (UID: "8f224cab-e321-4e24-83bc-f99242f971b0"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:21:33.529016 master-0 kubenswrapper[31830]: I0319 12:21:33.528950 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f224cab-e321-4e24-83bc-f99242f971b0-console-config" (OuterVolumeSpecName: "console-config") pod "8f224cab-e321-4e24-83bc-f99242f971b0" (UID: "8f224cab-e321-4e24-83bc-f99242f971b0"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:21:33.529076 master-0 kubenswrapper[31830]: I0319 12:21:33.529014 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f224cab-e321-4e24-83bc-f99242f971b0-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "8f224cab-e321-4e24-83bc-f99242f971b0" (UID: "8f224cab-e321-4e24-83bc-f99242f971b0"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:21:33.529901 master-0 kubenswrapper[31830]: I0319 12:21:33.529840 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f224cab-e321-4e24-83bc-f99242f971b0-service-ca" (OuterVolumeSpecName: "service-ca") pod "8f224cab-e321-4e24-83bc-f99242f971b0" (UID: "8f224cab-e321-4e24-83bc-f99242f971b0"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:21:33.531285 master-0 kubenswrapper[31830]: I0319 12:21:33.531231 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f224cab-e321-4e24-83bc-f99242f971b0-kube-api-access-rsk9k" (OuterVolumeSpecName: "kube-api-access-rsk9k") pod "8f224cab-e321-4e24-83bc-f99242f971b0" (UID: "8f224cab-e321-4e24-83bc-f99242f971b0"). InnerVolumeSpecName "kube-api-access-rsk9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:21:33.534949 master-0 kubenswrapper[31830]: I0319 12:21:33.533208 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f224cab-e321-4e24-83bc-f99242f971b0-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "8f224cab-e321-4e24-83bc-f99242f971b0" (UID: "8f224cab-e321-4e24-83bc-f99242f971b0"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:21:33.534949 master-0 kubenswrapper[31830]: I0319 12:21:33.533508 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f224cab-e321-4e24-83bc-f99242f971b0-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "8f224cab-e321-4e24-83bc-f99242f971b0" (UID: "8f224cab-e321-4e24-83bc-f99242f971b0"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:21:33.630053 master-0 kubenswrapper[31830]: I0319 12:21:33.629867 31830 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8f224cab-e321-4e24-83bc-f99242f971b0-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 12:21:33.630053 master-0 kubenswrapper[31830]: I0319 12:21:33.629937 31830 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8f224cab-e321-4e24-83bc-f99242f971b0-console-config\") on node \"master-0\" DevicePath \"\"" Mar 19 12:21:33.630053 master-0 kubenswrapper[31830]: I0319 12:21:33.629956 31830 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f224cab-e321-4e24-83bc-f99242f971b0-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 12:21:33.630053 master-0 kubenswrapper[31830]: I0319 12:21:33.629974 31830 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8f224cab-e321-4e24-83bc-f99242f971b0-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 19 12:21:33.630053 master-0 kubenswrapper[31830]: I0319 12:21:33.629991 31830 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f224cab-e321-4e24-83bc-f99242f971b0-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:21:33.630053 master-0 kubenswrapper[31830]: I0319 12:21:33.630010 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsk9k\" (UniqueName: \"kubernetes.io/projected/8f224cab-e321-4e24-83bc-f99242f971b0-kube-api-access-rsk9k\") on node \"master-0\" DevicePath \"\"" Mar 19 12:21:33.630053 master-0 kubenswrapper[31830]: I0319 12:21:33.630027 31830 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8f224cab-e321-4e24-83bc-f99242f971b0-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 19 12:21:34.066049 master-0 kubenswrapper[31830]: I0319 12:21:34.066009 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-b5f5fdd67-r4lxc_8f224cab-e321-4e24-83bc-f99242f971b0/console/0.log" Mar 19 12:21:34.066522 master-0 kubenswrapper[31830]: I0319 12:21:34.066075 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-b5f5fdd67-r4lxc" event={"ID":"8f224cab-e321-4e24-83bc-f99242f971b0","Type":"ContainerDied","Data":"1944f1b02e25c74d5e13b760781b624f047cb27669e749bab0f7f7f79cb67d59"} Mar 19 12:21:34.066522 master-0 kubenswrapper[31830]: I0319 12:21:34.066118 31830 scope.go:117] "RemoveContainer" containerID="6265d7dbb0a319c10109ed5dc5151dfca4a590c22cd594631e9826123ab8e603" Mar 19 12:21:34.066522 master-0 kubenswrapper[31830]: I0319 12:21:34.066274 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-b5f5fdd67-r4lxc" Mar 19 12:21:34.099189 master-0 kubenswrapper[31830]: I0319 12:21:34.099132 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-b5f5fdd67-r4lxc"] Mar 19 12:21:34.108846 master-0 kubenswrapper[31830]: I0319 12:21:34.108772 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-b5f5fdd67-r4lxc"] Mar 19 12:21:35.686453 master-0 kubenswrapper[31830]: I0319 12:21:35.686395 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f224cab-e321-4e24-83bc-f99242f971b0" path="/var/lib/kubelet/pods/8f224cab-e321-4e24-83bc-f99242f971b0/volumes" Mar 19 12:21:36.180785 master-0 kubenswrapper[31830]: I0319 12:21:36.180733 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-5-master-0"] Mar 19 12:21:36.181032 master-0 kubenswrapper[31830]: E0319 12:21:36.181016 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f224cab-e321-4e24-83bc-f99242f971b0" containerName="console" Mar 19 12:21:36.181032 master-0 kubenswrapper[31830]: I0319 12:21:36.181027 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f224cab-e321-4e24-83bc-f99242f971b0" containerName="console" Mar 19 12:21:36.181205 master-0 kubenswrapper[31830]: I0319 12:21:36.181156 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f224cab-e321-4e24-83bc-f99242f971b0" containerName="console" Mar 19 12:21:36.181629 master-0 kubenswrapper[31830]: I0319 12:21:36.181604 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Mar 19 12:21:36.183666 master-0 kubenswrapper[31830]: I0319 12:21:36.183628 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kqbhm" Mar 19 12:21:36.184607 master-0 kubenswrapper[31830]: I0319 12:21:36.184574 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 19 12:21:36.187843 master-0 kubenswrapper[31830]: I0319 12:21:36.187809 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-5-master-0"] Mar 19 12:21:36.267642 master-0 kubenswrapper[31830]: I0319 12:21:36.267557 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ccf67da5-2a55-45c8-b62d-36d0834a9c2b-var-lock\") pod \"installer-5-master-0\" (UID: \"ccf67da5-2a55-45c8-b62d-36d0834a9c2b\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 19 12:21:36.267908 master-0 kubenswrapper[31830]: I0319 12:21:36.267658 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ccf67da5-2a55-45c8-b62d-36d0834a9c2b-kube-api-access\") pod \"installer-5-master-0\" (UID: \"ccf67da5-2a55-45c8-b62d-36d0834a9c2b\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 19 12:21:36.267994 master-0 kubenswrapper[31830]: I0319 12:21:36.267873 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ccf67da5-2a55-45c8-b62d-36d0834a9c2b-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"ccf67da5-2a55-45c8-b62d-36d0834a9c2b\") " 
pod="openshift-kube-controller-manager/installer-5-master-0" Mar 19 12:21:36.369194 master-0 kubenswrapper[31830]: I0319 12:21:36.369110 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ccf67da5-2a55-45c8-b62d-36d0834a9c2b-var-lock\") pod \"installer-5-master-0\" (UID: \"ccf67da5-2a55-45c8-b62d-36d0834a9c2b\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 19 12:21:36.369423 master-0 kubenswrapper[31830]: I0319 12:21:36.369212 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ccf67da5-2a55-45c8-b62d-36d0834a9c2b-var-lock\") pod \"installer-5-master-0\" (UID: \"ccf67da5-2a55-45c8-b62d-36d0834a9c2b\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 19 12:21:36.369423 master-0 kubenswrapper[31830]: I0319 12:21:36.369227 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ccf67da5-2a55-45c8-b62d-36d0834a9c2b-kube-api-access\") pod \"installer-5-master-0\" (UID: \"ccf67da5-2a55-45c8-b62d-36d0834a9c2b\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 19 12:21:36.369423 master-0 kubenswrapper[31830]: I0319 12:21:36.369324 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ccf67da5-2a55-45c8-b62d-36d0834a9c2b-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"ccf67da5-2a55-45c8-b62d-36d0834a9c2b\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 19 12:21:36.369556 master-0 kubenswrapper[31830]: I0319 12:21:36.369484 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ccf67da5-2a55-45c8-b62d-36d0834a9c2b-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"ccf67da5-2a55-45c8-b62d-36d0834a9c2b\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 19 12:21:36.389044 master-0 kubenswrapper[31830]: I0319 12:21:36.389010 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ccf67da5-2a55-45c8-b62d-36d0834a9c2b-kube-api-access\") pod \"installer-5-master-0\" (UID: \"ccf67da5-2a55-45c8-b62d-36d0834a9c2b\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 19 12:21:36.503235 master-0 kubenswrapper[31830]: I0319 12:21:36.503196 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Mar 19 12:21:36.961272 master-0 kubenswrapper[31830]: I0319 12:21:36.959429 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-5-master-0"] Mar 19 12:21:36.965827 master-0 kubenswrapper[31830]: W0319 12:21:36.965727 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podccf67da5_2a55_45c8_b62d_36d0834a9c2b.slice/crio-ab3519beb898ea3f220fc37f85c9c54d6bbfb5f2c80b76f435349b0fdb6830e2 WatchSource:0}: Error finding container ab3519beb898ea3f220fc37f85c9c54d6bbfb5f2c80b76f435349b0fdb6830e2: Status 404 returned error can't find the container with id ab3519beb898ea3f220fc37f85c9c54d6bbfb5f2c80b76f435349b0fdb6830e2 Mar 19 12:21:37.087758 master-0 kubenswrapper[31830]: I0319 12:21:37.087679 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"ccf67da5-2a55-45c8-b62d-36d0834a9c2b","Type":"ContainerStarted","Data":"ab3519beb898ea3f220fc37f85c9c54d6bbfb5f2c80b76f435349b0fdb6830e2"} Mar 19 12:21:38.096407 master-0 kubenswrapper[31830]: I0319 12:21:38.096348 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"ccf67da5-2a55-45c8-b62d-36d0834a9c2b","Type":"ContainerStarted","Data":"3b1f3e2fb213d07581b8eeee52f3dd082554e64af36b831a04105552b1b5226b"} Mar 19 12:21:38.114253 master-0 kubenswrapper[31830]: I0319 12:21:38.114166 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-5-master-0" podStartSLOduration=2.114147449 podStartE2EDuration="2.114147449s" podCreationTimestamp="2026-03-19 12:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:21:38.114022845 +0000 UTC m=+436.662983559" watchObservedRunningTime="2026-03-19 12:21:38.114147449 +0000 UTC m=+436.663108153" Mar 19 12:22:10.062953 master-0 kubenswrapper[31830]: I0319 12:22:10.062708 31830 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 19 12:22:10.064279 master-0 kubenswrapper[31830]: I0319 12:22:10.063925 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="09672015532ae9d1d74ae4d426cd904b" containerName="cluster-policy-controller" containerID="cri-o://e2254e5955e606c47be9604d12c39e06178d4d59ccf279a6986ce5edd6dc066e" gracePeriod=30 Mar 19 12:22:10.064430 master-0 kubenswrapper[31830]: I0319 12:22:10.064267 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="09672015532ae9d1d74ae4d426cd904b" containerName="kube-controller-manager" containerID="cri-o://47fbcc830547b61bd29f055979e2109f1293c920ca05c188650fe3665f2e7c8f" gracePeriod=30 Mar 19 12:22:10.064538 master-0 kubenswrapper[31830]: I0319 12:22:10.064424 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="09672015532ae9d1d74ae4d426cd904b" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://3fff7305ffab3c7b2d64fb017b4d322893f65a346d3d05dc9207a0c3f727bb4b" gracePeriod=30 Mar 19 
12:22:10.065677 master-0 kubenswrapper[31830]: I0319 12:22:10.064299 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="09672015532ae9d1d74ae4d426cd904b" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://14e2eab8d6fc7f70b2c656df6e5623f56e87c29ceaaedf3b47b4662d233279d5" gracePeriod=30 Mar 19 12:22:10.067144 master-0 kubenswrapper[31830]: I0319 12:22:10.067084 31830 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 19 12:22:10.068659 master-0 kubenswrapper[31830]: E0319 12:22:10.068614 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09672015532ae9d1d74ae4d426cd904b" containerName="kube-controller-manager" Mar 19 12:22:10.069102 master-0 kubenswrapper[31830]: I0319 12:22:10.069070 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="09672015532ae9d1d74ae4d426cd904b" containerName="kube-controller-manager" Mar 19 12:22:10.069354 master-0 kubenswrapper[31830]: E0319 12:22:10.069321 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09672015532ae9d1d74ae4d426cd904b" containerName="cluster-policy-controller" Mar 19 12:22:10.069535 master-0 kubenswrapper[31830]: I0319 12:22:10.069506 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="09672015532ae9d1d74ae4d426cd904b" containerName="cluster-policy-controller" Mar 19 12:22:10.069746 master-0 kubenswrapper[31830]: E0319 12:22:10.069714 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09672015532ae9d1d74ae4d426cd904b" containerName="kube-controller-manager" Mar 19 12:22:10.070053 master-0 kubenswrapper[31830]: I0319 12:22:10.070019 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="09672015532ae9d1d74ae4d426cd904b" containerName="kube-controller-manager" Mar 19 12:22:10.085922 master-0 kubenswrapper[31830]: E0319 12:22:10.071054 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09672015532ae9d1d74ae4d426cd904b" containerName="kube-controller-manager" Mar 19 12:22:10.086672 master-0 kubenswrapper[31830]: I0319 12:22:10.086603 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="09672015532ae9d1d74ae4d426cd904b" containerName="kube-controller-manager" Mar 19 12:22:10.087054 master-0 kubenswrapper[31830]: E0319 12:22:10.087031 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09672015532ae9d1d74ae4d426cd904b" containerName="kube-controller-manager-recovery-controller" Mar 19 12:22:10.087303 master-0 kubenswrapper[31830]: I0319 12:22:10.087282 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="09672015532ae9d1d74ae4d426cd904b" containerName="kube-controller-manager-recovery-controller" Mar 19 12:22:10.087536 master-0 kubenswrapper[31830]: E0319 12:22:10.087484 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09672015532ae9d1d74ae4d426cd904b" containerName="kube-controller-manager-cert-syncer" Mar 19 12:22:10.087713 master-0 kubenswrapper[31830]: I0319 12:22:10.087694 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="09672015532ae9d1d74ae4d426cd904b" containerName="kube-controller-manager-cert-syncer" Mar 19 12:22:10.088979 master-0 kubenswrapper[31830]: I0319 12:22:10.088932 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="09672015532ae9d1d74ae4d426cd904b" containerName="kube-controller-manager-cert-syncer" Mar 19 12:22:10.089204 master-0 
kubenswrapper[31830]: I0319 12:22:10.089185 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="09672015532ae9d1d74ae4d426cd904b" containerName="kube-controller-manager" Mar 19 12:22:10.089420 master-0 kubenswrapper[31830]: I0319 12:22:10.089402 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="09672015532ae9d1d74ae4d426cd904b" containerName="kube-controller-manager" Mar 19 12:22:10.089590 master-0 kubenswrapper[31830]: I0319 12:22:10.089572 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="09672015532ae9d1d74ae4d426cd904b" containerName="kube-controller-manager" Mar 19 12:22:10.089821 master-0 kubenswrapper[31830]: I0319 12:22:10.089772 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="09672015532ae9d1d74ae4d426cd904b" containerName="kube-controller-manager-recovery-controller" Mar 19 12:22:10.090220 master-0 kubenswrapper[31830]: I0319 12:22:10.090197 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="09672015532ae9d1d74ae4d426cd904b" containerName="cluster-policy-controller" Mar 19 12:22:10.090970 master-0 kubenswrapper[31830]: E0319 12:22:10.090945 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09672015532ae9d1d74ae4d426cd904b" containerName="kube-controller-manager" Mar 19 12:22:10.091278 master-0 kubenswrapper[31830]: I0319 12:22:10.091256 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="09672015532ae9d1d74ae4d426cd904b" containerName="kube-controller-manager" Mar 19 12:22:10.092149 master-0 kubenswrapper[31830]: I0319 12:22:10.092121 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="09672015532ae9d1d74ae4d426cd904b" containerName="kube-controller-manager" Mar 19 12:22:10.300188 master-0 kubenswrapper[31830]: I0319 12:22:10.300062 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9be86b25fc7753f0a0b1d72c639e6610-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"9be86b25fc7753f0a0b1d72c639e6610\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:22:10.300728 master-0 kubenswrapper[31830]: I0319 12:22:10.300656 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9be86b25fc7753f0a0b1d72c639e6610-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"9be86b25fc7753f0a0b1d72c639e6610\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:22:10.328955 master-0 kubenswrapper[31830]: I0319 12:22:10.328855 31830 generic.go:334] "Generic (PLEG): container finished" podID="ccf67da5-2a55-45c8-b62d-36d0834a9c2b" containerID="3b1f3e2fb213d07581b8eeee52f3dd082554e64af36b831a04105552b1b5226b" exitCode=0 Mar 19 12:22:10.329157 master-0 kubenswrapper[31830]: I0319 12:22:10.328931 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"ccf67da5-2a55-45c8-b62d-36d0834a9c2b","Type":"ContainerDied","Data":"3b1f3e2fb213d07581b8eeee52f3dd082554e64af36b831a04105552b1b5226b"} Mar 19 12:22:10.331382 master-0 kubenswrapper[31830]: I0319 12:22:10.331338 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_09672015532ae9d1d74ae4d426cd904b/kube-controller-manager/2.log" Mar 19 12:22:10.332120 master-0 
kubenswrapper[31830]: I0319 12:22:10.332098 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_09672015532ae9d1d74ae4d426cd904b/kube-controller-manager-cert-syncer/0.log" Mar 19 12:22:10.332520 master-0 kubenswrapper[31830]: I0319 12:22:10.332497 31830 generic.go:334] "Generic (PLEG): container finished" podID="09672015532ae9d1d74ae4d426cd904b" containerID="47fbcc830547b61bd29f055979e2109f1293c920ca05c188650fe3665f2e7c8f" exitCode=0 Mar 19 12:22:10.332620 master-0 kubenswrapper[31830]: I0319 12:22:10.332604 31830 generic.go:334] "Generic (PLEG): container finished" podID="09672015532ae9d1d74ae4d426cd904b" containerID="3fff7305ffab3c7b2d64fb017b4d322893f65a346d3d05dc9207a0c3f727bb4b" exitCode=0 Mar 19 12:22:10.332716 master-0 kubenswrapper[31830]: I0319 12:22:10.332700 31830 generic.go:334] "Generic (PLEG): container finished" podID="09672015532ae9d1d74ae4d426cd904b" containerID="14e2eab8d6fc7f70b2c656df6e5623f56e87c29ceaaedf3b47b4662d233279d5" exitCode=2 Mar 19 12:22:10.332817 master-0 kubenswrapper[31830]: I0319 12:22:10.332788 31830 generic.go:334] "Generic (PLEG): container finished" podID="09672015532ae9d1d74ae4d426cd904b" containerID="e2254e5955e606c47be9604d12c39e06178d4d59ccf279a6986ce5edd6dc066e" exitCode=0 Mar 19 12:22:10.332935 master-0 kubenswrapper[31830]: I0319 12:22:10.332558 31830 scope.go:117] "RemoveContainer" containerID="a2f2d3c455898f0dff08ce78d00fccc2ef15d161401b675e3b61d3fc312756c6" Mar 19 12:22:10.333035 master-0 kubenswrapper[31830]: I0319 12:22:10.332642 31830 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="09672015532ae9d1d74ae4d426cd904b" podUID="9be86b25fc7753f0a0b1d72c639e6610" Mar 19 12:22:10.333121 master-0 kubenswrapper[31830]: I0319 12:22:10.332912 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="401877adce8e78dfdd3ac293a53a75da77fa4a3177086a087aa6915ac4d36604" Mar 19 12:22:10.391248 master-0 kubenswrapper[31830]: I0319 12:22:10.391194 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_09672015532ae9d1d74ae4d426cd904b/kube-controller-manager-cert-syncer/0.log" Mar 19 12:22:10.391872 master-0 kubenswrapper[31830]: I0319 12:22:10.391855 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:22:10.395600 master-0 kubenswrapper[31830]: I0319 12:22:10.395574 31830 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="09672015532ae9d1d74ae4d426cd904b" podUID="9be86b25fc7753f0a0b1d72c639e6610" Mar 19 12:22:10.402552 master-0 kubenswrapper[31830]: I0319 12:22:10.402519 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/09672015532ae9d1d74ae4d426cd904b-cert-dir\") pod \"09672015532ae9d1d74ae4d426cd904b\" (UID: \"09672015532ae9d1d74ae4d426cd904b\") " Mar 19 12:22:10.402657 master-0 kubenswrapper[31830]: I0319 12:22:10.402610 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09672015532ae9d1d74ae4d426cd904b-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "09672015532ae9d1d74ae4d426cd904b" (UID: "09672015532ae9d1d74ae4d426cd904b"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:22:10.402704 master-0 kubenswrapper[31830]: I0319 12:22:10.402656 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/09672015532ae9d1d74ae4d426cd904b-resource-dir\") pod \"09672015532ae9d1d74ae4d426cd904b\" (UID: \"09672015532ae9d1d74ae4d426cd904b\") " Mar 19 12:22:10.402704 master-0 kubenswrapper[31830]: I0319 12:22:10.402687 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09672015532ae9d1d74ae4d426cd904b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "09672015532ae9d1d74ae4d426cd904b" (UID: "09672015532ae9d1d74ae4d426cd904b"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:22:10.402885 master-0 kubenswrapper[31830]: I0319 12:22:10.402865 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9be86b25fc7753f0a0b1d72c639e6610-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"9be86b25fc7753f0a0b1d72c639e6610\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:22:10.402954 master-0 kubenswrapper[31830]: I0319 12:22:10.402940 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9be86b25fc7753f0a0b1d72c639e6610-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"9be86b25fc7753f0a0b1d72c639e6610\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:22:10.402994 master-0 kubenswrapper[31830]: I0319 12:22:10.402955 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9be86b25fc7753f0a0b1d72c639e6610-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"9be86b25fc7753f0a0b1d72c639e6610\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:22:10.403123 master-0 kubenswrapper[31830]: I0319 12:22:10.403086 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9be86b25fc7753f0a0b1d72c639e6610-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"9be86b25fc7753f0a0b1d72c639e6610\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:22:10.403123 master-0 kubenswrapper[31830]: I0319 12:22:10.403112 31830 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/09672015532ae9d1d74ae4d426cd904b-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 12:22:10.403198 master-0 kubenswrapper[31830]: I0319 12:22:10.403154 31830 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/09672015532ae9d1d74ae4d426cd904b-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 12:22:11.343504 master-0 kubenswrapper[31830]: I0319 12:22:11.343408 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_09672015532ae9d1d74ae4d426cd904b/kube-controller-manager-cert-syncer/0.log" Mar 19 12:22:11.344991 master-0 kubenswrapper[31830]: I0319 12:22:11.344826 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:22:11.348640 master-0 kubenswrapper[31830]: I0319 12:22:11.348603 31830 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="09672015532ae9d1d74ae4d426cd904b" podUID="9be86b25fc7753f0a0b1d72c639e6610" Mar 19 12:22:11.362814 master-0 kubenswrapper[31830]: I0319 12:22:11.362741 31830 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="09672015532ae9d1d74ae4d426cd904b" podUID="9be86b25fc7753f0a0b1d72c639e6610" Mar 19 12:22:11.663695 master-0 kubenswrapper[31830]: I0319 12:22:11.663145 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Mar 19 12:22:11.698465 master-0 kubenswrapper[31830]: I0319 12:22:11.698404 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09672015532ae9d1d74ae4d426cd904b" path="/var/lib/kubelet/pods/09672015532ae9d1d74ae4d426cd904b/volumes" Mar 19 12:22:11.726592 master-0 kubenswrapper[31830]: I0319 12:22:11.726511 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ccf67da5-2a55-45c8-b62d-36d0834a9c2b-var-lock\") pod \"ccf67da5-2a55-45c8-b62d-36d0834a9c2b\" (UID: \"ccf67da5-2a55-45c8-b62d-36d0834a9c2b\") " Mar 19 12:22:11.726971 master-0 kubenswrapper[31830]: I0319 12:22:11.726725 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ccf67da5-2a55-45c8-b62d-36d0834a9c2b-kube-api-access\") pod \"ccf67da5-2a55-45c8-b62d-36d0834a9c2b\" (UID: \"ccf67da5-2a55-45c8-b62d-36d0834a9c2b\") " Mar 19 12:22:11.726971 master-0 kubenswrapper[31830]: I0319 12:22:11.726696 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccf67da5-2a55-45c8-b62d-36d0834a9c2b-var-lock" (OuterVolumeSpecName: "var-lock") pod "ccf67da5-2a55-45c8-b62d-36d0834a9c2b" (UID: "ccf67da5-2a55-45c8-b62d-36d0834a9c2b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:22:11.727077 master-0 kubenswrapper[31830]: I0319 12:22:11.727020 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ccf67da5-2a55-45c8-b62d-36d0834a9c2b-kubelet-dir\") pod \"ccf67da5-2a55-45c8-b62d-36d0834a9c2b\" (UID: \"ccf67da5-2a55-45c8-b62d-36d0834a9c2b\") " Mar 19 12:22:11.727120 master-0 kubenswrapper[31830]: I0319 12:22:11.727100 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccf67da5-2a55-45c8-b62d-36d0834a9c2b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ccf67da5-2a55-45c8-b62d-36d0834a9c2b" (UID: "ccf67da5-2a55-45c8-b62d-36d0834a9c2b"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:22:11.727494 master-0 kubenswrapper[31830]: I0319 12:22:11.727451 31830 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ccf67da5-2a55-45c8-b62d-36d0834a9c2b-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 12:22:11.727494 master-0 kubenswrapper[31830]: I0319 12:22:11.727480 31830 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ccf67da5-2a55-45c8-b62d-36d0834a9c2b-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 19 12:22:11.731173 master-0 kubenswrapper[31830]: I0319 12:22:11.731121 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccf67da5-2a55-45c8-b62d-36d0834a9c2b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ccf67da5-2a55-45c8-b62d-36d0834a9c2b" (UID: "ccf67da5-2a55-45c8-b62d-36d0834a9c2b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:22:11.828776 master-0 kubenswrapper[31830]: I0319 12:22:11.828657 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ccf67da5-2a55-45c8-b62d-36d0834a9c2b-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 19 12:22:12.353541 master-0 kubenswrapper[31830]: I0319 12:22:12.353450 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"ccf67da5-2a55-45c8-b62d-36d0834a9c2b","Type":"ContainerDied","Data":"ab3519beb898ea3f220fc37f85c9c54d6bbfb5f2c80b76f435349b0fdb6830e2"} Mar 19 12:22:12.353541 master-0 kubenswrapper[31830]: I0319 12:22:12.353523 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Mar 19 12:22:12.354180 master-0 kubenswrapper[31830]: I0319 12:22:12.353527 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab3519beb898ea3f220fc37f85c9c54d6bbfb5f2c80b76f435349b0fdb6830e2" Mar 19 12:22:22.678099 master-0 kubenswrapper[31830]: I0319 12:22:22.678028 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:22:22.694071 master-0 kubenswrapper[31830]: I0319 12:22:22.694027 31830 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="2febd7cd-583a-46ea-9569-2f0917b00df6" Mar 19 12:22:22.694071 master-0 kubenswrapper[31830]: I0319 12:22:22.694064 31830 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="2febd7cd-583a-46ea-9569-2f0917b00df6" Mar 19 12:22:22.704416 master-0 kubenswrapper[31830]: I0319 12:22:22.704361 31830 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:22:22.712126 master-0 kubenswrapper[31830]: I0319 12:22:22.711726 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 19 12:22:22.716041 master-0 kubenswrapper[31830]: I0319 12:22:22.715999 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:22:22.718683 master-0 kubenswrapper[31830]: I0319 12:22:22.718333 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 19 12:22:22.733919 master-0 kubenswrapper[31830]: I0319 12:22:22.733864 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 19 12:22:22.751112 master-0 kubenswrapper[31830]: W0319 12:22:22.751059 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9be86b25fc7753f0a0b1d72c639e6610.slice/crio-9ed4b95b6f987a56281a109059f5d94e03a358c7ccfa80066e3e964e307ec67b WatchSource:0}: Error finding container 9ed4b95b6f987a56281a109059f5d94e03a358c7ccfa80066e3e964e307ec67b: Status 404 returned error can't find the container with id 9ed4b95b6f987a56281a109059f5d94e03a358c7ccfa80066e3e964e307ec67b Mar 19 12:22:23.444555 master-0 kubenswrapper[31830]: I0319 12:22:23.444462 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9be86b25fc7753f0a0b1d72c639e6610","Type":"ContainerStarted","Data":"c4379ad1e81582d224dadd8f93d4fdf1fdd70336209d116e3fcc4c0f845cabed"} Mar 19 12:22:23.444555 master-0 kubenswrapper[31830]: I0319 12:22:23.444543 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9be86b25fc7753f0a0b1d72c639e6610","Type":"ContainerStarted","Data":"58379c55b0aa48a895ce40e0e0940558cc73c9662118cd4ffd5118d979ab7da2"} Mar 19 12:22:23.444555 master-0 kubenswrapper[31830]: I0319 12:22:23.444558 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9be86b25fc7753f0a0b1d72c639e6610","Type":"ContainerStarted","Data":"5a434db3fb9495078c4e7b495af3f29acf4f02240427d40a228cf7caf508d997"} Mar 19 12:22:23.444555 master-0 kubenswrapper[31830]: I0319 12:22:23.444570 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9be86b25fc7753f0a0b1d72c639e6610","Type":"ContainerStarted","Data":"9ed4b95b6f987a56281a109059f5d94e03a358c7ccfa80066e3e964e307ec67b"} Mar 19 12:22:24.456291 master-0 kubenswrapper[31830]: I0319 12:22:24.456201 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9be86b25fc7753f0a0b1d72c639e6610","Type":"ContainerStarted","Data":"162e8714d1ed4fe9a49006c14c550da3fe42d7cb31d5647e797950a9527a5f67"} Mar 19 12:22:24.491747 master-0 kubenswrapper[31830]: I0319 12:22:24.490680 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.4906533140000002 podStartE2EDuration="2.490653314s" podCreationTimestamp="2026-03-19 12:22:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:22:24.484398364 +0000 UTC m=+483.033359068" watchObservedRunningTime="2026-03-19 12:22:24.490653314 +0000 UTC m=+483.039614038" Mar 19 12:22:32.716922 master-0 kubenswrapper[31830]: I0319 12:22:32.716876 31830 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:22:32.716922 master-0 kubenswrapper[31830]: I0319 12:22:32.716923 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:22:32.717403 master-0 kubenswrapper[31830]: I0319 12:22:32.716935 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:22:32.717403 master-0 kubenswrapper[31830]: I0319 12:22:32.716945 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:22:32.718834 master-0 kubenswrapper[31830]: I0319 12:22:32.717636 31830 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 19 12:22:32.719787 master-0 kubenswrapper[31830]: I0319 12:22:32.718884 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="9be86b25fc7753f0a0b1d72c639e6610" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 19 12:22:32.721469 master-0 kubenswrapper[31830]: I0319 12:22:32.721433 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:22:33.533333 master-0 kubenswrapper[31830]: I0319 12:22:33.533262 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:22:35.022010 master-0 kubenswrapper[31830]: I0319 12:22:35.021943 31830 scope.go:117] "RemoveContainer" containerID="3fff7305ffab3c7b2d64fb017b4d322893f65a346d3d05dc9207a0c3f727bb4b" Mar 19 12:22:35.048494 master-0 kubenswrapper[31830]: I0319 12:22:35.048453 31830 scope.go:117] "RemoveContainer" containerID="e2254e5955e606c47be9604d12c39e06178d4d59ccf279a6986ce5edd6dc066e" Mar 19 12:22:35.073755 master-0 kubenswrapper[31830]: I0319 12:22:35.073680 31830 scope.go:117] "RemoveContainer" containerID="14e2eab8d6fc7f70b2c656df6e5623f56e87c29ceaaedf3b47b4662d233279d5" Mar 19 12:22:42.721748 master-0 kubenswrapper[31830]: I0319 12:22:42.721677 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:22:42.726696 master-0 kubenswrapper[31830]: I0319 12:22:42.726666 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 12:22:58.706226 master-0 kubenswrapper[31830]: I0319 12:22:58.706164 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-59477995f9-xr2x4"] Mar 19 12:22:58.706860 master-0 kubenswrapper[31830]: E0319 12:22:58.706518 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccf67da5-2a55-45c8-b62d-36d0834a9c2b" containerName="installer" Mar 19 12:22:58.706860 master-0 kubenswrapper[31830]: I0319 12:22:58.706537 31830 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ccf67da5-2a55-45c8-b62d-36d0834a9c2b" containerName="installer" Mar 19 12:22:58.706860 master-0 kubenswrapper[31830]: I0319 12:22:58.706745 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccf67da5-2a55-45c8-b62d-36d0834a9c2b" containerName="installer" Mar 19 12:22:58.707372 master-0 kubenswrapper[31830]: I0319 12:22:58.707320 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-59477995f9-xr2x4" Mar 19 12:22:58.709445 master-0 kubenswrapper[31830]: I0319 12:22:58.709413 31830 reflector.go:368] Caches populated for *v1.Secret from object-"sushy-emulator"/"os-client-config" Mar 19 12:22:58.709642 master-0 kubenswrapper[31830]: I0319 12:22:58.709625 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"kube-root-ca.crt" Mar 19 12:22:58.709774 master-0 kubenswrapper[31830]: I0319 12:22:58.709757 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"openshift-service-ca.crt" Mar 19 12:22:58.711416 master-0 kubenswrapper[31830]: I0319 12:22:58.711387 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"sushy-emulator-config" Mar 19 12:22:58.723281 master-0 kubenswrapper[31830]: I0319 12:22:58.723236 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-59477995f9-xr2x4"] Mar 19 12:22:58.726233 master-0 kubenswrapper[31830]: I0319 12:22:58.726180 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/8b0ad3e5-2568-41fc-93e6-149704905dda-os-client-config\") pod \"sushy-emulator-59477995f9-xr2x4\" (UID: \"8b0ad3e5-2568-41fc-93e6-149704905dda\") " pod="sushy-emulator/sushy-emulator-59477995f9-xr2x4" Mar 19 12:22:58.726385 master-0 kubenswrapper[31830]: I0319 12:22:58.726363 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5pjv\" (UniqueName: \"kubernetes.io/projected/8b0ad3e5-2568-41fc-93e6-149704905dda-kube-api-access-l5pjv\") pod \"sushy-emulator-59477995f9-xr2x4\" (UID: \"8b0ad3e5-2568-41fc-93e6-149704905dda\") " pod="sushy-emulator/sushy-emulator-59477995f9-xr2x4" Mar 19 12:22:58.726482 master-0 kubenswrapper[31830]: I0319 12:22:58.726458 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/8b0ad3e5-2568-41fc-93e6-149704905dda-sushy-emulator-config\") pod \"sushy-emulator-59477995f9-xr2x4\" (UID: \"8b0ad3e5-2568-41fc-93e6-149704905dda\") " pod="sushy-emulator/sushy-emulator-59477995f9-xr2x4" Mar 19 12:22:58.827183 master-0 kubenswrapper[31830]: I0319 12:22:58.827123 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/8b0ad3e5-2568-41fc-93e6-149704905dda-os-client-config\") pod \"sushy-emulator-59477995f9-xr2x4\" (UID: \"8b0ad3e5-2568-41fc-93e6-149704905dda\") " pod="sushy-emulator/sushy-emulator-59477995f9-xr2x4" Mar 19 12:22:58.827504 master-0 kubenswrapper[31830]: I0319 12:22:58.827230 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5pjv\" (UniqueName: \"kubernetes.io/projected/8b0ad3e5-2568-41fc-93e6-149704905dda-kube-api-access-l5pjv\") pod \"sushy-emulator-59477995f9-xr2x4\" (UID: 
\"8b0ad3e5-2568-41fc-93e6-149704905dda\") " pod="sushy-emulator/sushy-emulator-59477995f9-xr2x4" Mar 19 12:22:58.827504 master-0 kubenswrapper[31830]: I0319 12:22:58.827274 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/8b0ad3e5-2568-41fc-93e6-149704905dda-sushy-emulator-config\") pod \"sushy-emulator-59477995f9-xr2x4\" (UID: \"8b0ad3e5-2568-41fc-93e6-149704905dda\") " pod="sushy-emulator/sushy-emulator-59477995f9-xr2x4" Mar 19 12:22:58.828362 master-0 kubenswrapper[31830]: I0319 12:22:58.828341 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/8b0ad3e5-2568-41fc-93e6-149704905dda-sushy-emulator-config\") pod \"sushy-emulator-59477995f9-xr2x4\" (UID: \"8b0ad3e5-2568-41fc-93e6-149704905dda\") " pod="sushy-emulator/sushy-emulator-59477995f9-xr2x4" Mar 19 12:22:58.830157 master-0 kubenswrapper[31830]: I0319 12:22:58.830119 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/8b0ad3e5-2568-41fc-93e6-149704905dda-os-client-config\") pod \"sushy-emulator-59477995f9-xr2x4\" (UID: \"8b0ad3e5-2568-41fc-93e6-149704905dda\") " pod="sushy-emulator/sushy-emulator-59477995f9-xr2x4" Mar 19 12:22:58.843385 master-0 kubenswrapper[31830]: I0319 12:22:58.843135 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5pjv\" (UniqueName: \"kubernetes.io/projected/8b0ad3e5-2568-41fc-93e6-149704905dda-kube-api-access-l5pjv\") pod \"sushy-emulator-59477995f9-xr2x4\" (UID: \"8b0ad3e5-2568-41fc-93e6-149704905dda\") " pod="sushy-emulator/sushy-emulator-59477995f9-xr2x4" Mar 19 12:22:59.032114 master-0 kubenswrapper[31830]: I0319 12:22:59.032060 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-59477995f9-xr2x4" Mar 19 12:22:59.491555 master-0 kubenswrapper[31830]: I0319 12:22:59.491487 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-59477995f9-xr2x4"] Mar 19 12:22:59.495166 master-0 kubenswrapper[31830]: I0319 12:22:59.495132 31830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 19 12:22:59.740785 master-0 kubenswrapper[31830]: I0319 12:22:59.740630 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-59477995f9-xr2x4" event={"ID":"8b0ad3e5-2568-41fc-93e6-149704905dda","Type":"ContainerStarted","Data":"a0bb7140ff177265366d4febcf25d15fabef29113c63863efa0f9e8571b9d01c"} Mar 19 12:23:06.791903 master-0 kubenswrapper[31830]: I0319 12:23:06.791811 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-59477995f9-xr2x4" event={"ID":"8b0ad3e5-2568-41fc-93e6-149704905dda","Type":"ContainerStarted","Data":"450b0a758434be97818d83430871b93d083a5ad4d16245c56a8a673afd98a931"} Mar 19 12:23:06.819682 master-0 kubenswrapper[31830]: I0319 12:23:06.819552 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-59477995f9-xr2x4" podStartSLOduration=2.603941326 podStartE2EDuration="8.81952523s" podCreationTimestamp="2026-03-19 12:22:58 +0000 UTC" firstStartedPulling="2026-03-19 12:22:59.495081973 +0000 UTC m=+518.044042677" lastFinishedPulling="2026-03-19 12:23:05.710665877 +0000 UTC m=+524.259626581" observedRunningTime="2026-03-19 12:23:06.810738923 +0000 UTC m=+525.359699637" watchObservedRunningTime="2026-03-19 12:23:06.81952523 +0000 UTC m=+525.368485944" Mar 19 12:23:09.033108 master-0 kubenswrapper[31830]: I0319 12:23:09.033029 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-59477995f9-xr2x4" Mar 19 12:23:09.033615 master-0 kubenswrapper[31830]: I0319 12:23:09.033127 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-59477995f9-xr2x4" Mar 19 12:23:09.049454 master-0 kubenswrapper[31830]: I0319 12:23:09.049381 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-59477995f9-xr2x4" Mar 19 12:23:09.816948 master-0 kubenswrapper[31830]: I0319 12:23:09.816862 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-59477995f9-xr2x4" Mar 19 12:23:29.286599 master-0 kubenswrapper[31830]: I0319 12:23:29.286536 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-poller-788b6c6f59-pjj5j"] Mar 19 12:23:29.288188 master-0 kubenswrapper[31830]: I0319 12:23:29.288149 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-poller-788b6c6f59-pjj5j" Mar 19 12:23:29.291312 master-0 kubenswrapper[31830]: I0319 12:23:29.291242 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q95w\" (UniqueName: \"kubernetes.io/projected/97798523-b52e-456f-8883-1743f33bc097-kube-api-access-4q95w\") pod \"nova-console-poller-788b6c6f59-pjj5j\" (UID: \"97798523-b52e-456f-8883-1743f33bc097\") " pod="sushy-emulator/nova-console-poller-788b6c6f59-pjj5j" Mar 19 12:23:29.291512 master-0 kubenswrapper[31830]: I0319 12:23:29.291362 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/97798523-b52e-456f-8883-1743f33bc097-os-client-config\") pod \"nova-console-poller-788b6c6f59-pjj5j\" (UID: \"97798523-b52e-456f-8883-1743f33bc097\") " pod="sushy-emulator/nova-console-poller-788b6c6f59-pjj5j" Mar 19 12:23:29.298065 master-0 kubenswrapper[31830]: I0319 12:23:29.298016 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-788b6c6f59-pjj5j"] Mar 19 12:23:29.392453 master-0 kubenswrapper[31830]: I0319 12:23:29.392380 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q95w\" (UniqueName: \"kubernetes.io/projected/97798523-b52e-456f-8883-1743f33bc097-kube-api-access-4q95w\") pod \"nova-console-poller-788b6c6f59-pjj5j\" (UID: \"97798523-b52e-456f-8883-1743f33bc097\") " pod="sushy-emulator/nova-console-poller-788b6c6f59-pjj5j" Mar 19 12:23:29.392746 master-0 kubenswrapper[31830]: I0319 12:23:29.392702 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/97798523-b52e-456f-8883-1743f33bc097-os-client-config\") pod \"nova-console-poller-788b6c6f59-pjj5j\" (UID: \"97798523-b52e-456f-8883-1743f33bc097\") " pod="sushy-emulator/nova-console-poller-788b6c6f59-pjj5j" Mar 19 12:23:29.397522 master-0 kubenswrapper[31830]: I0319 12:23:29.397477 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/97798523-b52e-456f-8883-1743f33bc097-os-client-config\") pod \"nova-console-poller-788b6c6f59-pjj5j\" (UID: \"97798523-b52e-456f-8883-1743f33bc097\") " pod="sushy-emulator/nova-console-poller-788b6c6f59-pjj5j" Mar 19 12:23:29.406644 master-0 kubenswrapper[31830]: I0319 12:23:29.406607 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q95w\" (UniqueName: \"kubernetes.io/projected/97798523-b52e-456f-8883-1743f33bc097-kube-api-access-4q95w\") pod \"nova-console-poller-788b6c6f59-pjj5j\" (UID: \"97798523-b52e-456f-8883-1743f33bc097\") " pod="sushy-emulator/nova-console-poller-788b6c6f59-pjj5j" Mar 19 12:23:29.608774 master-0 kubenswrapper[31830]: I0319 12:23:29.608593 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-poller-788b6c6f59-pjj5j" Mar 19 12:23:30.056826 master-0 kubenswrapper[31830]: I0319 12:23:30.056728 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-788b6c6f59-pjj5j"] Mar 19 12:23:30.060462 master-0 kubenswrapper[31830]: W0319 12:23:30.060396 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97798523_b52e_456f_8883_1743f33bc097.slice/crio-8de5bc4ee3a65e9a406ac7be6d10dfab3dd7c2fd5f8123fbff33617ea9082f0e WatchSource:0}: Error finding container 8de5bc4ee3a65e9a406ac7be6d10dfab3dd7c2fd5f8123fbff33617ea9082f0e: Status 404 returned error can't find the container with id 8de5bc4ee3a65e9a406ac7be6d10dfab3dd7c2fd5f8123fbff33617ea9082f0e Mar 19 12:23:30.979984 master-0 kubenswrapper[31830]: I0319 12:23:30.979887 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-788b6c6f59-pjj5j" event={"ID":"97798523-b52e-456f-8883-1743f33bc097","Type":"ContainerStarted","Data":"8de5bc4ee3a65e9a406ac7be6d10dfab3dd7c2fd5f8123fbff33617ea9082f0e"} Mar 19 12:23:36.019336 master-0 kubenswrapper[31830]: I0319 12:23:36.019165 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-788b6c6f59-pjj5j" event={"ID":"97798523-b52e-456f-8883-1743f33bc097","Type":"ContainerStarted","Data":"60d980742b99b21efe706c060a89fcf0180bada3f1dbbb9008679349bc2398c0"} Mar 19 12:23:37.026281 master-0 kubenswrapper[31830]: I0319 12:23:37.026197 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-788b6c6f59-pjj5j" event={"ID":"97798523-b52e-456f-8883-1743f33bc097","Type":"ContainerStarted","Data":"969811dd3714641865c6a9e1c5fd20d425373d0439438c7c515662ccdcd150f0"} Mar 19 12:23:37.045361 master-0 kubenswrapper[31830]: I0319 12:23:37.045248 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/nova-console-poller-788b6c6f59-pjj5j" podStartSLOduration=1.7290068870000002 podStartE2EDuration="8.045223396s" podCreationTimestamp="2026-03-19 12:23:29 +0000 UTC" firstStartedPulling="2026-03-19 12:23:30.062209205 +0000 UTC m=+548.611169909" lastFinishedPulling="2026-03-19 12:23:36.378425704 +0000 UTC m=+554.927386418" observedRunningTime="2026-03-19 12:23:37.041696248 +0000 UTC m=+555.590656972" watchObservedRunningTime="2026-03-19 12:23:37.045223396 +0000 UTC m=+555.594184100" Mar 19 12:24:02.365555 master-0 kubenswrapper[31830]: I0319 12:24:02.365390 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-recorder-67b9dc5579-kt6tv"] Mar 19 12:24:02.367295 master-0 kubenswrapper[31830]: I0319 12:24:02.367248 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-recorder-67b9dc5579-kt6tv" Mar 19 12:24:02.385622 master-0 kubenswrapper[31830]: I0319 12:24:02.385586 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-67b9dc5579-kt6tv"] Mar 19 12:24:02.410060 master-0 kubenswrapper[31830]: I0319 12:24:02.409977 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/45262fbb-7795-4ac2-88ff-409bc7cc56ac-os-client-config\") pod \"nova-console-recorder-67b9dc5579-kt6tv\" (UID: \"45262fbb-7795-4ac2-88ff-409bc7cc56ac\") " pod="sushy-emulator/nova-console-recorder-67b9dc5579-kt6tv" Mar 19 12:24:02.512220 master-0 kubenswrapper[31830]: I0319 12:24:02.512170 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/45262fbb-7795-4ac2-88ff-409bc7cc56ac-os-client-config\") pod \"nova-console-recorder-67b9dc5579-kt6tv\" (UID: \"45262fbb-7795-4ac2-88ff-409bc7cc56ac\") " pod="sushy-emulator/nova-console-recorder-67b9dc5579-kt6tv" Mar 19 12:24:02.512487 master-0 kubenswrapper[31830]: I0319 12:24:02.512234 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mczzl\" (UniqueName: \"kubernetes.io/projected/45262fbb-7795-4ac2-88ff-409bc7cc56ac-kube-api-access-mczzl\") pod \"nova-console-recorder-67b9dc5579-kt6tv\" (UID: \"45262fbb-7795-4ac2-88ff-409bc7cc56ac\") " pod="sushy-emulator/nova-console-recorder-67b9dc5579-kt6tv" Mar 19 12:24:02.512538 master-0 kubenswrapper[31830]: I0319 12:24:02.512484 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/45262fbb-7795-4ac2-88ff-409bc7cc56ac-nova-console-recordings-pv\") pod \"nova-console-recorder-67b9dc5579-kt6tv\" (UID: \"45262fbb-7795-4ac2-88ff-409bc7cc56ac\") " pod="sushy-emulator/nova-console-recorder-67b9dc5579-kt6tv" Mar 19 12:24:02.517200 master-0 kubenswrapper[31830]: I0319 12:24:02.517164 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/45262fbb-7795-4ac2-88ff-409bc7cc56ac-os-client-config\") pod \"nova-console-recorder-67b9dc5579-kt6tv\" (UID: \"45262fbb-7795-4ac2-88ff-409bc7cc56ac\") " pod="sushy-emulator/nova-console-recorder-67b9dc5579-kt6tv" Mar 19 12:24:02.614071 master-0 kubenswrapper[31830]: I0319 12:24:02.614027 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mczzl\" (UniqueName: \"kubernetes.io/projected/45262fbb-7795-4ac2-88ff-409bc7cc56ac-kube-api-access-mczzl\") pod \"nova-console-recorder-67b9dc5579-kt6tv\" (UID: \"45262fbb-7795-4ac2-88ff-409bc7cc56ac\") " pod="sushy-emulator/nova-console-recorder-67b9dc5579-kt6tv" Mar 19 12:24:02.614360 master-0 kubenswrapper[31830]: I0319 12:24:02.614343 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/45262fbb-7795-4ac2-88ff-409bc7cc56ac-nova-console-recordings-pv\") pod \"nova-console-recorder-67b9dc5579-kt6tv\" (UID: \"45262fbb-7795-4ac2-88ff-409bc7cc56ac\") " pod="sushy-emulator/nova-console-recorder-67b9dc5579-kt6tv" Mar 19 12:24:02.629689 master-0 kubenswrapper[31830]: I0319 12:24:02.629609 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mczzl\" (UniqueName: \"kubernetes.io/projected/45262fbb-7795-4ac2-88ff-409bc7cc56ac-kube-api-access-mczzl\") pod \"nova-console-recorder-67b9dc5579-kt6tv\" (UID: \"45262fbb-7795-4ac2-88ff-409bc7cc56ac\") " pod="sushy-emulator/nova-console-recorder-67b9dc5579-kt6tv" Mar 19 12:24:03.294830 master-0 kubenswrapper[31830]: I0319 12:24:03.294713 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/45262fbb-7795-4ac2-88ff-409bc7cc56ac-nova-console-recordings-pv\") pod \"nova-console-recorder-67b9dc5579-kt6tv\" (UID: \"45262fbb-7795-4ac2-88ff-409bc7cc56ac\") " pod="sushy-emulator/nova-console-recorder-67b9dc5579-kt6tv" Mar 19 12:24:03.584885 master-0 kubenswrapper[31830]: I0319 12:24:03.584776 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-recorder-67b9dc5579-kt6tv" Mar 19 12:24:03.998778 master-0 kubenswrapper[31830]: I0319 12:24:03.997568 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-67b9dc5579-kt6tv"] Mar 19 12:24:04.003287 master-0 kubenswrapper[31830]: W0319 12:24:04.003210 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45262fbb_7795_4ac2_88ff_409bc7cc56ac.slice/crio-6fdb0700910aa86520a40cee2501b1b3205fe55d99aaf861c15b5de34bd71687 WatchSource:0}: Error finding container 6fdb0700910aa86520a40cee2501b1b3205fe55d99aaf861c15b5de34bd71687: Status 404 returned error can't find the container with id 6fdb0700910aa86520a40cee2501b1b3205fe55d99aaf861c15b5de34bd71687 Mar 19 12:24:04.257508 master-0 kubenswrapper[31830]: I0319 12:24:04.257348 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-67b9dc5579-kt6tv" event={"ID":"45262fbb-7795-4ac2-88ff-409bc7cc56ac","Type":"ContainerStarted","Data":"6fdb0700910aa86520a40cee2501b1b3205fe55d99aaf861c15b5de34bd71687"} Mar 19 12:24:13.335048 master-0 kubenswrapper[31830]: I0319 12:24:13.334979 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-67b9dc5579-kt6tv" event={"ID":"45262fbb-7795-4ac2-88ff-409bc7cc56ac","Type":"ContainerStarted","Data":"09e37d38f6c6e4d445f3280efad33fa4306a80338fc33b41f86ec858c0f9102f"} Mar 19 12:24:13.335048 master-0 kubenswrapper[31830]: I0319 12:24:13.335046 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-67b9dc5579-kt6tv" event={"ID":"45262fbb-7795-4ac2-88ff-409bc7cc56ac","Type":"ContainerStarted","Data":"28c458c1830db9c5f38f9dc11b442125d5fae987ed13bf17c7bb28e83f127320"} Mar 19 12:24:13.363667 master-0 kubenswrapper[31830]: I0319 12:24:13.363537 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/nova-console-recorder-67b9dc5579-kt6tv" podStartSLOduration=2.437539623 podStartE2EDuration="11.363512368s" podCreationTimestamp="2026-03-19 12:24:02 +0000 UTC" firstStartedPulling="2026-03-19 12:24:04.005263747 +0000 UTC m=+582.554224451" lastFinishedPulling="2026-03-19 12:24:12.931236492 +0000 UTC m=+591.480197196" observedRunningTime="2026-03-19 12:24:13.3596655 +0000 UTC m=+591.908626224" watchObservedRunningTime="2026-03-19 12:24:13.363512368 +0000 UTC m=+591.912473122" Mar 19 12:24:44.598541 master-0 kubenswrapper[31830]: I0319 12:24:44.598475 31830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-ftml6_19de6601-10d4-4112-a21f-0398d2b160d1/cluster-baremetal-operator/2.log" Mar 19 12:24:44.600060 master-0 kubenswrapper[31830]: I0319 12:24:44.600020 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-ftml6_19de6601-10d4-4112-a21f-0398d2b160d1/cluster-baremetal-operator/1.log" Mar 19 12:24:44.600457 master-0 kubenswrapper[31830]: I0319 12:24:44.600412 31830 generic.go:334] "Generic (PLEG): container finished" podID="19de6601-10d4-4112-a21f-0398d2b160d1" containerID="b2e13fc5e0e47b30a814c50b22ebab528689038f4224f101e1963ee3ecce529a" exitCode=1 Mar 19 12:24:44.600513 master-0 kubenswrapper[31830]: I0319 12:24:44.600455 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" event={"ID":"19de6601-10d4-4112-a21f-0398d2b160d1","Type":"ContainerDied","Data":"b2e13fc5e0e47b30a814c50b22ebab528689038f4224f101e1963ee3ecce529a"} Mar 19 12:24:44.600513 master-0 kubenswrapper[31830]: I0319 12:24:44.600493 31830 scope.go:117] "RemoveContainer" containerID="dbd72cd315e8f5fa6faaefc2be981b3f9a0d499a3d7eead86b3d71318cde1c34" Mar 19 12:24:44.601349 master-0 kubenswrapper[31830]: I0319 12:24:44.601251 31830 scope.go:117] "RemoveContainer" containerID="b2e13fc5e0e47b30a814c50b22ebab528689038f4224f101e1963ee3ecce529a" Mar 19 12:24:45.608768 master-0 kubenswrapper[31830]: I0319 12:24:45.608717 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-ftml6_19de6601-10d4-4112-a21f-0398d2b160d1/cluster-baremetal-operator/2.log" Mar 19 12:24:45.609306 master-0 kubenswrapper[31830]: I0319 12:24:45.609045 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-ftml6" event={"ID":"19de6601-10d4-4112-a21f-0398d2b160d1","Type":"ContainerStarted","Data":"4847293da7450fa4954f4352e2de3f901a1df502fece93e6a753117098564f14"} Mar 19 12:26:35.165554 master-0 kubenswrapper[31830]: I0319 12:26:35.165506 31830 scope.go:117] "RemoveContainer" containerID="47fbcc830547b61bd29f055979e2109f1293c920ca05c188650fe3665f2e7c8f" Mar 19 12:27:32.265400 master-0 kubenswrapper[31830]: I0319 12:27:32.265342 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/metal3-65f8c5cc94-trthc"] Mar 19 12:27:32.267452 master-0 kubenswrapper[31830]: I0319 12:27:32.267417 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/metal3-65f8c5cc94-trthc" Mar 19 12:27:32.269177 master-0 kubenswrapper[31830]: I0319 12:27:32.269144 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"metal3-ironic-password" Mar 19 12:27:32.269512 master-0 kubenswrapper[31830]: I0319 12:27:32.269486 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-qznjf" Mar 19 12:27:32.272328 master-0 kubenswrapper[31830]: I0319 12:27:32.272285 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"baremetal-operator-webhook-server-cert" Mar 19 12:27:32.275205 master-0 kubenswrapper[31830]: I0319 12:27:32.275155 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metal3-ironic-tls\" (UniqueName: \"kubernetes.io/secret/f262d280-de9c-40ab-a879-abfec51007e6-metal3-ironic-tls\") pod \"metal3-65f8c5cc94-trthc\" (UID: \"f262d280-de9c-40ab-a879-abfec51007e6\") " pod="openshift-machine-api/metal3-65f8c5cc94-trthc" Mar 19 12:27:32.275275 master-0 kubenswrapper[31830]: I0319 12:27:32.275206 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metal3-shared\" (UniqueName: \"kubernetes.io/empty-dir/f262d280-de9c-40ab-a879-abfec51007e6-metal3-shared\") pod \"metal3-65f8c5cc94-trthc\" (UID: \"f262d280-de9c-40ab-a879-abfec51007e6\") " pod="openshift-machine-api/metal3-65f8c5cc94-trthc" Mar 19 12:27:32.275349 master-0 kubenswrapper[31830]: I0319 12:27:32.275312 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crz6g\" (UniqueName: \"kubernetes.io/projected/f262d280-de9c-40ab-a879-abfec51007e6-kube-api-access-crz6g\") pod \"metal3-65f8c5cc94-trthc\" (UID: \"f262d280-de9c-40ab-a879-abfec51007e6\") " pod="openshift-machine-api/metal3-65f8c5cc94-trthc" Mar 19 12:27:32.275393 master-0 kubenswrapper[31830]: I0319 12:27:32.275378 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metal3-shared-image-cache\" (UniqueName: \"kubernetes.io/host-path/f262d280-de9c-40ab-a879-abfec51007e6-metal3-shared-image-cache\") pod \"metal3-65f8c5cc94-trthc\" (UID: \"f262d280-de9c-40ab-a879-abfec51007e6\") " pod="openshift-machine-api/metal3-65f8c5cc94-trthc" Mar 19 12:27:32.275427 master-0 kubenswrapper[31830]: I0319 12:27:32.275413 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metal3-vmedia-tls\" (UniqueName: \"kubernetes.io/secret/f262d280-de9c-40ab-a879-abfec51007e6-metal3-vmedia-tls\") pod \"metal3-65f8c5cc94-trthc\" (UID: \"f262d280-de9c-40ab-a879-abfec51007e6\") " pod="openshift-machine-api/metal3-65f8c5cc94-trthc" Mar 19 12:27:32.275463 master-0 kubenswrapper[31830]: I0319 12:27:32.275438 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f262d280-de9c-40ab-a879-abfec51007e6-trusted-ca\") pod \"metal3-65f8c5cc94-trthc\" (UID: \"f262d280-de9c-40ab-a879-abfec51007e6\") " pod="openshift-machine-api/metal3-65f8c5cc94-trthc" Mar 19 12:27:32.275524 master-0 kubenswrapper[31830]: I0319 12:27:32.275497 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metal3-ironic-basic-auth\" (UniqueName: 
\"kubernetes.io/secret/f262d280-de9c-40ab-a879-abfec51007e6-metal3-ironic-basic-auth\") pod \"metal3-65f8c5cc94-trthc\" (UID: \"f262d280-de9c-40ab-a879-abfec51007e6\") " pod="openshift-machine-api/metal3-65f8c5cc94-trthc" Mar 19 12:27:32.275958 master-0 kubenswrapper[31830]: I0319 12:27:32.275926 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"metal3-ironic-tls" Mar 19 12:27:32.285075 master-0 kubenswrapper[31830]: I0319 12:27:32.285020 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cbo-trusted-ca" Mar 19 12:27:32.376609 master-0 kubenswrapper[31830]: I0319 12:27:32.376543 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metal3-ironic-tls\" (UniqueName: \"kubernetes.io/secret/f262d280-de9c-40ab-a879-abfec51007e6-metal3-ironic-tls\") pod \"metal3-65f8c5cc94-trthc\" (UID: \"f262d280-de9c-40ab-a879-abfec51007e6\") " pod="openshift-machine-api/metal3-65f8c5cc94-trthc" Mar 19 12:27:32.376609 master-0 kubenswrapper[31830]: I0319 12:27:32.376603 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metal3-shared\" (UniqueName: \"kubernetes.io/empty-dir/f262d280-de9c-40ab-a879-abfec51007e6-metal3-shared\") pod \"metal3-65f8c5cc94-trthc\" (UID: \"f262d280-de9c-40ab-a879-abfec51007e6\") " pod="openshift-machine-api/metal3-65f8c5cc94-trthc" Mar 19 12:27:32.376946 master-0 kubenswrapper[31830]: I0319 12:27:32.376671 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crz6g\" (UniqueName: \"kubernetes.io/projected/f262d280-de9c-40ab-a879-abfec51007e6-kube-api-access-crz6g\") pod \"metal3-65f8c5cc94-trthc\" (UID: \"f262d280-de9c-40ab-a879-abfec51007e6\") " pod="openshift-machine-api/metal3-65f8c5cc94-trthc" Mar 19 12:27:32.376946 master-0 kubenswrapper[31830]: I0319 12:27:32.376717 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metal3-shared-image-cache\" (UniqueName: \"kubernetes.io/host-path/f262d280-de9c-40ab-a879-abfec51007e6-metal3-shared-image-cache\") pod \"metal3-65f8c5cc94-trthc\" (UID: \"f262d280-de9c-40ab-a879-abfec51007e6\") " pod="openshift-machine-api/metal3-65f8c5cc94-trthc" Mar 19 12:27:32.376946 master-0 kubenswrapper[31830]: I0319 12:27:32.376752 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metal3-vmedia-tls\" (UniqueName: \"kubernetes.io/secret/f262d280-de9c-40ab-a879-abfec51007e6-metal3-vmedia-tls\") pod \"metal3-65f8c5cc94-trthc\" (UID: \"f262d280-de9c-40ab-a879-abfec51007e6\") " pod="openshift-machine-api/metal3-65f8c5cc94-trthc" Mar 19 12:27:32.377086 master-0 kubenswrapper[31830]: I0319 12:27:32.376956 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f262d280-de9c-40ab-a879-abfec51007e6-trusted-ca\") pod \"metal3-65f8c5cc94-trthc\" (UID: \"f262d280-de9c-40ab-a879-abfec51007e6\") " pod="openshift-machine-api/metal3-65f8c5cc94-trthc" Mar 19 12:27:32.377086 master-0 kubenswrapper[31830]: I0319 12:27:32.376996 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metal3-shared-image-cache\" (UniqueName: \"kubernetes.io/host-path/f262d280-de9c-40ab-a879-abfec51007e6-metal3-shared-image-cache\") pod \"metal3-65f8c5cc94-trthc\" (UID: \"f262d280-de9c-40ab-a879-abfec51007e6\") " pod="openshift-machine-api/metal3-65f8c5cc94-trthc" Mar 19 12:27:32.377086 master-0 kubenswrapper[31830]: I0319 12:27:32.377038 
31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metal3-ironic-basic-auth\" (UniqueName: \"kubernetes.io/secret/f262d280-de9c-40ab-a879-abfec51007e6-metal3-ironic-basic-auth\") pod \"metal3-65f8c5cc94-trthc\" (UID: \"f262d280-de9c-40ab-a879-abfec51007e6\") " pod="openshift-machine-api/metal3-65f8c5cc94-trthc" Mar 19 12:27:32.377446 master-0 kubenswrapper[31830]: I0319 12:27:32.377404 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metal3-shared\" (UniqueName: \"kubernetes.io/empty-dir/f262d280-de9c-40ab-a879-abfec51007e6-metal3-shared\") pod \"metal3-65f8c5cc94-trthc\" (UID: \"f262d280-de9c-40ab-a879-abfec51007e6\") " pod="openshift-machine-api/metal3-65f8c5cc94-trthc" Mar 19 12:27:32.378857 master-0 kubenswrapper[31830]: I0319 12:27:32.378814 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f262d280-de9c-40ab-a879-abfec51007e6-trusted-ca\") pod \"metal3-65f8c5cc94-trthc\" (UID: \"f262d280-de9c-40ab-a879-abfec51007e6\") " pod="openshift-machine-api/metal3-65f8c5cc94-trthc" Mar 19 12:27:32.380316 master-0 kubenswrapper[31830]: I0319 12:27:32.380284 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metal3-ironic-tls\" (UniqueName: \"kubernetes.io/secret/f262d280-de9c-40ab-a879-abfec51007e6-metal3-ironic-tls\") pod \"metal3-65f8c5cc94-trthc\" (UID: \"f262d280-de9c-40ab-a879-abfec51007e6\") " pod="openshift-machine-api/metal3-65f8c5cc94-trthc" Mar 19 12:27:32.380441 master-0 kubenswrapper[31830]: I0319 12:27:32.380398 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metal3-ironic-basic-auth\" (UniqueName: \"kubernetes.io/secret/f262d280-de9c-40ab-a879-abfec51007e6-metal3-ironic-basic-auth\") pod \"metal3-65f8c5cc94-trthc\" (UID: \"f262d280-de9c-40ab-a879-abfec51007e6\") " pod="openshift-machine-api/metal3-65f8c5cc94-trthc" Mar 19 12:27:32.380613 master-0 kubenswrapper[31830]: I0319 12:27:32.380578 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metal3-vmedia-tls\" (UniqueName: \"kubernetes.io/secret/f262d280-de9c-40ab-a879-abfec51007e6-metal3-vmedia-tls\") pod \"metal3-65f8c5cc94-trthc\" (UID: \"f262d280-de9c-40ab-a879-abfec51007e6\") " pod="openshift-machine-api/metal3-65f8c5cc94-trthc" Mar 19 12:27:32.396132 master-0 kubenswrapper[31830]: I0319 12:27:32.396081 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crz6g\" (UniqueName: \"kubernetes.io/projected/f262d280-de9c-40ab-a879-abfec51007e6-kube-api-access-crz6g\") pod \"metal3-65f8c5cc94-trthc\" (UID: \"f262d280-de9c-40ab-a879-abfec51007e6\") " pod="openshift-machine-api/metal3-65f8c5cc94-trthc" Mar 19 12:27:32.582789 master-0 kubenswrapper[31830]: I0319 12:27:32.582637 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/metal3-65f8c5cc94-trthc" Mar 19 12:27:32.622256 master-0 kubenswrapper[31830]: W0319 12:27:32.622190 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf262d280_de9c_40ab_a879_abfec51007e6.slice/crio-fd9a4540c4de709b57b39de1895620dd3ba916f30dc5b5822a1b337d70238218 WatchSource:0}: Error finding container fd9a4540c4de709b57b39de1895620dd3ba916f30dc5b5822a1b337d70238218: Status 404 returned error can't find the container with id fd9a4540c4de709b57b39de1895620dd3ba916f30dc5b5822a1b337d70238218 Mar 19 12:27:32.690978 master-0 kubenswrapper[31830]: I0319 12:27:32.690871 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/metal3-65f8c5cc94-trthc" event={"ID":"f262d280-de9c-40ab-a879-abfec51007e6","Type":"ContainerStarted","Data":"fd9a4540c4de709b57b39de1895620dd3ba916f30dc5b5822a1b337d70238218"} Mar 19 12:27:32.715550 master-0 kubenswrapper[31830]: I0319 12:27:32.715496 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/metal3-baremetal-operator-78474bdc48-sl88n"] Mar 19 12:27:32.716831 master-0 kubenswrapper[31830]: I0319 12:27:32.716778 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/metal3-baremetal-operator-78474bdc48-sl88n" Mar 19 12:27:32.730820 master-0 kubenswrapper[31830]: I0319 12:27:32.730157 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/metal3-baremetal-operator-78474bdc48-sl88n"] Mar 19 12:27:32.884068 master-0 kubenswrapper[31830]: I0319 12:27:32.883975 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4d8205e-157b-4a66-9ee7-318bae255129-cert\") pod \"metal3-baremetal-operator-78474bdc48-sl88n\" (UID: \"c4d8205e-157b-4a66-9ee7-318bae255129\") " pod="openshift-machine-api/metal3-baremetal-operator-78474bdc48-sl88n" Mar 19 12:27:32.884068 master-0 kubenswrapper[31830]: I0319 12:27:32.884036 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metal3-ironic-tls\" (UniqueName: \"kubernetes.io/secret/c4d8205e-157b-4a66-9ee7-318bae255129-metal3-ironic-tls\") pod \"metal3-baremetal-operator-78474bdc48-sl88n\" (UID: \"c4d8205e-157b-4a66-9ee7-318bae255129\") " pod="openshift-machine-api/metal3-baremetal-operator-78474bdc48-sl88n" Mar 19 12:27:32.884268 master-0 kubenswrapper[31830]: I0319 12:27:32.884073 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metal3-ironic-basic-auth\" (UniqueName: \"kubernetes.io/secret/c4d8205e-157b-4a66-9ee7-318bae255129-metal3-ironic-basic-auth\") pod \"metal3-baremetal-operator-78474bdc48-sl88n\" (UID: \"c4d8205e-157b-4a66-9ee7-318bae255129\") " pod="openshift-machine-api/metal3-baremetal-operator-78474bdc48-sl88n" Mar 19 12:27:32.884268 master-0 kubenswrapper[31830]: I0319 12:27:32.884142 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4d8205e-157b-4a66-9ee7-318bae255129-trusted-ca\") pod \"metal3-baremetal-operator-78474bdc48-sl88n\" (UID: \"c4d8205e-157b-4a66-9ee7-318bae255129\") " pod="openshift-machine-api/metal3-baremetal-operator-78474bdc48-sl88n" Mar 19 12:27:32.884268 master-0 kubenswrapper[31830]: I0319 12:27:32.884201 31830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqs8w\" (UniqueName: \"kubernetes.io/projected/c4d8205e-157b-4a66-9ee7-318bae255129-kube-api-access-hqs8w\") pod \"metal3-baremetal-operator-78474bdc48-sl88n\" (UID: \"c4d8205e-157b-4a66-9ee7-318bae255129\") " pod="openshift-machine-api/metal3-baremetal-operator-78474bdc48-sl88n" Mar 19 12:27:32.985563 master-0 kubenswrapper[31830]: I0319 12:27:32.985505 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4d8205e-157b-4a66-9ee7-318bae255129-trusted-ca\") pod \"metal3-baremetal-operator-78474bdc48-sl88n\" (UID: \"c4d8205e-157b-4a66-9ee7-318bae255129\") " pod="openshift-machine-api/metal3-baremetal-operator-78474bdc48-sl88n" Mar 19 12:27:32.985563 master-0 kubenswrapper[31830]: I0319 12:27:32.985571 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqs8w\" (UniqueName: \"kubernetes.io/projected/c4d8205e-157b-4a66-9ee7-318bae255129-kube-api-access-hqs8w\") pod \"metal3-baremetal-operator-78474bdc48-sl88n\" (UID: \"c4d8205e-157b-4a66-9ee7-318bae255129\") " pod="openshift-machine-api/metal3-baremetal-operator-78474bdc48-sl88n" Mar 19 12:27:32.986054 master-0 kubenswrapper[31830]: I0319 12:27:32.985614 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4d8205e-157b-4a66-9ee7-318bae255129-cert\") pod \"metal3-baremetal-operator-78474bdc48-sl88n\" (UID: \"c4d8205e-157b-4a66-9ee7-318bae255129\") " pod="openshift-machine-api/metal3-baremetal-operator-78474bdc48-sl88n" Mar 19 12:27:32.986054 master-0 kubenswrapper[31830]: I0319 12:27:32.985644 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metal3-ironic-tls\" (UniqueName: \"kubernetes.io/secret/c4d8205e-157b-4a66-9ee7-318bae255129-metal3-ironic-tls\") pod \"metal3-baremetal-operator-78474bdc48-sl88n\" (UID: \"c4d8205e-157b-4a66-9ee7-318bae255129\") " pod="openshift-machine-api/metal3-baremetal-operator-78474bdc48-sl88n" Mar 19 12:27:32.986054 master-0 kubenswrapper[31830]: I0319 12:27:32.985671 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metal3-ironic-basic-auth\" (UniqueName: \"kubernetes.io/secret/c4d8205e-157b-4a66-9ee7-318bae255129-metal3-ironic-basic-auth\") pod \"metal3-baremetal-operator-78474bdc48-sl88n\" (UID: \"c4d8205e-157b-4a66-9ee7-318bae255129\") " pod="openshift-machine-api/metal3-baremetal-operator-78474bdc48-sl88n" Mar 19 12:27:32.986680 master-0 kubenswrapper[31830]: E0319 12:27:32.986656 31830 secret.go:189] Couldn't get secret openshift-machine-api/baremetal-operator-webhook-server-cert: secret "baremetal-operator-webhook-server-cert" not found Mar 19 12:27:32.986749 master-0 kubenswrapper[31830]: E0319 12:27:32.986710 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4d8205e-157b-4a66-9ee7-318bae255129-cert podName:c4d8205e-157b-4a66-9ee7-318bae255129 nodeName:}" failed. No retries permitted until 2026-03-19 12:27:33.486693591 +0000 UTC m=+792.035654295 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c4d8205e-157b-4a66-9ee7-318bae255129-cert") pod "metal3-baremetal-operator-78474bdc48-sl88n" (UID: "c4d8205e-157b-4a66-9ee7-318bae255129") : secret "baremetal-operator-webhook-server-cert" not found Mar 19 12:27:32.987411 master-0 kubenswrapper[31830]: I0319 12:27:32.987370 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4d8205e-157b-4a66-9ee7-318bae255129-trusted-ca\") pod \"metal3-baremetal-operator-78474bdc48-sl88n\" (UID: \"c4d8205e-157b-4a66-9ee7-318bae255129\") " pod="openshift-machine-api/metal3-baremetal-operator-78474bdc48-sl88n" Mar 19 12:27:32.989245 master-0 kubenswrapper[31830]: I0319 12:27:32.989210 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metal3-ironic-basic-auth\" (UniqueName: \"kubernetes.io/secret/c4d8205e-157b-4a66-9ee7-318bae255129-metal3-ironic-basic-auth\") pod \"metal3-baremetal-operator-78474bdc48-sl88n\" (UID: \"c4d8205e-157b-4a66-9ee7-318bae255129\") " pod="openshift-machine-api/metal3-baremetal-operator-78474bdc48-sl88n" Mar 19 12:27:32.991123 master-0 kubenswrapper[31830]: I0319 12:27:32.991083 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metal3-ironic-tls\" (UniqueName: \"kubernetes.io/secret/c4d8205e-157b-4a66-9ee7-318bae255129-metal3-ironic-tls\") pod \"metal3-baremetal-operator-78474bdc48-sl88n\" (UID: \"c4d8205e-157b-4a66-9ee7-318bae255129\") " pod="openshift-machine-api/metal3-baremetal-operator-78474bdc48-sl88n" Mar 19 12:27:33.006698 master-0 kubenswrapper[31830]: I0319 12:27:33.006642 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqs8w\" (UniqueName: \"kubernetes.io/projected/c4d8205e-157b-4a66-9ee7-318bae255129-kube-api-access-hqs8w\") pod \"metal3-baremetal-operator-78474bdc48-sl88n\" (UID: \"c4d8205e-157b-4a66-9ee7-318bae255129\") " pod="openshift-machine-api/metal3-baremetal-operator-78474bdc48-sl88n" Mar 19 12:27:33.493621 master-0 kubenswrapper[31830]: I0319 12:27:33.493555 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4d8205e-157b-4a66-9ee7-318bae255129-cert\") pod \"metal3-baremetal-operator-78474bdc48-sl88n\" (UID: \"c4d8205e-157b-4a66-9ee7-318bae255129\") " pod="openshift-machine-api/metal3-baremetal-operator-78474bdc48-sl88n" Mar 19 12:27:33.494149 master-0 kubenswrapper[31830]: E0319 12:27:33.493778 31830 secret.go:189] Couldn't get secret openshift-machine-api/baremetal-operator-webhook-server-cert: secret "baremetal-operator-webhook-server-cert" not found Mar 19 12:27:33.494149 master-0 kubenswrapper[31830]: E0319 12:27:33.493966 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4d8205e-157b-4a66-9ee7-318bae255129-cert podName:c4d8205e-157b-4a66-9ee7-318bae255129 nodeName:}" failed. No retries permitted until 2026-03-19 12:27:34.493896559 +0000 UTC m=+793.042857253 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c4d8205e-157b-4a66-9ee7-318bae255129-cert") pod "metal3-baremetal-operator-78474bdc48-sl88n" (UID: "c4d8205e-157b-4a66-9ee7-318bae255129") : secret "baremetal-operator-webhook-server-cert" not found Mar 19 12:27:34.509863 master-0 kubenswrapper[31830]: I0319 12:27:34.509812 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4d8205e-157b-4a66-9ee7-318bae255129-cert\") pod \"metal3-baremetal-operator-78474bdc48-sl88n\" (UID: \"c4d8205e-157b-4a66-9ee7-318bae255129\") " pod="openshift-machine-api/metal3-baremetal-operator-78474bdc48-sl88n" Mar 19 12:27:34.513294 master-0 kubenswrapper[31830]: I0319 12:27:34.513257 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4d8205e-157b-4a66-9ee7-318bae255129-cert\") pod \"metal3-baremetal-operator-78474bdc48-sl88n\" (UID: \"c4d8205e-157b-4a66-9ee7-318bae255129\") " pod="openshift-machine-api/metal3-baremetal-operator-78474bdc48-sl88n" Mar 19 12:27:34.558569 master-0 kubenswrapper[31830]: I0319 12:27:34.558490 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/metal3-baremetal-operator-78474bdc48-sl88n" Mar 19 12:27:35.350613 master-0 kubenswrapper[31830]: I0319 12:27:35.350564 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/metal3-baremetal-operator-78474bdc48-sl88n"] Mar 19 12:27:35.527951 master-0 kubenswrapper[31830]: I0319 12:27:35.527853 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/metal3-image-customization-5b889bff9b-dxbkp"] Mar 19 12:27:35.529694 master-0 kubenswrapper[31830]: I0319 12:27:35.529650 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/metal3-image-customization-5b889bff9b-dxbkp" Mar 19 12:27:35.535565 master-0 kubenswrapper[31830]: I0319 12:27:35.535510 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"pull-secret" Mar 19 12:27:35.563540 master-0 kubenswrapper[31830]: I0319 12:27:35.563472 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/metal3-image-customization-5b889bff9b-dxbkp"] Mar 19 12:27:35.646637 master-0 kubenswrapper[31830]: I0319 12:27:35.640250 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ironic-agent-pull-secret\" (UniqueName: \"kubernetes.io/secret/ed8f0c5d-4f16-444c-b706-e78cf4036b87-ironic-agent-pull-secret\") pod \"metal3-image-customization-5b889bff9b-dxbkp\" (UID: \"ed8f0c5d-4f16-444c-b706-e78cf4036b87\") " pod="openshift-machine-api/metal3-image-customization-5b889bff9b-dxbkp" Mar 19 12:27:35.646637 master-0 kubenswrapper[31830]: I0319 12:27:35.640308 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metal3-shared-image-cache\" (UniqueName: \"kubernetes.io/host-path/ed8f0c5d-4f16-444c-b706-e78cf4036b87-metal3-shared-image-cache\") pod \"metal3-image-customization-5b889bff9b-dxbkp\" (UID: \"ed8f0c5d-4f16-444c-b706-e78cf4036b87\") " pod="openshift-machine-api/metal3-image-customization-5b889bff9b-dxbkp" Mar 19 12:27:35.646637 master-0 kubenswrapper[31830]: I0319 12:27:35.640353 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"user-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/ed8f0c5d-4f16-444c-b706-e78cf4036b87-user-ca-bundle\") pod \"metal3-image-customization-5b889bff9b-dxbkp\" (UID: \"ed8f0c5d-4f16-444c-b706-e78cf4036b87\") " pod="openshift-machine-api/metal3-image-customization-5b889bff9b-dxbkp" Mar 19 12:27:35.646637 master-0 kubenswrapper[31830]: I0319 12:27:35.640370 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99dct\" (UniqueName: \"kubernetes.io/projected/ed8f0c5d-4f16-444c-b706-e78cf4036b87-kube-api-access-99dct\") pod \"metal3-image-customization-5b889bff9b-dxbkp\" (UID: \"ed8f0c5d-4f16-444c-b706-e78cf4036b87\") " pod="openshift-machine-api/metal3-image-customization-5b889bff9b-dxbkp" Mar 19 12:27:35.646637 master-0 kubenswrapper[31830]: I0319 12:27:35.640405 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metal3-image-customization-volume\" (UniqueName: \"kubernetes.io/host-path/ed8f0c5d-4f16-444c-b706-e78cf4036b87-metal3-image-customization-volume\") pod \"metal3-image-customization-5b889bff9b-dxbkp\" (UID: \"ed8f0c5d-4f16-444c-b706-e78cf4036b87\") " pod="openshift-machine-api/metal3-image-customization-5b889bff9b-dxbkp" Mar 19 12:27:35.646637 master-0 kubenswrapper[31830]: I0319 12:27:35.640430 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ed8f0c5d-4f16-444c-b706-e78cf4036b87-trusted-ca\") pod \"metal3-image-customization-5b889bff9b-dxbkp\" (UID: \"ed8f0c5d-4f16-444c-b706-e78cf4036b87\") " pod="openshift-machine-api/metal3-image-customization-5b889bff9b-dxbkp" Mar 19 12:27:35.720821 master-0 kubenswrapper[31830]: I0319 12:27:35.720584 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/metal3-baremetal-operator-78474bdc48-sl88n" 
event={"ID":"c4d8205e-157b-4a66-9ee7-318bae255129","Type":"ContainerStarted","Data":"09e0421566c4356fd36e5e02f6813306d0e830fd2c927e92ac007cc2af0aec00"} Mar 19 12:27:35.741865 master-0 kubenswrapper[31830]: I0319 12:27:35.741582 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ed8f0c5d-4f16-444c-b706-e78cf4036b87-trusted-ca\") pod \"metal3-image-customization-5b889bff9b-dxbkp\" (UID: \"ed8f0c5d-4f16-444c-b706-e78cf4036b87\") " pod="openshift-machine-api/metal3-image-customization-5b889bff9b-dxbkp" Mar 19 12:27:35.741865 master-0 kubenswrapper[31830]: I0319 12:27:35.741669 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ironic-agent-pull-secret\" (UniqueName: \"kubernetes.io/secret/ed8f0c5d-4f16-444c-b706-e78cf4036b87-ironic-agent-pull-secret\") pod \"metal3-image-customization-5b889bff9b-dxbkp\" (UID: \"ed8f0c5d-4f16-444c-b706-e78cf4036b87\") " pod="openshift-machine-api/metal3-image-customization-5b889bff9b-dxbkp" Mar 19 12:27:35.742137 master-0 kubenswrapper[31830]: I0319 12:27:35.741921 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metal3-shared-image-cache\" (UniqueName: \"kubernetes.io/host-path/ed8f0c5d-4f16-444c-b706-e78cf4036b87-metal3-shared-image-cache\") pod \"metal3-image-customization-5b889bff9b-dxbkp\" (UID: \"ed8f0c5d-4f16-444c-b706-e78cf4036b87\") " pod="openshift-machine-api/metal3-image-customization-5b889bff9b-dxbkp" Mar 19 12:27:35.742137 master-0 kubenswrapper[31830]: I0319 12:27:35.741980 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"user-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/ed8f0c5d-4f16-444c-b706-e78cf4036b87-user-ca-bundle\") pod \"metal3-image-customization-5b889bff9b-dxbkp\" (UID: \"ed8f0c5d-4f16-444c-b706-e78cf4036b87\") " pod="openshift-machine-api/metal3-image-customization-5b889bff9b-dxbkp" Mar 19 12:27:35.742137 master-0 kubenswrapper[31830]: I0319 12:27:35.742005 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99dct\" (UniqueName: \"kubernetes.io/projected/ed8f0c5d-4f16-444c-b706-e78cf4036b87-kube-api-access-99dct\") pod \"metal3-image-customization-5b889bff9b-dxbkp\" (UID: \"ed8f0c5d-4f16-444c-b706-e78cf4036b87\") " pod="openshift-machine-api/metal3-image-customization-5b889bff9b-dxbkp" Mar 19 12:27:35.742137 master-0 kubenswrapper[31830]: I0319 12:27:35.742044 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metal3-image-customization-volume\" (UniqueName: \"kubernetes.io/host-path/ed8f0c5d-4f16-444c-b706-e78cf4036b87-metal3-image-customization-volume\") pod \"metal3-image-customization-5b889bff9b-dxbkp\" (UID: \"ed8f0c5d-4f16-444c-b706-e78cf4036b87\") " pod="openshift-machine-api/metal3-image-customization-5b889bff9b-dxbkp" Mar 19 12:27:35.742137 master-0 kubenswrapper[31830]: I0319 12:27:35.742130 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metal3-image-customization-volume\" (UniqueName: \"kubernetes.io/host-path/ed8f0c5d-4f16-444c-b706-e78cf4036b87-metal3-image-customization-volume\") pod \"metal3-image-customization-5b889bff9b-dxbkp\" (UID: \"ed8f0c5d-4f16-444c-b706-e78cf4036b87\") " pod="openshift-machine-api/metal3-image-customization-5b889bff9b-dxbkp" Mar 19 12:27:35.742395 master-0 kubenswrapper[31830]: I0319 12:27:35.742180 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metal3-shared-image-cache\" (UniqueName: \"kubernetes.io/host-path/ed8f0c5d-4f16-444c-b706-e78cf4036b87-metal3-shared-image-cache\") pod \"metal3-image-customization-5b889bff9b-dxbkp\" (UID: \"ed8f0c5d-4f16-444c-b706-e78cf4036b87\") " pod="openshift-machine-api/metal3-image-customization-5b889bff9b-dxbkp" Mar 19 12:27:35.742395 master-0 kubenswrapper[31830]: I0319 12:27:35.742360 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"user-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/ed8f0c5d-4f16-444c-b706-e78cf4036b87-user-ca-bundle\") pod \"metal3-image-customization-5b889bff9b-dxbkp\" (UID: \"ed8f0c5d-4f16-444c-b706-e78cf4036b87\") " pod="openshift-machine-api/metal3-image-customization-5b889bff9b-dxbkp" Mar 19 12:27:35.743120 master-0 kubenswrapper[31830]: I0319 12:27:35.743046 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ed8f0c5d-4f16-444c-b706-e78cf4036b87-trusted-ca\") pod \"metal3-image-customization-5b889bff9b-dxbkp\" (UID: \"ed8f0c5d-4f16-444c-b706-e78cf4036b87\") " pod="openshift-machine-api/metal3-image-customization-5b889bff9b-dxbkp" Mar 19 12:27:35.755843 master-0 kubenswrapper[31830]: I0319 12:27:35.744873 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ironic-agent-pull-secret\" (UniqueName: \"kubernetes.io/secret/ed8f0c5d-4f16-444c-b706-e78cf4036b87-ironic-agent-pull-secret\") pod \"metal3-image-customization-5b889bff9b-dxbkp\" (UID: \"ed8f0c5d-4f16-444c-b706-e78cf4036b87\") " pod="openshift-machine-api/metal3-image-customization-5b889bff9b-dxbkp" Mar 19 12:27:35.769961 master-0 kubenswrapper[31830]: I0319 12:27:35.766828 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99dct\" (UniqueName: \"kubernetes.io/projected/ed8f0c5d-4f16-444c-b706-e78cf4036b87-kube-api-access-99dct\") pod \"metal3-image-customization-5b889bff9b-dxbkp\" (UID: \"ed8f0c5d-4f16-444c-b706-e78cf4036b87\") " pod="openshift-machine-api/metal3-image-customization-5b889bff9b-dxbkp" Mar 19 12:27:35.858852 master-0 kubenswrapper[31830]: I0319 12:27:35.858735 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/metal3-image-customization-5b889bff9b-dxbkp" Mar 19 12:27:36.299102 master-0 kubenswrapper[31830]: I0319 12:27:36.298997 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/metal3-image-customization-5b889bff9b-dxbkp"] Mar 19 12:27:36.300645 master-0 kubenswrapper[31830]: W0319 12:27:36.300609 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded8f0c5d_4f16_444c_b706_e78cf4036b87.slice/crio-f516648fe9fee91b59499976dc6906ef35d89c2a36cbf6c32b03b0811aa07c4f WatchSource:0}: Error finding container f516648fe9fee91b59499976dc6906ef35d89c2a36cbf6c32b03b0811aa07c4f: Status 404 returned error can't find the container with id f516648fe9fee91b59499976dc6906ef35d89c2a36cbf6c32b03b0811aa07c4f Mar 19 12:27:36.466628 master-0 kubenswrapper[31830]: I0319 12:27:36.466573 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/ironic-proxy-mnfjh"] Mar 19 12:27:36.467563 master-0 kubenswrapper[31830]: I0319 12:27:36.467514 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/ironic-proxy-mnfjh" Mar 19 12:27:36.560496 master-0 kubenswrapper[31830]: I0319 12:27:36.560355 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metal3-ironic-tls\" (UniqueName: \"kubernetes.io/secret/0d92c44a-db10-4400-8eef-4d9930650684-metal3-ironic-tls\") pod \"ironic-proxy-mnfjh\" (UID: \"0d92c44a-db10-4400-8eef-4d9930650684\") " pod="openshift-machine-api/ironic-proxy-mnfjh" Mar 19 12:27:36.560496 master-0 kubenswrapper[31830]: I0319 12:27:36.560429 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d92c44a-db10-4400-8eef-4d9930650684-trusted-ca\") pod \"ironic-proxy-mnfjh\" (UID: \"0d92c44a-db10-4400-8eef-4d9930650684\") " pod="openshift-machine-api/ironic-proxy-mnfjh" Mar 19 12:27:36.560496 master-0 kubenswrapper[31830]: I0319 12:27:36.560483 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6jh8\" (UniqueName: \"kubernetes.io/projected/0d92c44a-db10-4400-8eef-4d9930650684-kube-api-access-s6jh8\") pod \"ironic-proxy-mnfjh\" (UID: \"0d92c44a-db10-4400-8eef-4d9930650684\") " pod="openshift-machine-api/ironic-proxy-mnfjh" Mar 19 12:27:36.661642 master-0 kubenswrapper[31830]: I0319 12:27:36.661585 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metal3-ironic-tls\" (UniqueName: \"kubernetes.io/secret/0d92c44a-db10-4400-8eef-4d9930650684-metal3-ironic-tls\") pod \"ironic-proxy-mnfjh\" (UID: \"0d92c44a-db10-4400-8eef-4d9930650684\") " pod="openshift-machine-api/ironic-proxy-mnfjh" Mar 19 12:27:36.661642 master-0 kubenswrapper[31830]: I0319 12:27:36.661637 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d92c44a-db10-4400-8eef-4d9930650684-trusted-ca\") pod \"ironic-proxy-mnfjh\" (UID: \"0d92c44a-db10-4400-8eef-4d9930650684\") " pod="openshift-machine-api/ironic-proxy-mnfjh" Mar 19 12:27:36.661935 master-0 kubenswrapper[31830]: I0319 12:27:36.661662 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6jh8\" (UniqueName: \"kubernetes.io/projected/0d92c44a-db10-4400-8eef-4d9930650684-kube-api-access-s6jh8\") pod \"ironic-proxy-mnfjh\" (UID: \"0d92c44a-db10-4400-8eef-4d9930650684\") " pod="openshift-machine-api/ironic-proxy-mnfjh" Mar 19 12:27:36.665073 master-0 kubenswrapper[31830]: I0319 12:27:36.665031 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d92c44a-db10-4400-8eef-4d9930650684-trusted-ca\") pod \"ironic-proxy-mnfjh\" (UID: \"0d92c44a-db10-4400-8eef-4d9930650684\") " pod="openshift-machine-api/ironic-proxy-mnfjh" Mar 19 12:27:36.665193 master-0 kubenswrapper[31830]: I0319 12:27:36.665041 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metal3-ironic-tls\" (UniqueName: \"kubernetes.io/secret/0d92c44a-db10-4400-8eef-4d9930650684-metal3-ironic-tls\") pod \"ironic-proxy-mnfjh\" (UID: \"0d92c44a-db10-4400-8eef-4d9930650684\") " pod="openshift-machine-api/ironic-proxy-mnfjh" Mar 19 12:27:36.679823 master-0 kubenswrapper[31830]: I0319 12:27:36.679752 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6jh8\" (UniqueName: 
\"kubernetes.io/projected/0d92c44a-db10-4400-8eef-4d9930650684-kube-api-access-s6jh8\") pod \"ironic-proxy-mnfjh\" (UID: \"0d92c44a-db10-4400-8eef-4d9930650684\") " pod="openshift-machine-api/ironic-proxy-mnfjh" Mar 19 12:27:36.734073 master-0 kubenswrapper[31830]: I0319 12:27:36.733998 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/metal3-image-customization-5b889bff9b-dxbkp" event={"ID":"ed8f0c5d-4f16-444c-b706-e78cf4036b87","Type":"ContainerStarted","Data":"f516648fe9fee91b59499976dc6906ef35d89c2a36cbf6c32b03b0811aa07c4f"} Mar 19 12:27:36.788366 master-0 kubenswrapper[31830]: I0319 12:27:36.788023 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/ironic-proxy-mnfjh" Mar 19 12:27:36.820444 master-0 kubenswrapper[31830]: W0319 12:27:36.820398 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d92c44a_db10_4400_8eef_4d9930650684.slice/crio-d7300a1e43e958d974a02142980b213bc265072fd23fc6238905aee4b2581c77 WatchSource:0}: Error finding container d7300a1e43e958d974a02142980b213bc265072fd23fc6238905aee4b2581c77: Status 404 returned error can't find the container with id d7300a1e43e958d974a02142980b213bc265072fd23fc6238905aee4b2581c77 Mar 19 12:27:37.742763 master-0 kubenswrapper[31830]: I0319 12:27:37.742703 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/ironic-proxy-mnfjh" event={"ID":"0d92c44a-db10-4400-8eef-4d9930650684","Type":"ContainerStarted","Data":"d7300a1e43e958d974a02142980b213bc265072fd23fc6238905aee4b2581c77"} Mar 19 12:27:38.753283 master-0 kubenswrapper[31830]: I0319 12:27:38.752157 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/metal3-baremetal-operator-78474bdc48-sl88n" event={"ID":"c4d8205e-157b-4a66-9ee7-318bae255129","Type":"ContainerStarted","Data":"158800db1271e06b1b842cf8861b242371f7f08a853ae2563dbcca5156a3aadf"} Mar 19 12:27:42.659516 master-0 kubenswrapper[31830]: I0319 12:27:42.659141 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/metal3-baremetal-operator-78474bdc48-sl88n" podStartSLOduration=7.549169186 podStartE2EDuration="10.659117143s" podCreationTimestamp="2026-03-19 12:27:32 +0000 UTC" firstStartedPulling="2026-03-19 12:27:35.372186884 +0000 UTC m=+793.921147598" lastFinishedPulling="2026-03-19 12:27:38.482134851 +0000 UTC m=+797.031095555" observedRunningTime="2026-03-19 12:27:38.782622474 +0000 UTC m=+797.331583178" watchObservedRunningTime="2026-03-19 12:27:42.659117143 +0000 UTC m=+801.208077847" Mar 19 12:27:47.834378 master-0 kubenswrapper[31830]: I0319 12:27:47.834313 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/ironic-proxy-mnfjh" event={"ID":"0d92c44a-db10-4400-8eef-4d9930650684","Type":"ContainerStarted","Data":"212597805a915a5140944cf5b9b13cb63cfed21d337f1dc950da8248b6075f1b"} Mar 19 12:27:47.853354 master-0 kubenswrapper[31830]: I0319 12:27:47.852323 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/ironic-proxy-mnfjh" podStartSLOduration=2.003362713 podStartE2EDuration="11.852305297s" podCreationTimestamp="2026-03-19 12:27:36 +0000 UTC" firstStartedPulling="2026-03-19 12:27:36.822763213 +0000 UTC m=+795.371723917" lastFinishedPulling="2026-03-19 12:27:46.671705797 +0000 UTC m=+805.220666501" observedRunningTime="2026-03-19 12:27:47.850074929 +0000 UTC 
m=+806.399035653" watchObservedRunningTime="2026-03-19 12:27:47.852305297 +0000 UTC m=+806.401266001" Mar 19 12:28:09.014137 master-0 kubenswrapper[31830]: I0319 12:28:09.014061 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/metal3-65f8c5cc94-trthc" event={"ID":"f262d280-de9c-40ab-a879-abfec51007e6","Type":"ContainerStarted","Data":"1642bc05e91cf7cf29a1e7749434faf174234639cecd4464c254f2673baa95a3"} Mar 19 12:28:09.017156 master-0 kubenswrapper[31830]: I0319 12:28:09.017089 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_metal3-image-customization-5b889bff9b-dxbkp_ed8f0c5d-4f16-444c-b706-e78cf4036b87/machine-os-images/0.log" Mar 19 12:28:09.017156 master-0 kubenswrapper[31830]: I0319 12:28:09.017141 31830 generic.go:334] "Generic (PLEG): container finished" podID="ed8f0c5d-4f16-444c-b706-e78cf4036b87" containerID="480c7c2a747bd0d70b6351750663b7ebc9e0bb9bb8e6d73e663f34f9a31bf0a1" exitCode=1 Mar 19 12:28:09.017262 master-0 kubenswrapper[31830]: I0319 12:28:09.017167 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/metal3-image-customization-5b889bff9b-dxbkp" event={"ID":"ed8f0c5d-4f16-444c-b706-e78cf4036b87","Type":"ContainerDied","Data":"480c7c2a747bd0d70b6351750663b7ebc9e0bb9bb8e6d73e663f34f9a31bf0a1"} Mar 19 12:28:10.024038 master-0 kubenswrapper[31830]: I0319 12:28:10.023996 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_metal3-image-customization-5b889bff9b-dxbkp_ed8f0c5d-4f16-444c-b706-e78cf4036b87/machine-os-images/0.log" Mar 19 12:28:10.024766 master-0 kubenswrapper[31830]: I0319 12:28:10.024096 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/metal3-image-customization-5b889bff9b-dxbkp" event={"ID":"ed8f0c5d-4f16-444c-b706-e78cf4036b87","Type":"ContainerStarted","Data":"e1c931365561b290881ac6bba20ecbdb6db8480be85469367858f774ae6e5768"} Mar 19 12:28:11.032859 master-0 kubenswrapper[31830]: I0319 12:28:11.032726 31830 generic.go:334] "Generic (PLEG): container finished" podID="f262d280-de9c-40ab-a879-abfec51007e6" containerID="1642bc05e91cf7cf29a1e7749434faf174234639cecd4464c254f2673baa95a3" exitCode=0 Mar 19 12:28:11.033711 master-0 kubenswrapper[31830]: I0319 12:28:11.032891 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/metal3-65f8c5cc94-trthc" event={"ID":"f262d280-de9c-40ab-a879-abfec51007e6","Type":"ContainerDied","Data":"1642bc05e91cf7cf29a1e7749434faf174234639cecd4464c254f2673baa95a3"} Mar 19 12:28:12.043383 master-0 kubenswrapper[31830]: I0319 12:28:12.043315 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/metal3-65f8c5cc94-trthc" event={"ID":"f262d280-de9c-40ab-a879-abfec51007e6","Type":"ContainerStarted","Data":"b827b8dbb28ff94eb06791002fb932f8a5ee766dddac4c75a2c119e057d14c73"} Mar 19 12:28:13.052716 master-0 kubenswrapper[31830]: I0319 12:28:13.052675 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_metal3-image-customization-5b889bff9b-dxbkp_ed8f0c5d-4f16-444c-b706-e78cf4036b87/machine-os-images/1.log" Mar 19 12:28:13.091869 master-0 kubenswrapper[31830]: I0319 12:28:13.091691 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_metal3-image-customization-5b889bff9b-dxbkp_ed8f0c5d-4f16-444c-b706-e78cf4036b87/machine-os-images/0.log" Mar 19 12:28:13.091869 master-0 kubenswrapper[31830]: I0319 12:28:13.091787 31830 generic.go:334] "Generic (PLEG): 
container finished" podID="ed8f0c5d-4f16-444c-b706-e78cf4036b87" containerID="e1c931365561b290881ac6bba20ecbdb6db8480be85469367858f774ae6e5768" exitCode=1 Mar 19 12:28:13.092131 master-0 kubenswrapper[31830]: I0319 12:28:13.091945 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/metal3-image-customization-5b889bff9b-dxbkp" event={"ID":"ed8f0c5d-4f16-444c-b706-e78cf4036b87","Type":"ContainerDied","Data":"e1c931365561b290881ac6bba20ecbdb6db8480be85469367858f774ae6e5768"} Mar 19 12:28:13.092131 master-0 kubenswrapper[31830]: I0319 12:28:13.092008 31830 scope.go:117] "RemoveContainer" containerID="480c7c2a747bd0d70b6351750663b7ebc9e0bb9bb8e6d73e663f34f9a31bf0a1" Mar 19 12:28:13.092639 master-0 kubenswrapper[31830]: I0319 12:28:13.092606 31830 scope.go:117] "RemoveContainer" containerID="480c7c2a747bd0d70b6351750663b7ebc9e0bb9bb8e6d73e663f34f9a31bf0a1" Mar 19 12:28:13.103495 master-0 kubenswrapper[31830]: I0319 12:28:13.103422 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/metal3-65f8c5cc94-trthc" event={"ID":"f262d280-de9c-40ab-a879-abfec51007e6","Type":"ContainerStarted","Data":"809bb511a3ba75bccf17de7c6cc7f703eca3ee664d7cc49113a5f1f42dc7b907"} Mar 19 12:28:13.463225 master-0 kubenswrapper[31830]: E0319 12:28:13.463144 31830 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_machine-os-images_metal3-image-customization-5b889bff9b-dxbkp_openshift-machine-api_ed8f0c5d-4f16-444c-b706-e78cf4036b87_0 in pod sandbox f516648fe9fee91b59499976dc6906ef35d89c2a36cbf6c32b03b0811aa07c4f from index: no such id: '480c7c2a747bd0d70b6351750663b7ebc9e0bb9bb8e6d73e663f34f9a31bf0a1'" containerID="480c7c2a747bd0d70b6351750663b7ebc9e0bb9bb8e6d73e663f34f9a31bf0a1" Mar 19 12:28:13.463343 master-0 kubenswrapper[31830]: I0319 12:28:13.463217 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"480c7c2a747bd0d70b6351750663b7ebc9e0bb9bb8e6d73e663f34f9a31bf0a1"} err="rpc error: code = Unknown desc = failed to delete container k8s_machine-os-images_metal3-image-customization-5b889bff9b-dxbkp_openshift-machine-api_ed8f0c5d-4f16-444c-b706-e78cf4036b87_0 in pod sandbox f516648fe9fee91b59499976dc6906ef35d89c2a36cbf6c32b03b0811aa07c4f from index: no such id: '480c7c2a747bd0d70b6351750663b7ebc9e0bb9bb8e6d73e663f34f9a31bf0a1'" Mar 19 12:28:13.463915 master-0 kubenswrapper[31830]: E0319 12:28:13.463840 31830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-os-images\" with CrashLoopBackOff: \"back-off 10s restarting failed container=machine-os-images pod=metal3-image-customization-5b889bff9b-dxbkp_openshift-machine-api(ed8f0c5d-4f16-444c-b706-e78cf4036b87)\"" pod="openshift-machine-api/metal3-image-customization-5b889bff9b-dxbkp" podUID="ed8f0c5d-4f16-444c-b706-e78cf4036b87" Mar 19 12:28:14.115189 master-0 kubenswrapper[31830]: I0319 12:28:14.115111 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_metal3-image-customization-5b889bff9b-dxbkp_ed8f0c5d-4f16-444c-b706-e78cf4036b87/machine-os-images/1.log" Mar 19 12:28:14.117638 master-0 kubenswrapper[31830]: I0319 12:28:14.117589 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/metal3-65f8c5cc94-trthc" event={"ID":"f262d280-de9c-40ab-a879-abfec51007e6","Type":"ContainerStarted","Data":"4d1679af88e2caa3fb6cd15bf24e5cd11dfea28666c4585a9f0199fd129ed7c2"} Mar 19 12:28:15.344003 
master-0 kubenswrapper[31830]: I0319 12:28:15.343870 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/metal3-65f8c5cc94-trthc" podStartSLOduration=7.9074207659999995 podStartE2EDuration="43.343835305s" podCreationTimestamp="2026-03-19 12:27:32 +0000 UTC" firstStartedPulling="2026-03-19 12:27:32.624828209 +0000 UTC m=+791.173788913" lastFinishedPulling="2026-03-19 12:28:08.061242748 +0000 UTC m=+826.610203452" observedRunningTime="2026-03-19 12:28:15.343534536 +0000 UTC m=+833.892495280" watchObservedRunningTime="2026-03-19 12:28:15.343835305 +0000 UTC m=+833.892796049" Mar 19 12:28:23.869599 master-0 kubenswrapper[31830]: I0319 12:28:23.869481 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4vfvzs"] Mar 19 12:28:23.873290 master-0 kubenswrapper[31830]: I0319 12:28:23.873248 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4vfvzs" Mar 19 12:28:23.884472 master-0 kubenswrapper[31830]: I0319 12:28:23.884416 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4vfvzs"] Mar 19 12:28:24.018017 master-0 kubenswrapper[31830]: I0319 12:28:24.017818 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/315d828b-fe2f-4375-850c-81d044431050-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4vfvzs\" (UID: \"315d828b-fe2f-4375-850c-81d044431050\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4vfvzs" Mar 19 12:28:24.018017 master-0 kubenswrapper[31830]: I0319 12:28:24.017951 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/315d828b-fe2f-4375-850c-81d044431050-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4vfvzs\" (UID: \"315d828b-fe2f-4375-850c-81d044431050\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4vfvzs" Mar 19 12:28:24.018017 master-0 kubenswrapper[31830]: I0319 12:28:24.018009 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5f56\" (UniqueName: \"kubernetes.io/projected/315d828b-fe2f-4375-850c-81d044431050-kube-api-access-c5f56\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4vfvzs\" (UID: \"315d828b-fe2f-4375-850c-81d044431050\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4vfvzs" Mar 19 12:28:24.119861 master-0 kubenswrapper[31830]: I0319 12:28:24.119723 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/315d828b-fe2f-4375-850c-81d044431050-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4vfvzs\" (UID: \"315d828b-fe2f-4375-850c-81d044431050\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4vfvzs" Mar 19 12:28:24.120055 master-0 kubenswrapper[31830]: I0319 12:28:24.119992 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/315d828b-fe2f-4375-850c-81d044431050-util\") pod 
\"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4vfvzs\" (UID: \"315d828b-fe2f-4375-850c-81d044431050\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4vfvzs" Mar 19 12:28:24.120055 master-0 kubenswrapper[31830]: I0319 12:28:24.120045 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5f56\" (UniqueName: \"kubernetes.io/projected/315d828b-fe2f-4375-850c-81d044431050-kube-api-access-c5f56\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4vfvzs\" (UID: \"315d828b-fe2f-4375-850c-81d044431050\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4vfvzs" Mar 19 12:28:24.120463 master-0 kubenswrapper[31830]: I0319 12:28:24.120413 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/315d828b-fe2f-4375-850c-81d044431050-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4vfvzs\" (UID: \"315d828b-fe2f-4375-850c-81d044431050\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4vfvzs" Mar 19 12:28:24.120463 master-0 kubenswrapper[31830]: I0319 12:28:24.120413 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/315d828b-fe2f-4375-850c-81d044431050-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4vfvzs\" (UID: \"315d828b-fe2f-4375-850c-81d044431050\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4vfvzs" Mar 19 12:28:24.135960 master-0 kubenswrapper[31830]: I0319 12:28:24.135915 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5f56\" (UniqueName: \"kubernetes.io/projected/315d828b-fe2f-4375-850c-81d044431050-kube-api-access-c5f56\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4vfvzs\" (UID: \"315d828b-fe2f-4375-850c-81d044431050\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4vfvzs" Mar 19 12:28:24.200631 master-0 kubenswrapper[31830]: I0319 12:28:24.200569 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4vfvzs" Mar 19 12:28:24.754237 master-0 kubenswrapper[31830]: W0319 12:28:24.753939 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod315d828b_fe2f_4375_850c_81d044431050.slice/crio-d200d3ec3a48d6cb120828a989561198dae195704696099627c71229d1b34cb1 WatchSource:0}: Error finding container d200d3ec3a48d6cb120828a989561198dae195704696099627c71229d1b34cb1: Status 404 returned error can't find the container with id d200d3ec3a48d6cb120828a989561198dae195704696099627c71229d1b34cb1 Mar 19 12:28:24.754237 master-0 kubenswrapper[31830]: I0319 12:28:24.753968 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4vfvzs"] Mar 19 12:28:25.359546 master-0 kubenswrapper[31830]: I0319 12:28:25.359471 31830 generic.go:334] "Generic (PLEG): container finished" podID="315d828b-fe2f-4375-850c-81d044431050" containerID="672cf609cb44870a5fb242d5341e7c0c74a1bcf0b42179b534894020eced2de9" exitCode=0 Mar 19 12:28:25.359546 master-0 kubenswrapper[31830]: I0319 12:28:25.359539 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4vfvzs" event={"ID":"315d828b-fe2f-4375-850c-81d044431050","Type":"ContainerDied","Data":"672cf609cb44870a5fb242d5341e7c0c74a1bcf0b42179b534894020eced2de9"} Mar 19 12:28:25.360324 master-0 kubenswrapper[31830]: I0319 12:28:25.359578 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4vfvzs" event={"ID":"315d828b-fe2f-4375-850c-81d044431050","Type":"ContainerStarted","Data":"d200d3ec3a48d6cb120828a989561198dae195704696099627c71229d1b34cb1"} Mar 19 12:28:25.362346 master-0 kubenswrapper[31830]: I0319 12:28:25.361996 31830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 19 12:28:27.379479 master-0 kubenswrapper[31830]: I0319 12:28:27.379441 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4vfvzs" event={"ID":"315d828b-fe2f-4375-850c-81d044431050","Type":"ContainerStarted","Data":"a026fb43788c42212baf96a74049c4a990d5acd1722ed96492db63fd00f78eab"} Mar 19 12:28:28.390607 master-0 kubenswrapper[31830]: I0319 12:28:28.390562 31830 generic.go:334] "Generic (PLEG): container finished" podID="315d828b-fe2f-4375-850c-81d044431050" containerID="a026fb43788c42212baf96a74049c4a990d5acd1722ed96492db63fd00f78eab" exitCode=0 Mar 19 12:28:28.390607 master-0 kubenswrapper[31830]: I0319 12:28:28.390607 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4vfvzs" event={"ID":"315d828b-fe2f-4375-850c-81d044431050","Type":"ContainerDied","Data":"a026fb43788c42212baf96a74049c4a990d5acd1722ed96492db63fd00f78eab"} Mar 19 12:28:29.403689 master-0 kubenswrapper[31830]: I0319 12:28:29.403644 31830 generic.go:334] "Generic (PLEG): container finished" podID="315d828b-fe2f-4375-850c-81d044431050" containerID="e1f5042536983a352ad5285ac929b30266df2e7317c6d0ea1dac930a1b412936" exitCode=0 Mar 19 12:28:29.404276 master-0 kubenswrapper[31830]: I0319 12:28:29.403732 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4vfvzs" event={"ID":"315d828b-fe2f-4375-850c-81d044431050","Type":"ContainerDied","Data":"e1f5042536983a352ad5285ac929b30266df2e7317c6d0ea1dac930a1b412936"} Mar 19 12:28:29.405599 master-0 kubenswrapper[31830]: I0319 12:28:29.405578 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_metal3-image-customization-5b889bff9b-dxbkp_ed8f0c5d-4f16-444c-b706-e78cf4036b87/machine-os-images/1.log" Mar 19 12:28:29.405666 master-0 kubenswrapper[31830]: I0319 12:28:29.405613 31830 generic.go:334] "Generic (PLEG): container finished" podID="ed8f0c5d-4f16-444c-b706-e78cf4036b87" containerID="6acc12b56175ffbe527022f603cbbf98807913e16ce2ea91c8f4d540432bb34d" exitCode=0 Mar 19 12:28:29.405666 master-0 kubenswrapper[31830]: I0319 12:28:29.405638 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/metal3-image-customization-5b889bff9b-dxbkp" event={"ID":"ed8f0c5d-4f16-444c-b706-e78cf4036b87","Type":"ContainerDied","Data":"6acc12b56175ffbe527022f603cbbf98807913e16ce2ea91c8f4d540432bb34d"} Mar 19 12:28:29.405753 master-0 kubenswrapper[31830]: I0319 12:28:29.405669 31830 scope.go:117] "RemoveContainer" containerID="e1c931365561b290881ac6bba20ecbdb6db8480be85469367858f774ae6e5768" Mar 19 12:28:29.406181 master-0 kubenswrapper[31830]: I0319 12:28:29.406161 31830 scope.go:117] "RemoveContainer" containerID="e1c931365561b290881ac6bba20ecbdb6db8480be85469367858f774ae6e5768" Mar 19 12:28:29.428939 master-0 kubenswrapper[31830]: E0319 12:28:29.428884 31830 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_machine-os-images_metal3-image-customization-5b889bff9b-dxbkp_openshift-machine-api_ed8f0c5d-4f16-444c-b706-e78cf4036b87_1 in pod sandbox f516648fe9fee91b59499976dc6906ef35d89c2a36cbf6c32b03b0811aa07c4f from index: no such id: 'e1c931365561b290881ac6bba20ecbdb6db8480be85469367858f774ae6e5768'" containerID="e1c931365561b290881ac6bba20ecbdb6db8480be85469367858f774ae6e5768" Mar 19 12:28:29.429101 master-0 kubenswrapper[31830]: E0319 12:28:29.428970 31830 kuberuntime_container.go:896] "Unhandled Error" err="failed to remove pod init container \"machine-os-images\": rpc error: code = Unknown desc = failed to delete container k8s_machine-os-images_metal3-image-customization-5b889bff9b-dxbkp_openshift-machine-api_ed8f0c5d-4f16-444c-b706-e78cf4036b87_1 in pod sandbox f516648fe9fee91b59499976dc6906ef35d89c2a36cbf6c32b03b0811aa07c4f from index: no such id: 'e1c931365561b290881ac6bba20ecbdb6db8480be85469367858f774ae6e5768'; Skipping pod \"metal3-image-customization-5b889bff9b-dxbkp_openshift-machine-api(ed8f0c5d-4f16-444c-b706-e78cf4036b87)\"" logger="UnhandledError" Mar 19 12:28:30.965033 master-0 kubenswrapper[31830]: I0319 12:28:30.964987 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4vfvzs" Mar 19 12:28:31.127668 master-0 kubenswrapper[31830]: I0319 12:28:31.127595 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/315d828b-fe2f-4375-850c-81d044431050-bundle\") pod \"315d828b-fe2f-4375-850c-81d044431050\" (UID: \"315d828b-fe2f-4375-850c-81d044431050\") " Mar 19 12:28:31.127885 master-0 kubenswrapper[31830]: I0319 12:28:31.127775 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/315d828b-fe2f-4375-850c-81d044431050-util\") pod \"315d828b-fe2f-4375-850c-81d044431050\" (UID: \"315d828b-fe2f-4375-850c-81d044431050\") " Mar 19 12:28:31.128099 master-0 kubenswrapper[31830]: I0319 12:28:31.128060 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5f56\" (UniqueName: \"kubernetes.io/projected/315d828b-fe2f-4375-850c-81d044431050-kube-api-access-c5f56\") pod \"315d828b-fe2f-4375-850c-81d044431050\" (UID: \"315d828b-fe2f-4375-850c-81d044431050\") " Mar 19 12:28:31.131312 master-0 kubenswrapper[31830]: I0319 12:28:31.130548 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/315d828b-fe2f-4375-850c-81d044431050-bundle" (OuterVolumeSpecName: "bundle") pod "315d828b-fe2f-4375-850c-81d044431050" (UID: "315d828b-fe2f-4375-850c-81d044431050"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 12:28:31.132549 master-0 kubenswrapper[31830]: I0319 12:28:31.132331 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/315d828b-fe2f-4375-850c-81d044431050-kube-api-access-c5f56" (OuterVolumeSpecName: "kube-api-access-c5f56") pod "315d828b-fe2f-4375-850c-81d044431050" (UID: "315d828b-fe2f-4375-850c-81d044431050"). InnerVolumeSpecName "kube-api-access-c5f56". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:28:31.145430 master-0 kubenswrapper[31830]: I0319 12:28:31.145357 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/315d828b-fe2f-4375-850c-81d044431050-util" (OuterVolumeSpecName: "util") pod "315d828b-fe2f-4375-850c-81d044431050" (UID: "315d828b-fe2f-4375-850c-81d044431050"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 12:28:31.230975 master-0 kubenswrapper[31830]: I0319 12:28:31.230904 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5f56\" (UniqueName: \"kubernetes.io/projected/315d828b-fe2f-4375-850c-81d044431050-kube-api-access-c5f56\") on node \"master-0\" DevicePath \"\"" Mar 19 12:28:31.230975 master-0 kubenswrapper[31830]: I0319 12:28:31.230957 31830 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/315d828b-fe2f-4375-850c-81d044431050-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:28:31.230975 master-0 kubenswrapper[31830]: I0319 12:28:31.230972 31830 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/315d828b-fe2f-4375-850c-81d044431050-util\") on node \"master-0\" DevicePath \"\"" Mar 19 12:28:31.426163 master-0 kubenswrapper[31830]: I0319 12:28:31.425587 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4vfvzs" event={"ID":"315d828b-fe2f-4375-850c-81d044431050","Type":"ContainerDied","Data":"d200d3ec3a48d6cb120828a989561198dae195704696099627c71229d1b34cb1"} Mar 19 12:28:31.426163 master-0 kubenswrapper[31830]: I0319 12:28:31.425824 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4vfvzs" Mar 19 12:28:31.426163 master-0 kubenswrapper[31830]: I0319 12:28:31.425835 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d200d3ec3a48d6cb120828a989561198dae195704696099627c71229d1b34cb1" Mar 19 12:28:33.440582 master-0 kubenswrapper[31830]: I0319 12:28:33.440464 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/metal3-image-customization-5b889bff9b-dxbkp" event={"ID":"ed8f0c5d-4f16-444c-b706-e78cf4036b87","Type":"ContainerStarted","Data":"c72c9f21649475fe0bd06dc1f79c9e41855723e5dfeb95138425161a91f3f076"} Mar 19 12:28:33.469010 master-0 kubenswrapper[31830]: I0319 12:28:33.468923 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/metal3-image-customization-5b889bff9b-dxbkp" podStartSLOduration=1.891334556 podStartE2EDuration="58.468899984s" podCreationTimestamp="2026-03-19 12:27:35 +0000 UTC" firstStartedPulling="2026-03-19 12:27:36.303420173 +0000 UTC m=+794.852380877" lastFinishedPulling="2026-03-19 12:28:32.880985601 +0000 UTC m=+851.429946305" observedRunningTime="2026-03-19 12:28:33.468265514 +0000 UTC m=+852.017226218" watchObservedRunningTime="2026-03-19 12:28:33.468899984 +0000 UTC m=+852.017860698" Mar 19 12:28:38.743021 master-0 kubenswrapper[31830]: I0319 12:28:38.742950 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/lvms-operator-67785c87c7-2dz7c"] Mar 19 12:28:38.743703 master-0 kubenswrapper[31830]: E0319 12:28:38.743316 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="315d828b-fe2f-4375-850c-81d044431050" containerName="util" Mar 19 12:28:38.743703 master-0 kubenswrapper[31830]: I0319 12:28:38.743331 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="315d828b-fe2f-4375-850c-81d044431050" containerName="util" Mar 19 12:28:38.743703 master-0 kubenswrapper[31830]: E0319 12:28:38.743354 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="315d828b-fe2f-4375-850c-81d044431050" 
containerName="extract" Mar 19 12:28:38.743703 master-0 kubenswrapper[31830]: I0319 12:28:38.743360 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="315d828b-fe2f-4375-850c-81d044431050" containerName="extract" Mar 19 12:28:38.743703 master-0 kubenswrapper[31830]: E0319 12:28:38.743370 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="315d828b-fe2f-4375-850c-81d044431050" containerName="pull" Mar 19 12:28:38.743703 master-0 kubenswrapper[31830]: I0319 12:28:38.743376 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="315d828b-fe2f-4375-850c-81d044431050" containerName="pull" Mar 19 12:28:38.743703 master-0 kubenswrapper[31830]: I0319 12:28:38.743592 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="315d828b-fe2f-4375-850c-81d044431050" containerName="extract" Mar 19 12:28:38.744112 master-0 kubenswrapper[31830]: I0319 12:28:38.744080 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-67785c87c7-2dz7c" Mar 19 12:28:38.749084 master-0 kubenswrapper[31830]: I0319 12:28:38.748661 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-service-cert" Mar 19 12:28:38.749084 master-0 kubenswrapper[31830]: I0319 12:28:38.748760 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt" Mar 19 12:28:38.749084 master-0 kubenswrapper[31830]: I0319 12:28:38.748881 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt" Mar 19 12:28:38.749084 master-0 kubenswrapper[31830]: I0319 12:28:38.748956 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-webhook-server-cert" Mar 19 12:28:38.749400 master-0 kubenswrapper[31830]: I0319 12:28:38.749187 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-metrics-cert" Mar 19 12:28:38.774571 master-0 kubenswrapper[31830]: I0319 12:28:38.774510 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-67785c87c7-2dz7c"] Mar 19 12:28:38.845716 master-0 kubenswrapper[31830]: I0319 12:28:38.845633 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/312037dd-62a7-4574-9797-9a26dae7fb35-webhook-cert\") pod \"lvms-operator-67785c87c7-2dz7c\" (UID: \"312037dd-62a7-4574-9797-9a26dae7fb35\") " pod="openshift-storage/lvms-operator-67785c87c7-2dz7c" Mar 19 12:28:38.845952 master-0 kubenswrapper[31830]: I0319 12:28:38.845728 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/312037dd-62a7-4574-9797-9a26dae7fb35-apiservice-cert\") pod \"lvms-operator-67785c87c7-2dz7c\" (UID: \"312037dd-62a7-4574-9797-9a26dae7fb35\") " pod="openshift-storage/lvms-operator-67785c87c7-2dz7c" Mar 19 12:28:38.845952 master-0 kubenswrapper[31830]: I0319 12:28:38.845759 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/312037dd-62a7-4574-9797-9a26dae7fb35-metrics-cert\") pod \"lvms-operator-67785c87c7-2dz7c\" (UID: \"312037dd-62a7-4574-9797-9a26dae7fb35\") " pod="openshift-storage/lvms-operator-67785c87c7-2dz7c" Mar 19 12:28:38.845952 master-0 kubenswrapper[31830]: I0319 
12:28:38.845933 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqrnc\" (UniqueName: \"kubernetes.io/projected/312037dd-62a7-4574-9797-9a26dae7fb35-kube-api-access-fqrnc\") pod \"lvms-operator-67785c87c7-2dz7c\" (UID: \"312037dd-62a7-4574-9797-9a26dae7fb35\") " pod="openshift-storage/lvms-operator-67785c87c7-2dz7c" Mar 19 12:28:38.846111 master-0 kubenswrapper[31830]: I0319 12:28:38.845973 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/312037dd-62a7-4574-9797-9a26dae7fb35-socket-dir\") pod \"lvms-operator-67785c87c7-2dz7c\" (UID: \"312037dd-62a7-4574-9797-9a26dae7fb35\") " pod="openshift-storage/lvms-operator-67785c87c7-2dz7c" Mar 19 12:28:38.947339 master-0 kubenswrapper[31830]: I0319 12:28:38.947280 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/312037dd-62a7-4574-9797-9a26dae7fb35-webhook-cert\") pod \"lvms-operator-67785c87c7-2dz7c\" (UID: \"312037dd-62a7-4574-9797-9a26dae7fb35\") " pod="openshift-storage/lvms-operator-67785c87c7-2dz7c" Mar 19 12:28:38.947339 master-0 kubenswrapper[31830]: I0319 12:28:38.947347 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/312037dd-62a7-4574-9797-9a26dae7fb35-apiservice-cert\") pod \"lvms-operator-67785c87c7-2dz7c\" (UID: \"312037dd-62a7-4574-9797-9a26dae7fb35\") " pod="openshift-storage/lvms-operator-67785c87c7-2dz7c" Mar 19 12:28:38.947655 master-0 kubenswrapper[31830]: I0319 12:28:38.947600 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/312037dd-62a7-4574-9797-9a26dae7fb35-metrics-cert\") pod \"lvms-operator-67785c87c7-2dz7c\" (UID: \"312037dd-62a7-4574-9797-9a26dae7fb35\") " pod="openshift-storage/lvms-operator-67785c87c7-2dz7c" Mar 19 12:28:38.949890 master-0 kubenswrapper[31830]: I0319 12:28:38.947852 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqrnc\" (UniqueName: \"kubernetes.io/projected/312037dd-62a7-4574-9797-9a26dae7fb35-kube-api-access-fqrnc\") pod \"lvms-operator-67785c87c7-2dz7c\" (UID: \"312037dd-62a7-4574-9797-9a26dae7fb35\") " pod="openshift-storage/lvms-operator-67785c87c7-2dz7c" Mar 19 12:28:38.949890 master-0 kubenswrapper[31830]: I0319 12:28:38.947921 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/312037dd-62a7-4574-9797-9a26dae7fb35-socket-dir\") pod \"lvms-operator-67785c87c7-2dz7c\" (UID: \"312037dd-62a7-4574-9797-9a26dae7fb35\") " pod="openshift-storage/lvms-operator-67785c87c7-2dz7c" Mar 19 12:28:38.949890 master-0 kubenswrapper[31830]: I0319 12:28:38.948550 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/312037dd-62a7-4574-9797-9a26dae7fb35-socket-dir\") pod \"lvms-operator-67785c87c7-2dz7c\" (UID: \"312037dd-62a7-4574-9797-9a26dae7fb35\") " pod="openshift-storage/lvms-operator-67785c87c7-2dz7c" Mar 19 12:28:38.951368 master-0 kubenswrapper[31830]: I0319 12:28:38.950913 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/312037dd-62a7-4574-9797-9a26dae7fb35-webhook-cert\") pod 
\"lvms-operator-67785c87c7-2dz7c\" (UID: \"312037dd-62a7-4574-9797-9a26dae7fb35\") " pod="openshift-storage/lvms-operator-67785c87c7-2dz7c" Mar 19 12:28:38.954828 master-0 kubenswrapper[31830]: I0319 12:28:38.951497 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/312037dd-62a7-4574-9797-9a26dae7fb35-apiservice-cert\") pod \"lvms-operator-67785c87c7-2dz7c\" (UID: \"312037dd-62a7-4574-9797-9a26dae7fb35\") " pod="openshift-storage/lvms-operator-67785c87c7-2dz7c" Mar 19 12:28:38.954828 master-0 kubenswrapper[31830]: I0319 12:28:38.953466 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/312037dd-62a7-4574-9797-9a26dae7fb35-metrics-cert\") pod \"lvms-operator-67785c87c7-2dz7c\" (UID: \"312037dd-62a7-4574-9797-9a26dae7fb35\") " pod="openshift-storage/lvms-operator-67785c87c7-2dz7c" Mar 19 12:28:38.972947 master-0 kubenswrapper[31830]: I0319 12:28:38.972893 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqrnc\" (UniqueName: \"kubernetes.io/projected/312037dd-62a7-4574-9797-9a26dae7fb35-kube-api-access-fqrnc\") pod \"lvms-operator-67785c87c7-2dz7c\" (UID: \"312037dd-62a7-4574-9797-9a26dae7fb35\") " pod="openshift-storage/lvms-operator-67785c87c7-2dz7c" Mar 19 12:28:39.061232 master-0 kubenswrapper[31830]: I0319 12:28:39.061117 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-67785c87c7-2dz7c" Mar 19 12:28:39.617953 master-0 kubenswrapper[31830]: I0319 12:28:39.617267 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-67785c87c7-2dz7c"] Mar 19 12:28:39.620516 master-0 kubenswrapper[31830]: W0319 12:28:39.620479 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod312037dd_62a7_4574_9797_9a26dae7fb35.slice/crio-380de0b934e236df4520f5c4c89318b35501365651b90e35316f63e19d828f1e WatchSource:0}: Error finding container 380de0b934e236df4520f5c4c89318b35501365651b90e35316f63e19d828f1e: Status 404 returned error can't find the container with id 380de0b934e236df4520f5c4c89318b35501365651b90e35316f63e19d828f1e Mar 19 12:28:40.489699 master-0 kubenswrapper[31830]: I0319 12:28:40.489631 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-67785c87c7-2dz7c" event={"ID":"312037dd-62a7-4574-9797-9a26dae7fb35","Type":"ContainerStarted","Data":"380de0b934e236df4520f5c4c89318b35501365651b90e35316f63e19d828f1e"} Mar 19 12:28:46.534943 master-0 kubenswrapper[31830]: I0319 12:28:46.534898 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-67785c87c7-2dz7c" event={"ID":"312037dd-62a7-4574-9797-9a26dae7fb35","Type":"ContainerStarted","Data":"ca955837a90496fadf09cac7117fcc37d21aa8e56a125ae9ad2d41df513fcb9b"} Mar 19 12:28:46.535554 master-0 kubenswrapper[31830]: I0319 12:28:46.535530 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/lvms-operator-67785c87c7-2dz7c" Mar 19 12:28:46.538479 master-0 kubenswrapper[31830]: I0319 12:28:46.538461 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/lvms-operator-67785c87c7-2dz7c" Mar 19 12:28:46.567542 master-0 kubenswrapper[31830]: I0319 12:28:46.567448 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-storage/lvms-operator-67785c87c7-2dz7c" podStartSLOduration=2.6644252269999997 podStartE2EDuration="8.567426161s" podCreationTimestamp="2026-03-19 12:28:38 +0000 UTC" firstStartedPulling="2026-03-19 12:28:39.62370923 +0000 UTC m=+858.172669934" lastFinishedPulling="2026-03-19 12:28:45.526710164 +0000 UTC m=+864.075670868" observedRunningTime="2026-03-19 12:28:46.560163475 +0000 UTC m=+865.109124179" watchObservedRunningTime="2026-03-19 12:28:46.567426161 +0000 UTC m=+865.116386865" Mar 19 12:28:50.718027 master-0 kubenswrapper[31830]: I0319 12:28:50.717951 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g2z27"] Mar 19 12:28:50.720232 master-0 kubenswrapper[31830]: I0319 12:28:50.720180 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g2z27" Mar 19 12:28:50.733312 master-0 kubenswrapper[31830]: I0319 12:28:50.733227 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g2z27"] Mar 19 12:28:50.774881 master-0 kubenswrapper[31830]: I0319 12:28:50.774769 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/85f52972-f923-4479-a7f9-595e1d62c0ab-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g2z27\" (UID: \"85f52972-f923-4479-a7f9-595e1d62c0ab\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g2z27" Mar 19 12:28:50.775120 master-0 kubenswrapper[31830]: I0319 12:28:50.774959 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jntxp\" (UniqueName: \"kubernetes.io/projected/85f52972-f923-4479-a7f9-595e1d62c0ab-kube-api-access-jntxp\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g2z27\" (UID: \"85f52972-f923-4479-a7f9-595e1d62c0ab\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g2z27" Mar 19 12:28:50.775120 master-0 kubenswrapper[31830]: I0319 12:28:50.775055 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/85f52972-f923-4479-a7f9-595e1d62c0ab-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g2z27\" (UID: \"85f52972-f923-4479-a7f9-595e1d62c0ab\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g2z27" Mar 19 12:28:50.797786 master-0 kubenswrapper[31830]: I0319 12:28:50.797710 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1cl8xk"] Mar 19 12:28:50.803646 master-0 kubenswrapper[31830]: I0319 12:28:50.802893 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1cl8xk" Mar 19 12:28:50.805754 master-0 kubenswrapper[31830]: I0319 12:28:50.805709 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1cl8xk"] Mar 19 12:28:50.877473 master-0 kubenswrapper[31830]: I0319 12:28:50.877415 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/85f52972-f923-4479-a7f9-595e1d62c0ab-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g2z27\" (UID: \"85f52972-f923-4479-a7f9-595e1d62c0ab\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g2z27" Mar 19 12:28:50.877473 master-0 kubenswrapper[31830]: I0319 12:28:50.877472 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29frs\" (UniqueName: \"kubernetes.io/projected/b8cc40de-7004-4f6d-bd80-559599cf5e8b-kube-api-access-29frs\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1cl8xk\" (UID: \"b8cc40de-7004-4f6d-bd80-559599cf5e8b\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1cl8xk" Mar 19 12:28:50.877702 master-0 kubenswrapper[31830]: I0319 12:28:50.877609 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jntxp\" (UniqueName: \"kubernetes.io/projected/85f52972-f923-4479-a7f9-595e1d62c0ab-kube-api-access-jntxp\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g2z27\" (UID: \"85f52972-f923-4479-a7f9-595e1d62c0ab\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g2z27" Mar 19 12:28:50.877702 master-0 kubenswrapper[31830]: I0319 12:28:50.877642 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b8cc40de-7004-4f6d-bd80-559599cf5e8b-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1cl8xk\" (UID: \"b8cc40de-7004-4f6d-bd80-559599cf5e8b\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1cl8xk" Mar 19 12:28:50.877702 master-0 kubenswrapper[31830]: I0319 12:28:50.877685 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b8cc40de-7004-4f6d-bd80-559599cf5e8b-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1cl8xk\" (UID: \"b8cc40de-7004-4f6d-bd80-559599cf5e8b\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1cl8xk" Mar 19 12:28:50.877815 master-0 kubenswrapper[31830]: I0319 12:28:50.877724 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/85f52972-f923-4479-a7f9-595e1d62c0ab-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g2z27\" (UID: \"85f52972-f923-4479-a7f9-595e1d62c0ab\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g2z27" Mar 19 12:28:50.878005 master-0 kubenswrapper[31830]: I0319 12:28:50.877974 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/85f52972-f923-4479-a7f9-595e1d62c0ab-util\") pod 
\"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g2z27\" (UID: \"85f52972-f923-4479-a7f9-595e1d62c0ab\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g2z27" Mar 19 12:28:50.878306 master-0 kubenswrapper[31830]: I0319 12:28:50.878277 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/85f52972-f923-4479-a7f9-595e1d62c0ab-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g2z27\" (UID: \"85f52972-f923-4479-a7f9-595e1d62c0ab\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g2z27" Mar 19 12:28:50.893523 master-0 kubenswrapper[31830]: I0319 12:28:50.893477 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jntxp\" (UniqueName: \"kubernetes.io/projected/85f52972-f923-4479-a7f9-595e1d62c0ab-kube-api-access-jntxp\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g2z27\" (UID: \"85f52972-f923-4479-a7f9-595e1d62c0ab\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g2z27" Mar 19 12:28:50.978642 master-0 kubenswrapper[31830]: I0319 12:28:50.978582 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29frs\" (UniqueName: \"kubernetes.io/projected/b8cc40de-7004-4f6d-bd80-559599cf5e8b-kube-api-access-29frs\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1cl8xk\" (UID: \"b8cc40de-7004-4f6d-bd80-559599cf5e8b\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1cl8xk" Mar 19 12:28:50.978927 master-0 kubenswrapper[31830]: I0319 12:28:50.978677 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b8cc40de-7004-4f6d-bd80-559599cf5e8b-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1cl8xk\" (UID: \"b8cc40de-7004-4f6d-bd80-559599cf5e8b\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1cl8xk" Mar 19 12:28:50.978927 master-0 kubenswrapper[31830]: I0319 12:28:50.978698 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b8cc40de-7004-4f6d-bd80-559599cf5e8b-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1cl8xk\" (UID: \"b8cc40de-7004-4f6d-bd80-559599cf5e8b\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1cl8xk" Mar 19 12:28:50.990130 master-0 kubenswrapper[31830]: I0319 12:28:50.990081 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b8cc40de-7004-4f6d-bd80-559599cf5e8b-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1cl8xk\" (UID: \"b8cc40de-7004-4f6d-bd80-559599cf5e8b\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1cl8xk" Mar 19 12:28:50.990130 master-0 kubenswrapper[31830]: I0319 12:28:50.990086 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b8cc40de-7004-4f6d-bd80-559599cf5e8b-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1cl8xk\" (UID: \"b8cc40de-7004-4f6d-bd80-559599cf5e8b\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1cl8xk" Mar 19 12:28:51.005393 master-0 
kubenswrapper[31830]: I0319 12:28:51.005336 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29frs\" (UniqueName: \"kubernetes.io/projected/b8cc40de-7004-4f6d-bd80-559599cf5e8b-kube-api-access-29frs\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1cl8xk\" (UID: \"b8cc40de-7004-4f6d-bd80-559599cf5e8b\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1cl8xk" Mar 19 12:28:51.036319 master-0 kubenswrapper[31830]: I0319 12:28:51.036253 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g2z27" Mar 19 12:28:51.124118 master-0 kubenswrapper[31830]: I0319 12:28:51.124054 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1cl8xk" Mar 19 12:28:51.581639 master-0 kubenswrapper[31830]: W0319 12:28:51.581566 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85f52972_f923_4479_a7f9_595e1d62c0ab.slice/crio-ea96e65ba66d0bf29e179053305a4e92457c0d741a63b92e7e3b658f609b6573 WatchSource:0}: Error finding container ea96e65ba66d0bf29e179053305a4e92457c0d741a63b92e7e3b658f609b6573: Status 404 returned error can't find the container with id ea96e65ba66d0bf29e179053305a4e92457c0d741a63b92e7e3b658f609b6573 Mar 19 12:28:51.586098 master-0 kubenswrapper[31830]: I0319 12:28:51.585196 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g2z27"] Mar 19 12:28:51.599416 master-0 kubenswrapper[31830]: I0319 12:28:51.599331 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1cl8xk"] Mar 19 12:28:51.601628 master-0 kubenswrapper[31830]: W0319 12:28:51.601578 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8cc40de_7004_4f6d_bd80_559599cf5e8b.slice/crio-90b1caec8126b49b50fc8b745e1c060d242529393d00b0885349112a41c53586 WatchSource:0}: Error finding container 90b1caec8126b49b50fc8b745e1c060d242529393d00b0885349112a41c53586: Status 404 returned error can't find the container with id 90b1caec8126b49b50fc8b745e1c060d242529393d00b0885349112a41c53586 Mar 19 12:28:51.662955 master-0 kubenswrapper[31830]: I0319 12:28:51.662883 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wsb29"] Mar 19 12:28:51.664605 master-0 kubenswrapper[31830]: I0319 12:28:51.664559 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wsb29" Mar 19 12:28:51.703395 master-0 kubenswrapper[31830]: I0319 12:28:51.703047 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wsb29"] Mar 19 12:28:51.794748 master-0 kubenswrapper[31830]: I0319 12:28:51.794680 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b5f62698-fef6-43f6-9f5b-9bef1af00d47-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wsb29\" (UID: \"b5f62698-fef6-43f6-9f5b-9bef1af00d47\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wsb29" Mar 19 12:28:51.795275 master-0 kubenswrapper[31830]: I0319 12:28:51.794779 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b5f62698-fef6-43f6-9f5b-9bef1af00d47-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wsb29\" (UID: \"b5f62698-fef6-43f6-9f5b-9bef1af00d47\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wsb29" Mar 19 12:28:51.795275 master-0 kubenswrapper[31830]: I0319 12:28:51.794875 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rt2b\" (UniqueName: \"kubernetes.io/projected/b5f62698-fef6-43f6-9f5b-9bef1af00d47-kube-api-access-8rt2b\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wsb29\" (UID: \"b5f62698-fef6-43f6-9f5b-9bef1af00d47\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wsb29" Mar 19 12:28:51.897391 master-0 kubenswrapper[31830]: I0319 12:28:51.897322 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b5f62698-fef6-43f6-9f5b-9bef1af00d47-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wsb29\" (UID: \"b5f62698-fef6-43f6-9f5b-9bef1af00d47\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wsb29" Mar 19 12:28:51.897638 master-0 kubenswrapper[31830]: I0319 12:28:51.897434 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b5f62698-fef6-43f6-9f5b-9bef1af00d47-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wsb29\" (UID: \"b5f62698-fef6-43f6-9f5b-9bef1af00d47\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wsb29" Mar 19 12:28:51.897638 master-0 kubenswrapper[31830]: I0319 12:28:51.897511 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rt2b\" (UniqueName: \"kubernetes.io/projected/b5f62698-fef6-43f6-9f5b-9bef1af00d47-kube-api-access-8rt2b\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wsb29\" (UID: \"b5f62698-fef6-43f6-9f5b-9bef1af00d47\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wsb29" Mar 19 12:28:51.897900 master-0 kubenswrapper[31830]: I0319 12:28:51.897854 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b5f62698-fef6-43f6-9f5b-9bef1af00d47-util\") pod 
\"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wsb29\" (UID: \"b5f62698-fef6-43f6-9f5b-9bef1af00d47\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wsb29" Mar 19 12:28:51.897953 master-0 kubenswrapper[31830]: I0319 12:28:51.897919 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b5f62698-fef6-43f6-9f5b-9bef1af00d47-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wsb29\" (UID: \"b5f62698-fef6-43f6-9f5b-9bef1af00d47\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wsb29" Mar 19 12:28:51.914105 master-0 kubenswrapper[31830]: I0319 12:28:51.913968 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rt2b\" (UniqueName: \"kubernetes.io/projected/b5f62698-fef6-43f6-9f5b-9bef1af00d47-kube-api-access-8rt2b\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wsb29\" (UID: \"b5f62698-fef6-43f6-9f5b-9bef1af00d47\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wsb29" Mar 19 12:28:52.007677 master-0 kubenswrapper[31830]: I0319 12:28:52.007559 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wsb29" Mar 19 12:28:52.455998 master-0 kubenswrapper[31830]: I0319 12:28:52.455954 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wsb29"] Mar 19 12:28:52.460407 master-0 kubenswrapper[31830]: W0319 12:28:52.460362 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5f62698_fef6_43f6_9f5b_9bef1af00d47.slice/crio-936ddb8c063612346613ec9148838502e56ce5b81571bab5bc7f948edf77f649 WatchSource:0}: Error finding container 936ddb8c063612346613ec9148838502e56ce5b81571bab5bc7f948edf77f649: Status 404 returned error can't find the container with id 936ddb8c063612346613ec9148838502e56ce5b81571bab5bc7f948edf77f649 Mar 19 12:28:52.580273 master-0 kubenswrapper[31830]: I0319 12:28:52.580236 31830 generic.go:334] "Generic (PLEG): container finished" podID="b8cc40de-7004-4f6d-bd80-559599cf5e8b" containerID="10efea5cc3a33226501b0bf5dfdb79e2651c04f03819e011bdf2ac7ce5abe27b" exitCode=0 Mar 19 12:28:52.580442 master-0 kubenswrapper[31830]: I0319 12:28:52.580320 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1cl8xk" event={"ID":"b8cc40de-7004-4f6d-bd80-559599cf5e8b","Type":"ContainerDied","Data":"10efea5cc3a33226501b0bf5dfdb79e2651c04f03819e011bdf2ac7ce5abe27b"} Mar 19 12:28:52.580442 master-0 kubenswrapper[31830]: I0319 12:28:52.580359 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1cl8xk" event={"ID":"b8cc40de-7004-4f6d-bd80-559599cf5e8b","Type":"ContainerStarted","Data":"90b1caec8126b49b50fc8b745e1c060d242529393d00b0885349112a41c53586"} Mar 19 12:28:52.582089 master-0 kubenswrapper[31830]: I0319 12:28:52.582011 31830 generic.go:334] "Generic (PLEG): container finished" podID="85f52972-f923-4479-a7f9-595e1d62c0ab" containerID="ed4076cf31bd7b26920c5ece097d350ea45ed3776e8b998c1676f908c224b7e7" exitCode=0 Mar 19 12:28:52.582212 master-0 kubenswrapper[31830]: I0319 12:28:52.582143 
31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g2z27" event={"ID":"85f52972-f923-4479-a7f9-595e1d62c0ab","Type":"ContainerDied","Data":"ed4076cf31bd7b26920c5ece097d350ea45ed3776e8b998c1676f908c224b7e7"} Mar 19 12:28:52.582212 master-0 kubenswrapper[31830]: I0319 12:28:52.582195 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g2z27" event={"ID":"85f52972-f923-4479-a7f9-595e1d62c0ab","Type":"ContainerStarted","Data":"ea96e65ba66d0bf29e179053305a4e92457c0d741a63b92e7e3b658f609b6573"} Mar 19 12:28:52.587153 master-0 kubenswrapper[31830]: I0319 12:28:52.587096 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wsb29" event={"ID":"b5f62698-fef6-43f6-9f5b-9bef1af00d47","Type":"ContainerStarted","Data":"936ddb8c063612346613ec9148838502e56ce5b81571bab5bc7f948edf77f649"} Mar 19 12:28:53.596127 master-0 kubenswrapper[31830]: I0319 12:28:53.596080 31830 generic.go:334] "Generic (PLEG): container finished" podID="b5f62698-fef6-43f6-9f5b-9bef1af00d47" containerID="3a799f7ce0ee9a6497eda4e7f6742b2a8d4c5288f401a499099cf8359091b534" exitCode=0 Mar 19 12:28:53.596127 master-0 kubenswrapper[31830]: I0319 12:28:53.596124 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wsb29" event={"ID":"b5f62698-fef6-43f6-9f5b-9bef1af00d47","Type":"ContainerDied","Data":"3a799f7ce0ee9a6497eda4e7f6742b2a8d4c5288f401a499099cf8359091b534"} Mar 19 12:28:54.605337 master-0 kubenswrapper[31830]: I0319 12:28:54.605290 31830 generic.go:334] "Generic (PLEG): container finished" podID="b8cc40de-7004-4f6d-bd80-559599cf5e8b" containerID="73cfbdfe824ef31e8f5c3495a66dc3993a6f17d477e13a899f1a6e7ed3df41d0" exitCode=0 Mar 19 12:28:54.605883 master-0 kubenswrapper[31830]: I0319 12:28:54.605342 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1cl8xk" event={"ID":"b8cc40de-7004-4f6d-bd80-559599cf5e8b","Type":"ContainerDied","Data":"73cfbdfe824ef31e8f5c3495a66dc3993a6f17d477e13a899f1a6e7ed3df41d0"} Mar 19 12:28:55.614531 master-0 kubenswrapper[31830]: I0319 12:28:55.614462 31830 generic.go:334] "Generic (PLEG): container finished" podID="b8cc40de-7004-4f6d-bd80-559599cf5e8b" containerID="d99cb52faadfd894eb560fa29025e4f3e8249e126d585243910849e6841ad29b" exitCode=0 Mar 19 12:28:55.614531 master-0 kubenswrapper[31830]: I0319 12:28:55.614523 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1cl8xk" event={"ID":"b8cc40de-7004-4f6d-bd80-559599cf5e8b","Type":"ContainerDied","Data":"d99cb52faadfd894eb560fa29025e4f3e8249e126d585243910849e6841ad29b"} Mar 19 12:28:55.617707 master-0 kubenswrapper[31830]: I0319 12:28:55.617655 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g2z27" event={"ID":"85f52972-f923-4479-a7f9-595e1d62c0ab","Type":"ContainerStarted","Data":"8b9353fe0361e7b1e5c2b4487b8fb583a3725261c1054255e7bbb08b5d40c9b1"} Mar 19 12:28:56.630498 master-0 kubenswrapper[31830]: I0319 12:28:56.630367 31830 generic.go:334] "Generic (PLEG): container finished" podID="b5f62698-fef6-43f6-9f5b-9bef1af00d47" 
containerID="1cc2875342965d3782a4089a6db60f9d92d670ae6562091eed55ea54797748ff" exitCode=0 Mar 19 12:28:56.630498 master-0 kubenswrapper[31830]: I0319 12:28:56.630452 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wsb29" event={"ID":"b5f62698-fef6-43f6-9f5b-9bef1af00d47","Type":"ContainerDied","Data":"1cc2875342965d3782a4089a6db60f9d92d670ae6562091eed55ea54797748ff"} Mar 19 12:28:56.633343 master-0 kubenswrapper[31830]: I0319 12:28:56.633320 31830 generic.go:334] "Generic (PLEG): container finished" podID="85f52972-f923-4479-a7f9-595e1d62c0ab" containerID="8b9353fe0361e7b1e5c2b4487b8fb583a3725261c1054255e7bbb08b5d40c9b1" exitCode=0 Mar 19 12:28:56.633519 master-0 kubenswrapper[31830]: I0319 12:28:56.633444 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g2z27" event={"ID":"85f52972-f923-4479-a7f9-595e1d62c0ab","Type":"ContainerDied","Data":"8b9353fe0361e7b1e5c2b4487b8fb583a3725261c1054255e7bbb08b5d40c9b1"} Mar 19 12:28:57.130352 master-0 kubenswrapper[31830]: I0319 12:28:57.130300 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1cl8xk" Mar 19 12:28:57.279648 master-0 kubenswrapper[31830]: I0319 12:28:57.279581 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29frs\" (UniqueName: \"kubernetes.io/projected/b8cc40de-7004-4f6d-bd80-559599cf5e8b-kube-api-access-29frs\") pod \"b8cc40de-7004-4f6d-bd80-559599cf5e8b\" (UID: \"b8cc40de-7004-4f6d-bd80-559599cf5e8b\") " Mar 19 12:28:57.279927 master-0 kubenswrapper[31830]: I0319 12:28:57.279751 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b8cc40de-7004-4f6d-bd80-559599cf5e8b-bundle\") pod \"b8cc40de-7004-4f6d-bd80-559599cf5e8b\" (UID: \"b8cc40de-7004-4f6d-bd80-559599cf5e8b\") " Mar 19 12:28:57.279927 master-0 kubenswrapper[31830]: I0319 12:28:57.279910 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b8cc40de-7004-4f6d-bd80-559599cf5e8b-util\") pod \"b8cc40de-7004-4f6d-bd80-559599cf5e8b\" (UID: \"b8cc40de-7004-4f6d-bd80-559599cf5e8b\") " Mar 19 12:28:57.280714 master-0 kubenswrapper[31830]: I0319 12:28:57.280667 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8cc40de-7004-4f6d-bd80-559599cf5e8b-bundle" (OuterVolumeSpecName: "bundle") pod "b8cc40de-7004-4f6d-bd80-559599cf5e8b" (UID: "b8cc40de-7004-4f6d-bd80-559599cf5e8b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 12:28:57.282396 master-0 kubenswrapper[31830]: I0319 12:28:57.282348 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8cc40de-7004-4f6d-bd80-559599cf5e8b-kube-api-access-29frs" (OuterVolumeSpecName: "kube-api-access-29frs") pod "b8cc40de-7004-4f6d-bd80-559599cf5e8b" (UID: "b8cc40de-7004-4f6d-bd80-559599cf5e8b"). InnerVolumeSpecName "kube-api-access-29frs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:28:57.365535 master-0 kubenswrapper[31830]: I0319 12:28:57.365450 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8cc40de-7004-4f6d-bd80-559599cf5e8b-util" (OuterVolumeSpecName: "util") pod "b8cc40de-7004-4f6d-bd80-559599cf5e8b" (UID: "b8cc40de-7004-4f6d-bd80-559599cf5e8b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 12:28:57.382249 master-0 kubenswrapper[31830]: I0319 12:28:57.382068 31830 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b8cc40de-7004-4f6d-bd80-559599cf5e8b-util\") on node \"master-0\" DevicePath \"\"" Mar 19 12:28:57.382249 master-0 kubenswrapper[31830]: I0319 12:28:57.382157 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29frs\" (UniqueName: \"kubernetes.io/projected/b8cc40de-7004-4f6d-bd80-559599cf5e8b-kube-api-access-29frs\") on node \"master-0\" DevicePath \"\"" Mar 19 12:28:57.382249 master-0 kubenswrapper[31830]: I0319 12:28:57.382181 31830 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b8cc40de-7004-4f6d-bd80-559599cf5e8b-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:28:57.645943 master-0 kubenswrapper[31830]: I0319 12:28:57.645788 31830 generic.go:334] "Generic (PLEG): container finished" podID="85f52972-f923-4479-a7f9-595e1d62c0ab" containerID="5eca1dcad6fb681169c7ac82412feb2d6d4f9b6b4c20a9a031dc642e33c07883" exitCode=0 Mar 19 12:28:57.645943 master-0 kubenswrapper[31830]: I0319 12:28:57.645881 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g2z27" event={"ID":"85f52972-f923-4479-a7f9-595e1d62c0ab","Type":"ContainerDied","Data":"5eca1dcad6fb681169c7ac82412feb2d6d4f9b6b4c20a9a031dc642e33c07883"} Mar 19 12:28:57.649567 master-0 kubenswrapper[31830]: I0319 12:28:57.649501 31830 generic.go:334] "Generic (PLEG): container finished" podID="b5f62698-fef6-43f6-9f5b-9bef1af00d47" containerID="2660cae5c2f79a6a4d0f446373067694f5a7e0cbd6259af21cd5c8ab814a7543" exitCode=0 Mar 19 12:28:57.649923 master-0 kubenswrapper[31830]: I0319 12:28:57.649589 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wsb29" event={"ID":"b5f62698-fef6-43f6-9f5b-9bef1af00d47","Type":"ContainerDied","Data":"2660cae5c2f79a6a4d0f446373067694f5a7e0cbd6259af21cd5c8ab814a7543"} Mar 19 12:28:57.652874 master-0 kubenswrapper[31830]: I0319 12:28:57.652844 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1cl8xk" event={"ID":"b8cc40de-7004-4f6d-bd80-559599cf5e8b","Type":"ContainerDied","Data":"90b1caec8126b49b50fc8b745e1c060d242529393d00b0885349112a41c53586"} Mar 19 12:28:57.652874 master-0 kubenswrapper[31830]: I0319 12:28:57.652871 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90b1caec8126b49b50fc8b745e1c060d242529393d00b0885349112a41c53586" Mar 19 12:28:57.653078 master-0 kubenswrapper[31830]: I0319 12:28:57.652985 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1cl8xk" Mar 19 12:28:58.977366 master-0 kubenswrapper[31830]: I0319 12:28:58.977321 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wsb29" Mar 19 12:28:59.043067 master-0 kubenswrapper[31830]: I0319 12:28:59.009534 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rt2b\" (UniqueName: \"kubernetes.io/projected/b5f62698-fef6-43f6-9f5b-9bef1af00d47-kube-api-access-8rt2b\") pod \"b5f62698-fef6-43f6-9f5b-9bef1af00d47\" (UID: \"b5f62698-fef6-43f6-9f5b-9bef1af00d47\") " Mar 19 12:28:59.043067 master-0 kubenswrapper[31830]: I0319 12:28:59.009663 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b5f62698-fef6-43f6-9f5b-9bef1af00d47-util\") pod \"b5f62698-fef6-43f6-9f5b-9bef1af00d47\" (UID: \"b5f62698-fef6-43f6-9f5b-9bef1af00d47\") " Mar 19 12:28:59.043067 master-0 kubenswrapper[31830]: I0319 12:28:59.009687 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b5f62698-fef6-43f6-9f5b-9bef1af00d47-bundle\") pod \"b5f62698-fef6-43f6-9f5b-9bef1af00d47\" (UID: \"b5f62698-fef6-43f6-9f5b-9bef1af00d47\") " Mar 19 12:28:59.043067 master-0 kubenswrapper[31830]: I0319 12:28:59.010438 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5f62698-fef6-43f6-9f5b-9bef1af00d47-bundle" (OuterVolumeSpecName: "bundle") pod "b5f62698-fef6-43f6-9f5b-9bef1af00d47" (UID: "b5f62698-fef6-43f6-9f5b-9bef1af00d47"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 12:28:59.043067 master-0 kubenswrapper[31830]: I0319 12:28:59.023986 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5f62698-fef6-43f6-9f5b-9bef1af00d47-util" (OuterVolumeSpecName: "util") pod "b5f62698-fef6-43f6-9f5b-9bef1af00d47" (UID: "b5f62698-fef6-43f6-9f5b-9bef1af00d47"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 12:28:59.044369 master-0 kubenswrapper[31830]: I0319 12:28:59.044295 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5f62698-fef6-43f6-9f5b-9bef1af00d47-kube-api-access-8rt2b" (OuterVolumeSpecName: "kube-api-access-8rt2b") pod "b5f62698-fef6-43f6-9f5b-9bef1af00d47" (UID: "b5f62698-fef6-43f6-9f5b-9bef1af00d47"). InnerVolumeSpecName "kube-api-access-8rt2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:28:59.092778 master-0 kubenswrapper[31830]: I0319 12:28:59.092747 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g2z27" Mar 19 12:28:59.110759 master-0 kubenswrapper[31830]: I0319 12:28:59.110720 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/85f52972-f923-4479-a7f9-595e1d62c0ab-bundle\") pod \"85f52972-f923-4479-a7f9-595e1d62c0ab\" (UID: \"85f52972-f923-4479-a7f9-595e1d62c0ab\") " Mar 19 12:28:59.111215 master-0 kubenswrapper[31830]: I0319 12:28:59.111176 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/85f52972-f923-4479-a7f9-595e1d62c0ab-util\") pod \"85f52972-f923-4479-a7f9-595e1d62c0ab\" (UID: \"85f52972-f923-4479-a7f9-595e1d62c0ab\") " Mar 19 12:28:59.111620 master-0 kubenswrapper[31830]: I0319 12:28:59.111604 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jntxp\" (UniqueName: \"kubernetes.io/projected/85f52972-f923-4479-a7f9-595e1d62c0ab-kube-api-access-jntxp\") pod \"85f52972-f923-4479-a7f9-595e1d62c0ab\" (UID: \"85f52972-f923-4479-a7f9-595e1d62c0ab\") " Mar 19 12:28:59.112141 master-0 kubenswrapper[31830]: I0319 12:28:59.112122 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rt2b\" (UniqueName: \"kubernetes.io/projected/b5f62698-fef6-43f6-9f5b-9bef1af00d47-kube-api-access-8rt2b\") on node \"master-0\" DevicePath \"\"" Mar 19 12:28:59.112240 master-0 kubenswrapper[31830]: I0319 12:28:59.112229 31830 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b5f62698-fef6-43f6-9f5b-9bef1af00d47-util\") on node \"master-0\" DevicePath \"\"" Mar 19 12:28:59.112629 master-0 kubenswrapper[31830]: I0319 12:28:59.112612 31830 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b5f62698-fef6-43f6-9f5b-9bef1af00d47-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:28:59.119484 master-0 kubenswrapper[31830]: I0319 12:28:59.118838 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85f52972-f923-4479-a7f9-595e1d62c0ab-kube-api-access-jntxp" (OuterVolumeSpecName: "kube-api-access-jntxp") pod "85f52972-f923-4479-a7f9-595e1d62c0ab" (UID: "85f52972-f923-4479-a7f9-595e1d62c0ab"). InnerVolumeSpecName "kube-api-access-jntxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:28:59.122750 master-0 kubenswrapper[31830]: I0319 12:28:59.122681 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85f52972-f923-4479-a7f9-595e1d62c0ab-bundle" (OuterVolumeSpecName: "bundle") pod "85f52972-f923-4479-a7f9-595e1d62c0ab" (UID: "85f52972-f923-4479-a7f9-595e1d62c0ab"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 12:28:59.142927 master-0 kubenswrapper[31830]: I0319 12:28:59.142550 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85f52972-f923-4479-a7f9-595e1d62c0ab-util" (OuterVolumeSpecName: "util") pod "85f52972-f923-4479-a7f9-595e1d62c0ab" (UID: "85f52972-f923-4479-a7f9-595e1d62c0ab"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 12:28:59.216605 master-0 kubenswrapper[31830]: I0319 12:28:59.213875 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jntxp\" (UniqueName: \"kubernetes.io/projected/85f52972-f923-4479-a7f9-595e1d62c0ab-kube-api-access-jntxp\") on node \"master-0\" DevicePath \"\"" Mar 19 12:28:59.216605 master-0 kubenswrapper[31830]: I0319 12:28:59.213920 31830 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/85f52972-f923-4479-a7f9-595e1d62c0ab-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:28:59.216605 master-0 kubenswrapper[31830]: I0319 12:28:59.213935 31830 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/85f52972-f923-4479-a7f9-595e1d62c0ab-util\") on node \"master-0\" DevicePath \"\"" Mar 19 12:28:59.669689 master-0 kubenswrapper[31830]: I0319 12:28:59.669579 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g2z27" event={"ID":"85f52972-f923-4479-a7f9-595e1d62c0ab","Type":"ContainerDied","Data":"ea96e65ba66d0bf29e179053305a4e92457c0d741a63b92e7e3b658f609b6573"} Mar 19 12:28:59.669689 master-0 kubenswrapper[31830]: I0319 12:28:59.669632 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea96e65ba66d0bf29e179053305a4e92457c0d741a63b92e7e3b658f609b6573" Mar 19 12:28:59.669689 master-0 kubenswrapper[31830]: I0319 12:28:59.669634 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g2z27" Mar 19 12:28:59.676979 master-0 kubenswrapper[31830]: I0319 12:28:59.676919 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wsb29" Mar 19 12:28:59.688941 master-0 kubenswrapper[31830]: I0319 12:28:59.687946 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wsb29" event={"ID":"b5f62698-fef6-43f6-9f5b-9bef1af00d47","Type":"ContainerDied","Data":"936ddb8c063612346613ec9148838502e56ce5b81571bab5bc7f948edf77f649"} Mar 19 12:28:59.688941 master-0 kubenswrapper[31830]: I0319 12:28:59.688005 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="936ddb8c063612346613ec9148838502e56ce5b81571bab5bc7f948edf77f649" Mar 19 12:29:00.048674 master-0 kubenswrapper[31830]: I0319 12:29:00.048601 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mpnhg"] Mar 19 12:29:00.049204 master-0 kubenswrapper[31830]: E0319 12:29:00.049095 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8cc40de-7004-4f6d-bd80-559599cf5e8b" containerName="pull" Mar 19 12:29:00.049204 master-0 kubenswrapper[31830]: I0319 12:29:00.049117 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8cc40de-7004-4f6d-bd80-559599cf5e8b" containerName="pull" Mar 19 12:29:00.049204 master-0 kubenswrapper[31830]: E0319 12:29:00.049143 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8cc40de-7004-4f6d-bd80-559599cf5e8b" containerName="util" Mar 19 12:29:00.049204 master-0 kubenswrapper[31830]: I0319 12:29:00.049155 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8cc40de-7004-4f6d-bd80-559599cf5e8b" containerName="util" Mar 19 12:29:00.049204 master-0 kubenswrapper[31830]: E0319 12:29:00.049179 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85f52972-f923-4479-a7f9-595e1d62c0ab" containerName="pull" Mar 19 12:29:00.049204 master-0 kubenswrapper[31830]: I0319 12:29:00.049190 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="85f52972-f923-4479-a7f9-595e1d62c0ab" containerName="pull" Mar 19 12:29:00.049413 master-0 kubenswrapper[31830]: E0319 12:29:00.049213 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f62698-fef6-43f6-9f5b-9bef1af00d47" containerName="extract" Mar 19 12:29:00.049413 master-0 kubenswrapper[31830]: I0319 12:29:00.049225 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f62698-fef6-43f6-9f5b-9bef1af00d47" containerName="extract" Mar 19 12:29:00.049413 master-0 kubenswrapper[31830]: E0319 12:29:00.049237 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8cc40de-7004-4f6d-bd80-559599cf5e8b" containerName="extract" Mar 19 12:29:00.049413 master-0 kubenswrapper[31830]: I0319 12:29:00.049248 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8cc40de-7004-4f6d-bd80-559599cf5e8b" containerName="extract" Mar 19 12:29:00.049413 master-0 kubenswrapper[31830]: E0319 12:29:00.049278 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f62698-fef6-43f6-9f5b-9bef1af00d47" containerName="util" Mar 19 12:29:00.049413 master-0 kubenswrapper[31830]: I0319 12:29:00.049289 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f62698-fef6-43f6-9f5b-9bef1af00d47" containerName="util" Mar 19 12:29:00.049413 master-0 kubenswrapper[31830]: E0319 12:29:00.049303 31830 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="85f52972-f923-4479-a7f9-595e1d62c0ab" containerName="util" Mar 19 12:29:00.049413 master-0 kubenswrapper[31830]: I0319 12:29:00.049313 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="85f52972-f923-4479-a7f9-595e1d62c0ab" containerName="util" Mar 19 12:29:00.049413 master-0 kubenswrapper[31830]: E0319 12:29:00.049333 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f62698-fef6-43f6-9f5b-9bef1af00d47" containerName="pull" Mar 19 12:29:00.049413 master-0 kubenswrapper[31830]: I0319 12:29:00.049346 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f62698-fef6-43f6-9f5b-9bef1af00d47" containerName="pull" Mar 19 12:29:00.049413 master-0 kubenswrapper[31830]: E0319 12:29:00.049365 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85f52972-f923-4479-a7f9-595e1d62c0ab" containerName="extract" Mar 19 12:29:00.049413 master-0 kubenswrapper[31830]: I0319 12:29:00.049375 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="85f52972-f923-4479-a7f9-595e1d62c0ab" containerName="extract" Mar 19 12:29:00.049768 master-0 kubenswrapper[31830]: I0319 12:29:00.049606 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8cc40de-7004-4f6d-bd80-559599cf5e8b" containerName="extract" Mar 19 12:29:00.049768 master-0 kubenswrapper[31830]: I0319 12:29:00.049647 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="85f52972-f923-4479-a7f9-595e1d62c0ab" containerName="extract" Mar 19 12:29:00.049768 master-0 kubenswrapper[31830]: I0319 12:29:00.049666 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5f62698-fef6-43f6-9f5b-9bef1af00d47" containerName="extract" Mar 19 12:29:00.051376 master-0 kubenswrapper[31830]: I0319 12:29:00.051323 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mpnhg" Mar 19 12:29:00.054298 master-0 kubenswrapper[31830]: I0319 12:29:00.054248 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mpnhg"] Mar 19 12:29:00.229258 master-0 kubenswrapper[31830]: I0319 12:29:00.229178 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8qhx\" (UniqueName: \"kubernetes.io/projected/23ec95e1-075b-486d-a765-32b393e90574-kube-api-access-x8qhx\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mpnhg\" (UID: \"23ec95e1-075b-486d-a765-32b393e90574\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mpnhg" Mar 19 12:29:00.229508 master-0 kubenswrapper[31830]: I0319 12:29:00.229456 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/23ec95e1-075b-486d-a765-32b393e90574-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mpnhg\" (UID: \"23ec95e1-075b-486d-a765-32b393e90574\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mpnhg" Mar 19 12:29:00.229640 master-0 kubenswrapper[31830]: I0319 12:29:00.229596 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/23ec95e1-075b-486d-a765-32b393e90574-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mpnhg\" (UID: \"23ec95e1-075b-486d-a765-32b393e90574\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mpnhg" Mar 19 12:29:00.331042 master-0 kubenswrapper[31830]: I0319 12:29:00.330873 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/23ec95e1-075b-486d-a765-32b393e90574-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mpnhg\" (UID: \"23ec95e1-075b-486d-a765-32b393e90574\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mpnhg" Mar 19 12:29:00.331042 master-0 kubenswrapper[31830]: I0319 12:29:00.330961 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/23ec95e1-075b-486d-a765-32b393e90574-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mpnhg\" (UID: \"23ec95e1-075b-486d-a765-32b393e90574\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mpnhg" Mar 19 12:29:00.331042 master-0 kubenswrapper[31830]: I0319 12:29:00.331019 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8qhx\" (UniqueName: \"kubernetes.io/projected/23ec95e1-075b-486d-a765-32b393e90574-kube-api-access-x8qhx\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mpnhg\" (UID: \"23ec95e1-075b-486d-a765-32b393e90574\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mpnhg" Mar 19 12:29:00.331619 master-0 kubenswrapper[31830]: I0319 12:29:00.331559 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/23ec95e1-075b-486d-a765-32b393e90574-bundle\") pod 
\"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mpnhg\" (UID: \"23ec95e1-075b-486d-a765-32b393e90574\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mpnhg" Mar 19 12:29:00.331985 master-0 kubenswrapper[31830]: I0319 12:29:00.331921 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/23ec95e1-075b-486d-a765-32b393e90574-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mpnhg\" (UID: \"23ec95e1-075b-486d-a765-32b393e90574\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mpnhg" Mar 19 12:29:00.348961 master-0 kubenswrapper[31830]: I0319 12:29:00.348869 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8qhx\" (UniqueName: \"kubernetes.io/projected/23ec95e1-075b-486d-a765-32b393e90574-kube-api-access-x8qhx\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mpnhg\" (UID: \"23ec95e1-075b-486d-a765-32b393e90574\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mpnhg" Mar 19 12:29:00.375685 master-0 kubenswrapper[31830]: I0319 12:29:00.375632 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mpnhg" Mar 19 12:29:00.781823 master-0 kubenswrapper[31830]: I0319 12:29:00.781722 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mpnhg"] Mar 19 12:29:00.783232 master-0 kubenswrapper[31830]: W0319 12:29:00.783159 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23ec95e1_075b_486d_a765_32b393e90574.slice/crio-ece6d160cb8585e5cb0e171ecc668f2414fbcd807207edd011b2385674fd7dc3 WatchSource:0}: Error finding container ece6d160cb8585e5cb0e171ecc668f2414fbcd807207edd011b2385674fd7dc3: Status 404 returned error can't find the container with id ece6d160cb8585e5cb0e171ecc668f2414fbcd807207edd011b2385674fd7dc3 Mar 19 12:29:01.689991 master-0 kubenswrapper[31830]: I0319 12:29:01.689823 31830 generic.go:334] "Generic (PLEG): container finished" podID="23ec95e1-075b-486d-a765-32b393e90574" containerID="b32c58dce60a147805c338007def835ca8eed77abba11f02cdf34c4d5ebcca68" exitCode=0 Mar 19 12:29:01.689991 master-0 kubenswrapper[31830]: I0319 12:29:01.689863 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mpnhg" event={"ID":"23ec95e1-075b-486d-a765-32b393e90574","Type":"ContainerDied","Data":"b32c58dce60a147805c338007def835ca8eed77abba11f02cdf34c4d5ebcca68"} Mar 19 12:29:01.689991 master-0 kubenswrapper[31830]: I0319 12:29:01.689901 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mpnhg" event={"ID":"23ec95e1-075b-486d-a765-32b393e90574","Type":"ContainerStarted","Data":"ece6d160cb8585e5cb0e171ecc668f2414fbcd807207edd011b2385674fd7dc3"} Mar 19 12:29:04.729687 master-0 kubenswrapper[31830]: I0319 12:29:04.729593 31830 generic.go:334] "Generic (PLEG): container finished" podID="23ec95e1-075b-486d-a765-32b393e90574" containerID="2225d0ba062c79ec5fef411dc17588b7c45e0cb9c6813c10e789a9a9feaed1c8" exitCode=0 Mar 19 12:29:04.741822 master-0 kubenswrapper[31830]: I0319 12:29:04.736091 31830 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mpnhg" event={"ID":"23ec95e1-075b-486d-a765-32b393e90574","Type":"ContainerDied","Data":"2225d0ba062c79ec5fef411dc17588b7c45e0cb9c6813c10e789a9a9feaed1c8"} Mar 19 12:29:05.741949 master-0 kubenswrapper[31830]: I0319 12:29:05.741895 31830 generic.go:334] "Generic (PLEG): container finished" podID="23ec95e1-075b-486d-a765-32b393e90574" containerID="b1e4635ea7e8e5c939668ce250be214adee03bc28c2665f86a2b5ce444d32dd4" exitCode=0 Mar 19 12:29:05.741949 master-0 kubenswrapper[31830]: I0319 12:29:05.741949 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mpnhg" event={"ID":"23ec95e1-075b-486d-a765-32b393e90574","Type":"ContainerDied","Data":"b1e4635ea7e8e5c939668ce250be214adee03bc28c2665f86a2b5ce444d32dd4"} Mar 19 12:29:06.145593 master-0 kubenswrapper[31830]: I0319 12:29:06.145455 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-97dgg"] Mar 19 12:29:06.146688 master-0 kubenswrapper[31830]: I0319 12:29:06.146666 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-97dgg" Mar 19 12:29:06.173930 master-0 kubenswrapper[31830]: I0319 12:29:06.171264 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Mar 19 12:29:06.173930 master-0 kubenswrapper[31830]: I0319 12:29:06.171637 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Mar 19 12:29:06.231819 master-0 kubenswrapper[31830]: I0319 12:29:06.231413 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-97dgg"] Mar 19 12:29:06.267026 master-0 kubenswrapper[31830]: I0319 12:29:06.266980 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brz55\" (UniqueName: \"kubernetes.io/projected/23e06a6b-a949-4f93-b864-9db90095e21e-kube-api-access-brz55\") pod \"cert-manager-operator-controller-manager-66c8bdd694-97dgg\" (UID: \"23e06a6b-a949-4f93-b864-9db90095e21e\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-97dgg" Mar 19 12:29:06.267357 master-0 kubenswrapper[31830]: I0319 12:29:06.267339 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/23e06a6b-a949-4f93-b864-9db90095e21e-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-97dgg\" (UID: \"23e06a6b-a949-4f93-b864-9db90095e21e\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-97dgg" Mar 19 12:29:06.368934 master-0 kubenswrapper[31830]: I0319 12:29:06.368877 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brz55\" (UniqueName: \"kubernetes.io/projected/23e06a6b-a949-4f93-b864-9db90095e21e-kube-api-access-brz55\") pod \"cert-manager-operator-controller-manager-66c8bdd694-97dgg\" (UID: \"23e06a6b-a949-4f93-b864-9db90095e21e\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-97dgg" Mar 19 12:29:06.369140 master-0 kubenswrapper[31830]: I0319 12:29:06.369084 31830 
Mar 19 12:29:06.369581 master-0 kubenswrapper[31830]: I0319 12:29:06.369552 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/23e06a6b-a949-4f93-b864-9db90095e21e-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-97dgg\" (UID: \"23e06a6b-a949-4f93-b864-9db90095e21e\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-97dgg"
Mar 19 12:29:06.383486 master-0 kubenswrapper[31830]: I0319 12:29:06.383437 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brz55\" (UniqueName: \"kubernetes.io/projected/23e06a6b-a949-4f93-b864-9db90095e21e-kube-api-access-brz55\") pod \"cert-manager-operator-controller-manager-66c8bdd694-97dgg\" (UID: \"23e06a6b-a949-4f93-b864-9db90095e21e\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-97dgg"
Mar 19 12:29:06.511144 master-0 kubenswrapper[31830]: I0319 12:29:06.511092 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-97dgg"
Mar 19 12:29:07.102607 master-0 kubenswrapper[31830]: I0319 12:29:07.102545 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mpnhg"
Mar 19 12:29:07.194442 master-0 kubenswrapper[31830]: I0319 12:29:07.194306 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8qhx\" (UniqueName: \"kubernetes.io/projected/23ec95e1-075b-486d-a765-32b393e90574-kube-api-access-x8qhx\") pod \"23ec95e1-075b-486d-a765-32b393e90574\" (UID: \"23ec95e1-075b-486d-a765-32b393e90574\") "
Mar 19 12:29:07.194640 master-0 kubenswrapper[31830]: I0319 12:29:07.194533 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/23ec95e1-075b-486d-a765-32b393e90574-util\") pod \"23ec95e1-075b-486d-a765-32b393e90574\" (UID: \"23ec95e1-075b-486d-a765-32b393e90574\") "
Mar 19 12:29:07.194733 master-0 kubenswrapper[31830]: I0319 12:29:07.194685 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/23ec95e1-075b-486d-a765-32b393e90574-bundle\") pod \"23ec95e1-075b-486d-a765-32b393e90574\" (UID: \"23ec95e1-075b-486d-a765-32b393e90574\") "
Mar 19 12:29:07.197354 master-0 kubenswrapper[31830]: I0319 12:29:07.197049 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23ec95e1-075b-486d-a765-32b393e90574-bundle" (OuterVolumeSpecName: "bundle") pod "23ec95e1-075b-486d-a765-32b393e90574" (UID: "23ec95e1-075b-486d-a765-32b393e90574"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 19 12:29:07.202104 master-0 kubenswrapper[31830]: I0319 12:29:07.202054 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23ec95e1-075b-486d-a765-32b393e90574-kube-api-access-x8qhx" (OuterVolumeSpecName: "kube-api-access-x8qhx") pod "23ec95e1-075b-486d-a765-32b393e90574" (UID: "23ec95e1-075b-486d-a765-32b393e90574"). InnerVolumeSpecName "kube-api-access-x8qhx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 12:29:07.205126 master-0 kubenswrapper[31830]: I0319 12:29:07.204973 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23ec95e1-075b-486d-a765-32b393e90574-util" (OuterVolumeSpecName: "util") pod "23ec95e1-075b-486d-a765-32b393e90574" (UID: "23ec95e1-075b-486d-a765-32b393e90574"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 19 12:29:07.296919 master-0 kubenswrapper[31830]: I0319 12:29:07.296854 31830 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/23ec95e1-075b-486d-a765-32b393e90574-bundle\") on node \"master-0\" DevicePath \"\""
Mar 19 12:29:07.296919 master-0 kubenswrapper[31830]: I0319 12:29:07.296904 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8qhx\" (UniqueName: \"kubernetes.io/projected/23ec95e1-075b-486d-a765-32b393e90574-kube-api-access-x8qhx\") on node \"master-0\" DevicePath \"\""
Mar 19 12:29:07.296919 master-0 kubenswrapper[31830]: I0319 12:29:07.296914 31830 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/23ec95e1-075b-486d-a765-32b393e90574-util\") on node \"master-0\" DevicePath \"\""
Mar 19 12:29:07.380390 master-0 kubenswrapper[31830]: I0319 12:29:07.377448 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-97dgg"]
Mar 19 12:29:07.767365 master-0 kubenswrapper[31830]: I0319 12:29:07.767310 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mpnhg" event={"ID":"23ec95e1-075b-486d-a765-32b393e90574","Type":"ContainerDied","Data":"ece6d160cb8585e5cb0e171ecc668f2414fbcd807207edd011b2385674fd7dc3"}
Mar 19 12:29:07.767365 master-0 kubenswrapper[31830]: I0319 12:29:07.767359 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ece6d160cb8585e5cb0e171ecc668f2414fbcd807207edd011b2385674fd7dc3"
Mar 19 12:29:07.767365 master-0 kubenswrapper[31830]: I0319 12:29:07.767326 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mpnhg"
Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mpnhg" Mar 19 12:29:07.769192 master-0 kubenswrapper[31830]: I0319 12:29:07.769158 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-97dgg" event={"ID":"23e06a6b-a949-4f93-b864-9db90095e21e","Type":"ContainerStarted","Data":"56d68855b581a279aad47479c5fc9ea01bcca407fdae48756fdc3479acfaadbb"} Mar 19 12:29:11.811824 master-0 kubenswrapper[31830]: I0319 12:29:11.811759 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-97dgg" event={"ID":"23e06a6b-a949-4f93-b864-9db90095e21e","Type":"ContainerStarted","Data":"6d23c7b4302df7a1ba7ed7ea58c1084f5ced829ee69627dce753613437cc11d2"} Mar 19 12:29:11.845923 master-0 kubenswrapper[31830]: I0319 12:29:11.845811 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-97dgg" podStartSLOduration=2.1563741260000002 podStartE2EDuration="5.845774378s" podCreationTimestamp="2026-03-19 12:29:06 +0000 UTC" firstStartedPulling="2026-03-19 12:29:07.373652244 +0000 UTC m=+885.922612948" lastFinishedPulling="2026-03-19 12:29:11.063052496 +0000 UTC m=+889.612013200" observedRunningTime="2026-03-19 12:29:11.835732004 +0000 UTC m=+890.384692708" watchObservedRunningTime="2026-03-19 12:29:11.845774378 +0000 UTC m=+890.394735082" Mar 19 12:29:14.323891 master-0 kubenswrapper[31830]: I0319 12:29:14.323771 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-pt284"] Mar 19 12:29:14.324520 master-0 kubenswrapper[31830]: E0319 12:29:14.324175 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23ec95e1-075b-486d-a765-32b393e90574" containerName="util" Mar 19 12:29:14.324520 master-0 kubenswrapper[31830]: I0319 12:29:14.324192 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="23ec95e1-075b-486d-a765-32b393e90574" containerName="util" Mar 19 12:29:14.324520 master-0 kubenswrapper[31830]: E0319 12:29:14.324216 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23ec95e1-075b-486d-a765-32b393e90574" containerName="extract" Mar 19 12:29:14.324520 master-0 kubenswrapper[31830]: I0319 12:29:14.324225 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="23ec95e1-075b-486d-a765-32b393e90574" containerName="extract" Mar 19 12:29:14.324520 master-0 kubenswrapper[31830]: E0319 12:29:14.324250 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23ec95e1-075b-486d-a765-32b393e90574" containerName="pull" Mar 19 12:29:14.324520 master-0 kubenswrapper[31830]: I0319 12:29:14.324259 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="23ec95e1-075b-486d-a765-32b393e90574" containerName="pull" Mar 19 12:29:14.324520 master-0 kubenswrapper[31830]: I0319 12:29:14.324431 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="23ec95e1-075b-486d-a765-32b393e90574" containerName="extract" Mar 19 12:29:14.325061 master-0 kubenswrapper[31830]: I0319 12:29:14.325031 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-pt284" Mar 19 12:29:14.327158 master-0 kubenswrapper[31830]: I0319 12:29:14.327128 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Mar 19 12:29:14.327426 master-0 kubenswrapper[31830]: I0319 12:29:14.327393 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Mar 19 12:29:14.339462 master-0 kubenswrapper[31830]: I0319 12:29:14.339399 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-pt284"] Mar 19 12:29:14.419349 master-0 kubenswrapper[31830]: I0319 12:29:14.419289 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5dc7b064-f6f6-42ab-9901-9fccd9ece370-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-pt284\" (UID: \"5dc7b064-f6f6-42ab-9901-9fccd9ece370\") " pod="cert-manager/cert-manager-webhook-6888856db4-pt284" Mar 19 12:29:14.419607 master-0 kubenswrapper[31830]: I0319 12:29:14.419565 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr8rz\" (UniqueName: \"kubernetes.io/projected/5dc7b064-f6f6-42ab-9901-9fccd9ece370-kube-api-access-tr8rz\") pod \"cert-manager-webhook-6888856db4-pt284\" (UID: \"5dc7b064-f6f6-42ab-9901-9fccd9ece370\") " pod="cert-manager/cert-manager-webhook-6888856db4-pt284" Mar 19 12:29:14.520668 master-0 kubenswrapper[31830]: I0319 12:29:14.520604 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5dc7b064-f6f6-42ab-9901-9fccd9ece370-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-pt284\" (UID: \"5dc7b064-f6f6-42ab-9901-9fccd9ece370\") " pod="cert-manager/cert-manager-webhook-6888856db4-pt284" Mar 19 12:29:14.520929 master-0 kubenswrapper[31830]: I0319 12:29:14.520719 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr8rz\" (UniqueName: \"kubernetes.io/projected/5dc7b064-f6f6-42ab-9901-9fccd9ece370-kube-api-access-tr8rz\") pod \"cert-manager-webhook-6888856db4-pt284\" (UID: \"5dc7b064-f6f6-42ab-9901-9fccd9ece370\") " pod="cert-manager/cert-manager-webhook-6888856db4-pt284" Mar 19 12:29:14.540417 master-0 kubenswrapper[31830]: I0319 12:29:14.540368 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5dc7b064-f6f6-42ab-9901-9fccd9ece370-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-pt284\" (UID: \"5dc7b064-f6f6-42ab-9901-9fccd9ece370\") " pod="cert-manager/cert-manager-webhook-6888856db4-pt284" Mar 19 12:29:14.541082 master-0 kubenswrapper[31830]: I0319 12:29:14.541027 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr8rz\" (UniqueName: \"kubernetes.io/projected/5dc7b064-f6f6-42ab-9901-9fccd9ece370-kube-api-access-tr8rz\") pod \"cert-manager-webhook-6888856db4-pt284\" (UID: \"5dc7b064-f6f6-42ab-9901-9fccd9ece370\") " pod="cert-manager/cert-manager-webhook-6888856db4-pt284" Mar 19 12:29:14.640455 master-0 kubenswrapper[31830]: I0319 12:29:14.640331 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-pt284" Mar 19 12:29:15.108064 master-0 kubenswrapper[31830]: I0319 12:29:15.108012 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-pt284"] Mar 19 12:29:15.850267 master-0 kubenswrapper[31830]: I0319 12:29:15.850225 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-pt284" event={"ID":"5dc7b064-f6f6-42ab-9901-9fccd9ece370","Type":"ContainerStarted","Data":"ecd42b4f25e22e2190cf1f7ce9d38eff65192101bde94f467554212578c511e3"} Mar 19 12:29:16.436401 master-0 kubenswrapper[31830]: I0319 12:29:16.436346 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-8wgzd"] Mar 19 12:29:16.437293 master-0 kubenswrapper[31830]: I0319 12:29:16.437268 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-8wgzd" Mar 19 12:29:16.453818 master-0 kubenswrapper[31830]: I0319 12:29:16.453745 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-8wgzd"] Mar 19 12:29:16.552987 master-0 kubenswrapper[31830]: I0319 12:29:16.552924 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chjkl\" (UniqueName: \"kubernetes.io/projected/4d51abd0-9f7e-445e-aea5-9845bf559ba9-kube-api-access-chjkl\") pod \"cert-manager-cainjector-5545bd876-8wgzd\" (UID: \"4d51abd0-9f7e-445e-aea5-9845bf559ba9\") " pod="cert-manager/cert-manager-cainjector-5545bd876-8wgzd" Mar 19 12:29:16.553231 master-0 kubenswrapper[31830]: I0319 12:29:16.553015 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d51abd0-9f7e-445e-aea5-9845bf559ba9-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-8wgzd\" (UID: \"4d51abd0-9f7e-445e-aea5-9845bf559ba9\") " pod="cert-manager/cert-manager-cainjector-5545bd876-8wgzd" Mar 19 12:29:16.655494 master-0 kubenswrapper[31830]: I0319 12:29:16.655443 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chjkl\" (UniqueName: \"kubernetes.io/projected/4d51abd0-9f7e-445e-aea5-9845bf559ba9-kube-api-access-chjkl\") pod \"cert-manager-cainjector-5545bd876-8wgzd\" (UID: \"4d51abd0-9f7e-445e-aea5-9845bf559ba9\") " pod="cert-manager/cert-manager-cainjector-5545bd876-8wgzd" Mar 19 12:29:16.655855 master-0 kubenswrapper[31830]: I0319 12:29:16.655785 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d51abd0-9f7e-445e-aea5-9845bf559ba9-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-8wgzd\" (UID: \"4d51abd0-9f7e-445e-aea5-9845bf559ba9\") " pod="cert-manager/cert-manager-cainjector-5545bd876-8wgzd" Mar 19 12:29:16.678023 master-0 kubenswrapper[31830]: I0319 12:29:16.677953 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chjkl\" (UniqueName: \"kubernetes.io/projected/4d51abd0-9f7e-445e-aea5-9845bf559ba9-kube-api-access-chjkl\") pod \"cert-manager-cainjector-5545bd876-8wgzd\" (UID: \"4d51abd0-9f7e-445e-aea5-9845bf559ba9\") " pod="cert-manager/cert-manager-cainjector-5545bd876-8wgzd" Mar 19 12:29:16.687566 master-0 kubenswrapper[31830]: I0319 12:29:16.687456 31830 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d51abd0-9f7e-445e-aea5-9845bf559ba9-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-8wgzd\" (UID: \"4d51abd0-9f7e-445e-aea5-9845bf559ba9\") " pod="cert-manager/cert-manager-cainjector-5545bd876-8wgzd" Mar 19 12:29:16.751499 master-0 kubenswrapper[31830]: I0319 12:29:16.751427 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-8wgzd" Mar 19 12:29:17.268543 master-0 kubenswrapper[31830]: I0319 12:29:17.268478 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-8wgzd"] Mar 19 12:29:17.868487 master-0 kubenswrapper[31830]: I0319 12:29:17.868428 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-8wgzd" event={"ID":"4d51abd0-9f7e-445e-aea5-9845bf559ba9","Type":"ContainerStarted","Data":"462409c981af83c8244240edced52780cf4568daafcf938235409d7d0d35a2e2"} Mar 19 12:29:18.958534 master-0 kubenswrapper[31830]: I0319 12:29:18.958400 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-jx76l"] Mar 19 12:29:18.960603 master-0 kubenswrapper[31830]: I0319 12:29:18.959664 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-jx76l" Mar 19 12:29:18.961722 master-0 kubenswrapper[31830]: I0319 12:29:18.961682 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Mar 19 12:29:18.961894 master-0 kubenswrapper[31830]: I0319 12:29:18.961864 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Mar 19 12:29:18.979410 master-0 kubenswrapper[31830]: I0319 12:29:18.979352 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-jx76l"] Mar 19 12:29:19.117083 master-0 kubenswrapper[31830]: I0319 12:29:19.117016 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk9l5\" (UniqueName: \"kubernetes.io/projected/dd31e5af-9ecd-4aee-b004-dff990a8c353-kube-api-access-hk9l5\") pod \"nmstate-operator-796d4cfff4-jx76l\" (UID: \"dd31e5af-9ecd-4aee-b004-dff990a8c353\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-jx76l" Mar 19 12:29:19.234874 master-0 kubenswrapper[31830]: I0319 12:29:19.234821 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk9l5\" (UniqueName: \"kubernetes.io/projected/dd31e5af-9ecd-4aee-b004-dff990a8c353-kube-api-access-hk9l5\") pod \"nmstate-operator-796d4cfff4-jx76l\" (UID: \"dd31e5af-9ecd-4aee-b004-dff990a8c353\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-jx76l" Mar 19 12:29:19.258455 master-0 kubenswrapper[31830]: I0319 12:29:19.258401 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk9l5\" (UniqueName: \"kubernetes.io/projected/dd31e5af-9ecd-4aee-b004-dff990a8c353-kube-api-access-hk9l5\") pod \"nmstate-operator-796d4cfff4-jx76l\" (UID: \"dd31e5af-9ecd-4aee-b004-dff990a8c353\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-jx76l" Mar 19 12:29:19.282414 master-0 kubenswrapper[31830]: I0319 12:29:19.282371 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-jx76l" Mar 19 12:29:19.923452 master-0 kubenswrapper[31830]: W0319 12:29:19.923352 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd31e5af_9ecd_4aee_b004_dff990a8c353.slice/crio-c6da08fa7cc4d75fcd7f47d6d63513628cd091de36ba47ced25de106b6c1460c WatchSource:0}: Error finding container c6da08fa7cc4d75fcd7f47d6d63513628cd091de36ba47ced25de106b6c1460c: Status 404 returned error can't find the container with id c6da08fa7cc4d75fcd7f47d6d63513628cd091de36ba47ced25de106b6c1460c Mar 19 12:29:19.926399 master-0 kubenswrapper[31830]: I0319 12:29:19.926337 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-jx76l"] Mar 19 12:29:20.602449 master-0 kubenswrapper[31830]: I0319 12:29:20.602375 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-8ddbf4b7-fw4vt"] Mar 19 12:29:20.603408 master-0 kubenswrapper[31830]: I0319 12:29:20.603374 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-8ddbf4b7-fw4vt" Mar 19 12:29:20.605726 master-0 kubenswrapper[31830]: I0319 12:29:20.605697 31830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Mar 19 12:29:20.606906 master-0 kubenswrapper[31830]: I0319 12:29:20.606050 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Mar 19 12:29:20.606906 master-0 kubenswrapper[31830]: I0319 12:29:20.606231 31830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Mar 19 12:29:20.606906 master-0 kubenswrapper[31830]: I0319 12:29:20.606392 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Mar 19 12:29:20.624547 master-0 kubenswrapper[31830]: I0319 12:29:20.624392 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-8ddbf4b7-fw4vt"] Mar 19 12:29:20.683823 master-0 kubenswrapper[31830]: I0319 12:29:20.678265 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpv8c\" (UniqueName: \"kubernetes.io/projected/a25ef66c-55db-41fb-83bc-be7e7981145b-kube-api-access-lpv8c\") pod \"metallb-operator-controller-manager-8ddbf4b7-fw4vt\" (UID: \"a25ef66c-55db-41fb-83bc-be7e7981145b\") " pod="metallb-system/metallb-operator-controller-manager-8ddbf4b7-fw4vt" Mar 19 12:29:20.683823 master-0 kubenswrapper[31830]: I0319 12:29:20.678334 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a25ef66c-55db-41fb-83bc-be7e7981145b-webhook-cert\") pod \"metallb-operator-controller-manager-8ddbf4b7-fw4vt\" (UID: \"a25ef66c-55db-41fb-83bc-be7e7981145b\") " pod="metallb-system/metallb-operator-controller-manager-8ddbf4b7-fw4vt" Mar 19 12:29:20.683823 master-0 kubenswrapper[31830]: I0319 12:29:20.678360 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a25ef66c-55db-41fb-83bc-be7e7981145b-apiservice-cert\") pod \"metallb-operator-controller-manager-8ddbf4b7-fw4vt\" (UID: 
\"a25ef66c-55db-41fb-83bc-be7e7981145b\") " pod="metallb-system/metallb-operator-controller-manager-8ddbf4b7-fw4vt" Mar 19 12:29:20.779879 master-0 kubenswrapper[31830]: I0319 12:29:20.779815 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpv8c\" (UniqueName: \"kubernetes.io/projected/a25ef66c-55db-41fb-83bc-be7e7981145b-kube-api-access-lpv8c\") pod \"metallb-operator-controller-manager-8ddbf4b7-fw4vt\" (UID: \"a25ef66c-55db-41fb-83bc-be7e7981145b\") " pod="metallb-system/metallb-operator-controller-manager-8ddbf4b7-fw4vt" Mar 19 12:29:20.780071 master-0 kubenswrapper[31830]: I0319 12:29:20.779897 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a25ef66c-55db-41fb-83bc-be7e7981145b-webhook-cert\") pod \"metallb-operator-controller-manager-8ddbf4b7-fw4vt\" (UID: \"a25ef66c-55db-41fb-83bc-be7e7981145b\") " pod="metallb-system/metallb-operator-controller-manager-8ddbf4b7-fw4vt" Mar 19 12:29:20.780071 master-0 kubenswrapper[31830]: I0319 12:29:20.779941 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a25ef66c-55db-41fb-83bc-be7e7981145b-apiservice-cert\") pod \"metallb-operator-controller-manager-8ddbf4b7-fw4vt\" (UID: \"a25ef66c-55db-41fb-83bc-be7e7981145b\") " pod="metallb-system/metallb-operator-controller-manager-8ddbf4b7-fw4vt" Mar 19 12:29:20.789901 master-0 kubenswrapper[31830]: I0319 12:29:20.788608 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a25ef66c-55db-41fb-83bc-be7e7981145b-apiservice-cert\") pod \"metallb-operator-controller-manager-8ddbf4b7-fw4vt\" (UID: \"a25ef66c-55db-41fb-83bc-be7e7981145b\") " pod="metallb-system/metallb-operator-controller-manager-8ddbf4b7-fw4vt" Mar 19 12:29:20.800843 master-0 kubenswrapper[31830]: I0319 12:29:20.799966 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a25ef66c-55db-41fb-83bc-be7e7981145b-webhook-cert\") pod \"metallb-operator-controller-manager-8ddbf4b7-fw4vt\" (UID: \"a25ef66c-55db-41fb-83bc-be7e7981145b\") " pod="metallb-system/metallb-operator-controller-manager-8ddbf4b7-fw4vt" Mar 19 12:29:20.809816 master-0 kubenswrapper[31830]: I0319 12:29:20.806841 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpv8c\" (UniqueName: \"kubernetes.io/projected/a25ef66c-55db-41fb-83bc-be7e7981145b-kube-api-access-lpv8c\") pod \"metallb-operator-controller-manager-8ddbf4b7-fw4vt\" (UID: \"a25ef66c-55db-41fb-83bc-be7e7981145b\") " pod="metallb-system/metallb-operator-controller-manager-8ddbf4b7-fw4vt" Mar 19 12:29:20.919891 master-0 kubenswrapper[31830]: I0319 12:29:20.916288 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-jx76l" event={"ID":"dd31e5af-9ecd-4aee-b004-dff990a8c353","Type":"ContainerStarted","Data":"c6da08fa7cc4d75fcd7f47d6d63513628cd091de36ba47ced25de106b6c1460c"} Mar 19 12:29:20.944668 master-0 kubenswrapper[31830]: I0319 12:29:20.944306 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-8ddbf4b7-fw4vt" Mar 19 12:29:21.140737 master-0 kubenswrapper[31830]: I0319 12:29:21.140693 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-8665ccc68-62qpd"] Mar 19 12:29:21.141862 master-0 kubenswrapper[31830]: I0319 12:29:21.141847 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-8665ccc68-62qpd" Mar 19 12:29:21.147368 master-0 kubenswrapper[31830]: I0319 12:29:21.144272 31830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 19 12:29:21.147368 master-0 kubenswrapper[31830]: I0319 12:29:21.144504 31830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Mar 19 12:29:21.192634 master-0 kubenswrapper[31830]: I0319 12:29:21.192532 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbd7h\" (UniqueName: \"kubernetes.io/projected/87509d6c-30c1-48aa-a256-54fa004adcb6-kube-api-access-cbd7h\") pod \"metallb-operator-webhook-server-8665ccc68-62qpd\" (UID: \"87509d6c-30c1-48aa-a256-54fa004adcb6\") " pod="metallb-system/metallb-operator-webhook-server-8665ccc68-62qpd" Mar 19 12:29:21.192634 master-0 kubenswrapper[31830]: I0319 12:29:21.192628 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/87509d6c-30c1-48aa-a256-54fa004adcb6-webhook-cert\") pod \"metallb-operator-webhook-server-8665ccc68-62qpd\" (UID: \"87509d6c-30c1-48aa-a256-54fa004adcb6\") " pod="metallb-system/metallb-operator-webhook-server-8665ccc68-62qpd" Mar 19 12:29:21.192936 master-0 kubenswrapper[31830]: I0319 12:29:21.192697 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/87509d6c-30c1-48aa-a256-54fa004adcb6-apiservice-cert\") pod \"metallb-operator-webhook-server-8665ccc68-62qpd\" (UID: \"87509d6c-30c1-48aa-a256-54fa004adcb6\") " pod="metallb-system/metallb-operator-webhook-server-8665ccc68-62qpd" Mar 19 12:29:21.229530 master-0 kubenswrapper[31830]: I0319 12:29:21.228254 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-8665ccc68-62qpd"] Mar 19 12:29:21.300720 master-0 kubenswrapper[31830]: I0319 12:29:21.299381 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/87509d6c-30c1-48aa-a256-54fa004adcb6-apiservice-cert\") pod \"metallb-operator-webhook-server-8665ccc68-62qpd\" (UID: \"87509d6c-30c1-48aa-a256-54fa004adcb6\") " pod="metallb-system/metallb-operator-webhook-server-8665ccc68-62qpd" Mar 19 12:29:21.300720 master-0 kubenswrapper[31830]: I0319 12:29:21.299490 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbd7h\" (UniqueName: \"kubernetes.io/projected/87509d6c-30c1-48aa-a256-54fa004adcb6-kube-api-access-cbd7h\") pod \"metallb-operator-webhook-server-8665ccc68-62qpd\" (UID: \"87509d6c-30c1-48aa-a256-54fa004adcb6\") " pod="metallb-system/metallb-operator-webhook-server-8665ccc68-62qpd" Mar 19 12:29:21.300720 master-0 kubenswrapper[31830]: I0319 12:29:21.299590 31830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/87509d6c-30c1-48aa-a256-54fa004adcb6-webhook-cert\") pod \"metallb-operator-webhook-server-8665ccc68-62qpd\" (UID: \"87509d6c-30c1-48aa-a256-54fa004adcb6\") " pod="metallb-system/metallb-operator-webhook-server-8665ccc68-62qpd" Mar 19 12:29:21.312847 master-0 kubenswrapper[31830]: I0319 12:29:21.309860 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/87509d6c-30c1-48aa-a256-54fa004adcb6-webhook-cert\") pod \"metallb-operator-webhook-server-8665ccc68-62qpd\" (UID: \"87509d6c-30c1-48aa-a256-54fa004adcb6\") " pod="metallb-system/metallb-operator-webhook-server-8665ccc68-62qpd" Mar 19 12:29:21.324819 master-0 kubenswrapper[31830]: I0319 12:29:21.321447 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/87509d6c-30c1-48aa-a256-54fa004adcb6-apiservice-cert\") pod \"metallb-operator-webhook-server-8665ccc68-62qpd\" (UID: \"87509d6c-30c1-48aa-a256-54fa004adcb6\") " pod="metallb-system/metallb-operator-webhook-server-8665ccc68-62qpd" Mar 19 12:29:21.330804 master-0 kubenswrapper[31830]: I0319 12:29:21.330738 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbd7h\" (UniqueName: \"kubernetes.io/projected/87509d6c-30c1-48aa-a256-54fa004adcb6-kube-api-access-cbd7h\") pod \"metallb-operator-webhook-server-8665ccc68-62qpd\" (UID: \"87509d6c-30c1-48aa-a256-54fa004adcb6\") " pod="metallb-system/metallb-operator-webhook-server-8665ccc68-62qpd" Mar 19 12:29:21.493104 master-0 kubenswrapper[31830]: I0319 12:29:21.493034 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-8665ccc68-62qpd" Mar 19 12:29:24.969200 master-0 kubenswrapper[31830]: I0319 12:29:24.969134 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-pt284" event={"ID":"5dc7b064-f6f6-42ab-9901-9fccd9ece370","Type":"ContainerStarted","Data":"69649c0b081cf4a08059a6ca4e60b593c3e14a1f39ac83f07c3c04a9202ab4fc"} Mar 19 12:29:24.974098 master-0 kubenswrapper[31830]: I0319 12:29:24.970128 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-pt284" Mar 19 12:29:25.017869 master-0 kubenswrapper[31830]: I0319 12:29:25.016234 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-pt284" podStartSLOduration=1.5238911370000001 podStartE2EDuration="11.016211704s" podCreationTimestamp="2026-03-19 12:29:14 +0000 UTC" firstStartedPulling="2026-03-19 12:29:15.111007736 +0000 UTC m=+893.659968440" lastFinishedPulling="2026-03-19 12:29:24.603328303 +0000 UTC m=+903.152289007" observedRunningTime="2026-03-19 12:29:25.005768069 +0000 UTC m=+903.554728773" watchObservedRunningTime="2026-03-19 12:29:25.016211704 +0000 UTC m=+903.565172408" Mar 19 12:29:25.136624 master-0 kubenswrapper[31830]: I0319 12:29:25.136498 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-8665ccc68-62qpd"] Mar 19 12:29:25.192156 master-0 kubenswrapper[31830]: I0319 12:29:25.192094 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-8ddbf4b7-fw4vt"] Mar 19 12:29:25.207703 master-0 kubenswrapper[31830]: W0319 12:29:25.207650 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda25ef66c_55db_41fb_83bc_be7e7981145b.slice/crio-ebe2b431397a3ea77097b9e2feeacacb069f3db48b20aadc4e592f9685888c64 WatchSource:0}: Error finding container ebe2b431397a3ea77097b9e2feeacacb069f3db48b20aadc4e592f9685888c64: Status 404 returned error can't find the container with id ebe2b431397a3ea77097b9e2feeacacb069f3db48b20aadc4e592f9685888c64 Mar 19 12:29:25.337554 master-0 kubenswrapper[31830]: I0319 12:29:25.336659 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-bkp8q"] Mar 19 12:29:25.337841 master-0 kubenswrapper[31830]: I0319 12:29:25.337598 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-bkp8q" Mar 19 12:29:25.355584 master-0 kubenswrapper[31830]: I0319 12:29:25.355459 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n48gr\" (UniqueName: \"kubernetes.io/projected/12af49a4-e6e6-420a-a197-5df04713f966-kube-api-access-n48gr\") pod \"cert-manager-545d4d4674-bkp8q\" (UID: \"12af49a4-e6e6-420a-a197-5df04713f966\") " pod="cert-manager/cert-manager-545d4d4674-bkp8q" Mar 19 12:29:25.355584 master-0 kubenswrapper[31830]: I0319 12:29:25.355496 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/12af49a4-e6e6-420a-a197-5df04713f966-bound-sa-token\") pod \"cert-manager-545d4d4674-bkp8q\" (UID: \"12af49a4-e6e6-420a-a197-5df04713f966\") " pod="cert-manager/cert-manager-545d4d4674-bkp8q" Mar 19 12:29:25.414913 master-0 kubenswrapper[31830]: I0319 12:29:25.414847 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-bkp8q"] Mar 19 12:29:25.459817 master-0 kubenswrapper[31830]: I0319 12:29:25.458896 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n48gr\" (UniqueName: \"kubernetes.io/projected/12af49a4-e6e6-420a-a197-5df04713f966-kube-api-access-n48gr\") pod \"cert-manager-545d4d4674-bkp8q\" (UID: \"12af49a4-e6e6-420a-a197-5df04713f966\") " pod="cert-manager/cert-manager-545d4d4674-bkp8q" Mar 19 12:29:25.459817 master-0 kubenswrapper[31830]: I0319 12:29:25.458947 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/12af49a4-e6e6-420a-a197-5df04713f966-bound-sa-token\") pod \"cert-manager-545d4d4674-bkp8q\" (UID: \"12af49a4-e6e6-420a-a197-5df04713f966\") " pod="cert-manager/cert-manager-545d4d4674-bkp8q" Mar 19 12:29:25.514765 master-0 kubenswrapper[31830]: I0319 12:29:25.514705 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/12af49a4-e6e6-420a-a197-5df04713f966-bound-sa-token\") pod \"cert-manager-545d4d4674-bkp8q\" (UID: \"12af49a4-e6e6-420a-a197-5df04713f966\") " pod="cert-manager/cert-manager-545d4d4674-bkp8q" Mar 19 12:29:25.516383 master-0 kubenswrapper[31830]: I0319 12:29:25.516246 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n48gr\" (UniqueName: \"kubernetes.io/projected/12af49a4-e6e6-420a-a197-5df04713f966-kube-api-access-n48gr\") pod \"cert-manager-545d4d4674-bkp8q\" (UID: \"12af49a4-e6e6-420a-a197-5df04713f966\") " pod="cert-manager/cert-manager-545d4d4674-bkp8q" Mar 19 12:29:25.716871 master-0 kubenswrapper[31830]: I0319 12:29:25.716323 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-bkp8q" Mar 19 12:29:26.004820 master-0 kubenswrapper[31830]: I0319 12:29:26.004294 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-8wgzd" event={"ID":"4d51abd0-9f7e-445e-aea5-9845bf559ba9","Type":"ContainerStarted","Data":"b8a0cc84f1f5dfbadb4c19a28c68ed6ad4b1a243c601d88bb558c8de528d26e8"} Mar 19 12:29:26.014244 master-0 kubenswrapper[31830]: I0319 12:29:26.014116 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-8ddbf4b7-fw4vt" event={"ID":"a25ef66c-55db-41fb-83bc-be7e7981145b","Type":"ContainerStarted","Data":"ebe2b431397a3ea77097b9e2feeacacb069f3db48b20aadc4e592f9685888c64"} Mar 19 12:29:26.022828 master-0 kubenswrapper[31830]: I0319 12:29:26.020131 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-8665ccc68-62qpd" event={"ID":"87509d6c-30c1-48aa-a256-54fa004adcb6","Type":"ContainerStarted","Data":"d8b864cb456bc210abc03100a85fc54d3df50b3a7aa23c1379585927aaf566c1"} Mar 19 12:29:26.038767 master-0 kubenswrapper[31830]: I0319 12:29:26.036776 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-8wgzd" podStartSLOduration=2.684953691 podStartE2EDuration="10.036757873s" podCreationTimestamp="2026-03-19 12:29:16 +0000 UTC" firstStartedPulling="2026-03-19 12:29:17.26947631 +0000 UTC m=+895.818437014" lastFinishedPulling="2026-03-19 12:29:24.621280492 +0000 UTC m=+903.170241196" observedRunningTime="2026-03-19 12:29:26.020575249 +0000 UTC m=+904.569535953" watchObservedRunningTime="2026-03-19 12:29:26.036757873 +0000 UTC m=+904.585718567" Mar 19 12:29:26.221674 master-0 kubenswrapper[31830]: I0319 12:29:26.221617 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-bkp8q"] Mar 19 12:29:27.051894 master-0 kubenswrapper[31830]: I0319 12:29:27.048595 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-bkp8q" event={"ID":"12af49a4-e6e6-420a-a197-5df04713f966","Type":"ContainerStarted","Data":"f0a4c422e98872503120bb7718c1ad294f71131b5f4ab33721ac3170ea017c08"} Mar 19 12:29:27.051894 master-0 kubenswrapper[31830]: I0319 12:29:27.048634 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-bkp8q" event={"ID":"12af49a4-e6e6-420a-a197-5df04713f966","Type":"ContainerStarted","Data":"e7a13ba5f01f5ff021aed7ee0b72175c17bdcc57302acdf1c385694e27a90617"} Mar 19 12:29:27.085352 master-0 kubenswrapper[31830]: I0319 12:29:27.082611 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-bkp8q" podStartSLOduration=2.082587689 podStartE2EDuration="2.082587689s" podCreationTimestamp="2026-03-19 12:29:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:29:27.080403712 +0000 UTC m=+905.629364416" watchObservedRunningTime="2026-03-19 12:29:27.082587689 +0000 UTC m=+905.631548413" Mar 19 12:29:29.655826 master-0 kubenswrapper[31830]: I0319 12:29:29.654137 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-pt284" Mar 19 12:29:33.291595 master-0 kubenswrapper[31830]: I0319 12:29:33.291543 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-controller-manager-8ddbf4b7-fw4vt" event={"ID":"a25ef66c-55db-41fb-83bc-be7e7981145b","Type":"ContainerStarted","Data":"d5cc69c187438b4e3fc95257255b29d0d1546c3b49bf80624c881033f90847f5"} Mar 19 12:29:33.292315 master-0 kubenswrapper[31830]: I0319 12:29:33.292296 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-8ddbf4b7-fw4vt" Mar 19 12:29:33.294585 master-0 kubenswrapper[31830]: I0319 12:29:33.294536 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-8665ccc68-62qpd" event={"ID":"87509d6c-30c1-48aa-a256-54fa004adcb6","Type":"ContainerStarted","Data":"b797e5adeedfe4dd484e3382dec4dc1a630fc8a943cb48fc5fc1c86d370fd604"} Mar 19 12:29:33.294705 master-0 kubenswrapper[31830]: I0319 12:29:33.294642 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-8665ccc68-62qpd" Mar 19 12:29:33.297316 master-0 kubenswrapper[31830]: I0319 12:29:33.297280 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-jx76l" event={"ID":"dd31e5af-9ecd-4aee-b004-dff990a8c353","Type":"ContainerStarted","Data":"aaa9008966a877d3315c744cf54b899dbdb8accb579d0a124eabcff02346456a"} Mar 19 12:29:33.323074 master-0 kubenswrapper[31830]: I0319 12:29:33.323006 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-8ddbf4b7-fw4vt" podStartSLOduration=5.50115537 podStartE2EDuration="13.322960291s" podCreationTimestamp="2026-03-19 12:29:20 +0000 UTC" firstStartedPulling="2026-03-19 12:29:25.2173793 +0000 UTC m=+903.766339994" lastFinishedPulling="2026-03-19 12:29:33.039184221 +0000 UTC m=+911.588144915" observedRunningTime="2026-03-19 12:29:33.320068661 +0000 UTC m=+911.869029375" watchObservedRunningTime="2026-03-19 12:29:33.322960291 +0000 UTC m=+911.871920995" Mar 19 12:29:33.353706 master-0 kubenswrapper[31830]: I0319 12:29:33.353637 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-8665ccc68-62qpd" podStartSLOduration=4.479843637 podStartE2EDuration="12.353618646s" podCreationTimestamp="2026-03-19 12:29:21 +0000 UTC" firstStartedPulling="2026-03-19 12:29:25.210395423 +0000 UTC m=+903.759356127" lastFinishedPulling="2026-03-19 12:29:33.084170432 +0000 UTC m=+911.633131136" observedRunningTime="2026-03-19 12:29:33.349866629 +0000 UTC m=+911.898827333" watchObservedRunningTime="2026-03-19 12:29:33.353618646 +0000 UTC m=+911.902579350" Mar 19 12:29:33.384723 master-0 kubenswrapper[31830]: I0319 12:29:33.384622 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-796d4cfff4-jx76l" podStartSLOduration=2.27724045 podStartE2EDuration="15.384600491s" podCreationTimestamp="2026-03-19 12:29:18 +0000 UTC" firstStartedPulling="2026-03-19 12:29:19.927057812 +0000 UTC m=+898.476018516" lastFinishedPulling="2026-03-19 12:29:33.034417853 +0000 UTC m=+911.583378557" observedRunningTime="2026-03-19 12:29:33.376836699 +0000 UTC m=+911.925797423" watchObservedRunningTime="2026-03-19 12:29:33.384600491 +0000 UTC m=+911.933561205" Mar 19 12:29:34.790846 master-0 kubenswrapper[31830]: I0319 12:29:34.787925 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-nd8l4"] Mar 19 12:29:34.790846 master-0 
kubenswrapper[31830]: I0319 12:29:34.789304 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-nd8l4" Mar 19 12:29:34.793744 master-0 kubenswrapper[31830]: I0319 12:29:34.792716 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Mar 19 12:29:34.793744 master-0 kubenswrapper[31830]: I0319 12:29:34.792897 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Mar 19 12:29:34.814739 master-0 kubenswrapper[31830]: I0319 12:29:34.814645 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-nd8l4"] Mar 19 12:29:34.955664 master-0 kubenswrapper[31830]: I0319 12:29:34.955619 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2cm5\" (UniqueName: \"kubernetes.io/projected/4bf2e991-18d5-4b0c-a386-31a336f80b6d-kube-api-access-b2cm5\") pod \"obo-prometheus-operator-8ff7d675-nd8l4\" (UID: \"4bf2e991-18d5-4b0c-a386-31a336f80b6d\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-nd8l4" Mar 19 12:29:35.057976 master-0 kubenswrapper[31830]: I0319 12:29:35.057879 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2cm5\" (UniqueName: \"kubernetes.io/projected/4bf2e991-18d5-4b0c-a386-31a336f80b6d-kube-api-access-b2cm5\") pod \"obo-prometheus-operator-8ff7d675-nd8l4\" (UID: \"4bf2e991-18d5-4b0c-a386-31a336f80b6d\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-nd8l4" Mar 19 12:29:35.080719 master-0 kubenswrapper[31830]: I0319 12:29:35.080659 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2cm5\" (UniqueName: \"kubernetes.io/projected/4bf2e991-18d5-4b0c-a386-31a336f80b6d-kube-api-access-b2cm5\") pod \"obo-prometheus-operator-8ff7d675-nd8l4\" (UID: \"4bf2e991-18d5-4b0c-a386-31a336f80b6d\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-nd8l4" Mar 19 12:29:35.110754 master-0 kubenswrapper[31830]: I0319 12:29:35.110699 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-nd8l4" Mar 19 12:29:35.167820 master-0 kubenswrapper[31830]: I0319 12:29:35.167096 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7df9ddf467-6x964"] Mar 19 12:29:35.172213 master-0 kubenswrapper[31830]: I0319 12:29:35.168040 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7df9ddf467-6x964" Mar 19 12:29:35.175895 master-0 kubenswrapper[31830]: I0319 12:29:35.174057 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Mar 19 12:29:35.179726 master-0 kubenswrapper[31830]: I0319 12:29:35.179649 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7df9ddf467-cfnjg"] Mar 19 12:29:35.182148 master-0 kubenswrapper[31830]: I0319 12:29:35.181154 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7df9ddf467-cfnjg" Mar 19 12:29:35.212826 master-0 kubenswrapper[31830]: I0319 12:29:35.211888 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7df9ddf467-6x964"] Mar 19 12:29:35.265346 master-0 kubenswrapper[31830]: I0319 12:29:35.264043 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7df9ddf467-cfnjg"] Mar 19 12:29:35.378890 master-0 kubenswrapper[31830]: I0319 12:29:35.365578 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6583b411-b800-4794-abae-ff091fa2959a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7df9ddf467-6x964\" (UID: \"6583b411-b800-4794-abae-ff091fa2959a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7df9ddf467-6x964" Mar 19 12:29:35.378890 master-0 kubenswrapper[31830]: I0319 12:29:35.365657 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/db169766-1f49-4387-b078-a1899d4161b9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7df9ddf467-cfnjg\" (UID: \"db169766-1f49-4387-b078-a1899d4161b9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7df9ddf467-cfnjg" Mar 19 12:29:35.378890 master-0 kubenswrapper[31830]: I0319 12:29:35.365681 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6583b411-b800-4794-abae-ff091fa2959a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7df9ddf467-6x964\" (UID: \"6583b411-b800-4794-abae-ff091fa2959a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7df9ddf467-6x964" Mar 19 12:29:35.378890 master-0 kubenswrapper[31830]: I0319 12:29:35.365747 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/db169766-1f49-4387-b078-a1899d4161b9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7df9ddf467-cfnjg\" (UID: \"db169766-1f49-4387-b078-a1899d4161b9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7df9ddf467-cfnjg" Mar 19 12:29:35.468920 master-0 kubenswrapper[31830]: I0319 12:29:35.468861 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/db169766-1f49-4387-b078-a1899d4161b9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7df9ddf467-cfnjg\" (UID: \"db169766-1f49-4387-b078-a1899d4161b9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7df9ddf467-cfnjg" Mar 19 12:29:35.469162 master-0 kubenswrapper[31830]: I0319 12:29:35.468932 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6583b411-b800-4794-abae-ff091fa2959a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7df9ddf467-6x964\" (UID: \"6583b411-b800-4794-abae-ff091fa2959a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7df9ddf467-6x964" Mar 19 12:29:35.469222 master-0 kubenswrapper[31830]: I0319 12:29:35.469196 31830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/db169766-1f49-4387-b078-a1899d4161b9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7df9ddf467-cfnjg\" (UID: \"db169766-1f49-4387-b078-a1899d4161b9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7df9ddf467-cfnjg" Mar 19 12:29:35.469324 master-0 kubenswrapper[31830]: I0319 12:29:35.469301 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6583b411-b800-4794-abae-ff091fa2959a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7df9ddf467-6x964\" (UID: \"6583b411-b800-4794-abae-ff091fa2959a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7df9ddf467-6x964" Mar 19 12:29:35.475817 master-0 kubenswrapper[31830]: I0319 12:29:35.473145 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6583b411-b800-4794-abae-ff091fa2959a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7df9ddf467-6x964\" (UID: \"6583b411-b800-4794-abae-ff091fa2959a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7df9ddf467-6x964" Mar 19 12:29:35.496822 master-0 kubenswrapper[31830]: I0319 12:29:35.493000 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/db169766-1f49-4387-b078-a1899d4161b9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7df9ddf467-cfnjg\" (UID: \"db169766-1f49-4387-b078-a1899d4161b9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7df9ddf467-cfnjg" Mar 19 12:29:35.496822 master-0 kubenswrapper[31830]: I0319 12:29:35.493424 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/db169766-1f49-4387-b078-a1899d4161b9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7df9ddf467-cfnjg\" (UID: \"db169766-1f49-4387-b078-a1899d4161b9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7df9ddf467-cfnjg" Mar 19 12:29:35.496822 master-0 kubenswrapper[31830]: I0319 12:29:35.493509 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6583b411-b800-4794-abae-ff091fa2959a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7df9ddf467-6x964\" (UID: \"6583b411-b800-4794-abae-ff091fa2959a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7df9ddf467-6x964" Mar 19 12:29:35.496822 master-0 kubenswrapper[31830]: I0319 12:29:35.494855 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7df9ddf467-6x964" Mar 19 12:29:35.510894 master-0 kubenswrapper[31830]: I0319 12:29:35.503571 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-npvcs"] Mar 19 12:29:35.510894 master-0 kubenswrapper[31830]: I0319 12:29:35.505048 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-npvcs" Mar 19 12:29:35.510894 master-0 kubenswrapper[31830]: I0319 12:29:35.508694 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Mar 19 12:29:35.518014 master-0 kubenswrapper[31830]: I0319 12:29:35.516553 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7df9ddf467-cfnjg" Mar 19 12:29:35.551825 master-0 kubenswrapper[31830]: I0319 12:29:35.548508 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-npvcs"] Mar 19 12:29:35.646892 master-0 kubenswrapper[31830]: I0319 12:29:35.641936 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-nd8l4"] Mar 19 12:29:35.676822 master-0 kubenswrapper[31830]: I0319 12:29:35.672605 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-777ps\" (UniqueName: \"kubernetes.io/projected/4bf21083-6e5c-45da-be01-a74ae41f18a1-kube-api-access-777ps\") pod \"observability-operator-6dd7dd855f-npvcs\" (UID: \"4bf21083-6e5c-45da-be01-a74ae41f18a1\") " pod="openshift-operators/observability-operator-6dd7dd855f-npvcs" Mar 19 12:29:35.676822 master-0 kubenswrapper[31830]: I0319 12:29:35.672677 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/4bf21083-6e5c-45da-be01-a74ae41f18a1-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-npvcs\" (UID: \"4bf21083-6e5c-45da-be01-a74ae41f18a1\") " pod="openshift-operators/observability-operator-6dd7dd855f-npvcs" Mar 19 12:29:35.774817 master-0 kubenswrapper[31830]: I0319 12:29:35.774744 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-777ps\" (UniqueName: \"kubernetes.io/projected/4bf21083-6e5c-45da-be01-a74ae41f18a1-kube-api-access-777ps\") pod \"observability-operator-6dd7dd855f-npvcs\" (UID: \"4bf21083-6e5c-45da-be01-a74ae41f18a1\") " pod="openshift-operators/observability-operator-6dd7dd855f-npvcs" Mar 19 12:29:35.775063 master-0 kubenswrapper[31830]: I0319 12:29:35.774831 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/4bf21083-6e5c-45da-be01-a74ae41f18a1-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-npvcs\" (UID: \"4bf21083-6e5c-45da-be01-a74ae41f18a1\") " pod="openshift-operators/observability-operator-6dd7dd855f-npvcs" Mar 19 12:29:35.787819 master-0 kubenswrapper[31830]: I0319 12:29:35.780479 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/4bf21083-6e5c-45da-be01-a74ae41f18a1-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-npvcs\" (UID: \"4bf21083-6e5c-45da-be01-a74ae41f18a1\") " pod="openshift-operators/observability-operator-6dd7dd855f-npvcs" Mar 19 12:29:35.812668 master-0 kubenswrapper[31830]: I0319 12:29:35.812605 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-777ps\" (UniqueName: \"kubernetes.io/projected/4bf21083-6e5c-45da-be01-a74ae41f18a1-kube-api-access-777ps\") pod \"observability-operator-6dd7dd855f-npvcs\" (UID: 
\"4bf21083-6e5c-45da-be01-a74ae41f18a1\") " pod="openshift-operators/observability-operator-6dd7dd855f-npvcs" Mar 19 12:29:35.874155 master-0 kubenswrapper[31830]: I0319 12:29:35.868593 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-npvcs" Mar 19 12:29:36.151832 master-0 kubenswrapper[31830]: I0319 12:29:36.150954 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-bfb5f57db-jhrd5"] Mar 19 12:29:36.158973 master-0 kubenswrapper[31830]: I0319 12:29:36.157244 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-bfb5f57db-jhrd5" Mar 19 12:29:36.159512 master-0 kubenswrapper[31830]: I0319 12:29:36.159467 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-service-cert" Mar 19 12:29:36.181415 master-0 kubenswrapper[31830]: I0319 12:29:36.176420 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-bfb5f57db-jhrd5"] Mar 19 12:29:36.210084 master-0 kubenswrapper[31830]: W0319 12:29:36.210025 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb169766_1f49_4387_b078_a1899d4161b9.slice/crio-eacc655975bb480fcb473e7a695af3234554b03805d3b5766dad2aed44123a78 WatchSource:0}: Error finding container eacc655975bb480fcb473e7a695af3234554b03805d3b5766dad2aed44123a78: Status 404 returned error can't find the container with id eacc655975bb480fcb473e7a695af3234554b03805d3b5766dad2aed44123a78 Mar 19 12:29:36.214286 master-0 kubenswrapper[31830]: I0319 12:29:36.213567 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7df9ddf467-cfnjg"] Mar 19 12:29:36.296760 master-0 kubenswrapper[31830]: I0319 12:29:36.296303 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1c499153-14fe-45db-9223-04a583ba17c2-apiservice-cert\") pod \"perses-operator-bfb5f57db-jhrd5\" (UID: \"1c499153-14fe-45db-9223-04a583ba17c2\") " pod="openshift-operators/perses-operator-bfb5f57db-jhrd5" Mar 19 12:29:36.297053 master-0 kubenswrapper[31830]: I0319 12:29:36.296827 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1c499153-14fe-45db-9223-04a583ba17c2-webhook-cert\") pod \"perses-operator-bfb5f57db-jhrd5\" (UID: \"1c499153-14fe-45db-9223-04a583ba17c2\") " pod="openshift-operators/perses-operator-bfb5f57db-jhrd5" Mar 19 12:29:36.297053 master-0 kubenswrapper[31830]: I0319 12:29:36.296910 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/1c499153-14fe-45db-9223-04a583ba17c2-openshift-service-ca\") pod \"perses-operator-bfb5f57db-jhrd5\" (UID: \"1c499153-14fe-45db-9223-04a583ba17c2\") " pod="openshift-operators/perses-operator-bfb5f57db-jhrd5" Mar 19 12:29:36.297053 master-0 kubenswrapper[31830]: I0319 12:29:36.296949 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8dhj\" (UniqueName: \"kubernetes.io/projected/1c499153-14fe-45db-9223-04a583ba17c2-kube-api-access-k8dhj\") pod \"perses-operator-bfb5f57db-jhrd5\" (UID: 
\"1c499153-14fe-45db-9223-04a583ba17c2\") " pod="openshift-operators/perses-operator-bfb5f57db-jhrd5" Mar 19 12:29:36.310110 master-0 kubenswrapper[31830]: I0319 12:29:36.310062 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7df9ddf467-6x964"] Mar 19 12:29:36.332431 master-0 kubenswrapper[31830]: I0319 12:29:36.332382 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7df9ddf467-6x964" event={"ID":"6583b411-b800-4794-abae-ff091fa2959a","Type":"ContainerStarted","Data":"6c68764e1e4455305007b3b158cc852b2014e4952254db20f18fe2957b67995d"} Mar 19 12:29:36.333374 master-0 kubenswrapper[31830]: I0319 12:29:36.333356 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-8ff7d675-nd8l4" event={"ID":"4bf2e991-18d5-4b0c-a386-31a336f80b6d","Type":"ContainerStarted","Data":"ce220f108b6df9a9052e69b52a3e8f57c5b315bd02ac64ec0a5cc74625b06d17"} Mar 19 12:29:36.334436 master-0 kubenswrapper[31830]: I0319 12:29:36.334413 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7df9ddf467-cfnjg" event={"ID":"db169766-1f49-4387-b078-a1899d4161b9","Type":"ContainerStarted","Data":"eacc655975bb480fcb473e7a695af3234554b03805d3b5766dad2aed44123a78"} Mar 19 12:29:36.398280 master-0 kubenswrapper[31830]: I0319 12:29:36.398215 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1c499153-14fe-45db-9223-04a583ba17c2-webhook-cert\") pod \"perses-operator-bfb5f57db-jhrd5\" (UID: \"1c499153-14fe-45db-9223-04a583ba17c2\") " pod="openshift-operators/perses-operator-bfb5f57db-jhrd5" Mar 19 12:29:36.398545 master-0 kubenswrapper[31830]: I0319 12:29:36.398298 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/1c499153-14fe-45db-9223-04a583ba17c2-openshift-service-ca\") pod \"perses-operator-bfb5f57db-jhrd5\" (UID: \"1c499153-14fe-45db-9223-04a583ba17c2\") " pod="openshift-operators/perses-operator-bfb5f57db-jhrd5" Mar 19 12:29:36.398545 master-0 kubenswrapper[31830]: I0319 12:29:36.398376 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8dhj\" (UniqueName: \"kubernetes.io/projected/1c499153-14fe-45db-9223-04a583ba17c2-kube-api-access-k8dhj\") pod \"perses-operator-bfb5f57db-jhrd5\" (UID: \"1c499153-14fe-45db-9223-04a583ba17c2\") " pod="openshift-operators/perses-operator-bfb5f57db-jhrd5" Mar 19 12:29:36.398545 master-0 kubenswrapper[31830]: I0319 12:29:36.398419 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1c499153-14fe-45db-9223-04a583ba17c2-apiservice-cert\") pod \"perses-operator-bfb5f57db-jhrd5\" (UID: \"1c499153-14fe-45db-9223-04a583ba17c2\") " pod="openshift-operators/perses-operator-bfb5f57db-jhrd5" Mar 19 12:29:36.401671 master-0 kubenswrapper[31830]: I0319 12:29:36.401611 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/1c499153-14fe-45db-9223-04a583ba17c2-openshift-service-ca\") pod \"perses-operator-bfb5f57db-jhrd5\" (UID: \"1c499153-14fe-45db-9223-04a583ba17c2\") " pod="openshift-operators/perses-operator-bfb5f57db-jhrd5" Mar 19 12:29:36.403552 
master-0 kubenswrapper[31830]: I0319 12:29:36.403512 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1c499153-14fe-45db-9223-04a583ba17c2-apiservice-cert\") pod \"perses-operator-bfb5f57db-jhrd5\" (UID: \"1c499153-14fe-45db-9223-04a583ba17c2\") " pod="openshift-operators/perses-operator-bfb5f57db-jhrd5" Mar 19 12:29:36.404386 master-0 kubenswrapper[31830]: I0319 12:29:36.404358 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1c499153-14fe-45db-9223-04a583ba17c2-webhook-cert\") pod \"perses-operator-bfb5f57db-jhrd5\" (UID: \"1c499153-14fe-45db-9223-04a583ba17c2\") " pod="openshift-operators/perses-operator-bfb5f57db-jhrd5" Mar 19 12:29:36.413693 master-0 kubenswrapper[31830]: I0319 12:29:36.413651 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8dhj\" (UniqueName: \"kubernetes.io/projected/1c499153-14fe-45db-9223-04a583ba17c2-kube-api-access-k8dhj\") pod \"perses-operator-bfb5f57db-jhrd5\" (UID: \"1c499153-14fe-45db-9223-04a583ba17c2\") " pod="openshift-operators/perses-operator-bfb5f57db-jhrd5" Mar 19 12:29:36.499464 master-0 kubenswrapper[31830]: I0319 12:29:36.499414 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-npvcs"] Mar 19 12:29:36.501641 master-0 kubenswrapper[31830]: I0319 12:29:36.501606 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-bfb5f57db-jhrd5" Mar 19 12:29:36.502992 master-0 kubenswrapper[31830]: W0319 12:29:36.502949 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bf21083_6e5c_45da_be01_a74ae41f18a1.slice/crio-95102df4052b70f9261089611baae77fc1dfabc5e6eeee304eb6a62e657dd302 WatchSource:0}: Error finding container 95102df4052b70f9261089611baae77fc1dfabc5e6eeee304eb6a62e657dd302: Status 404 returned error can't find the container with id 95102df4052b70f9261089611baae77fc1dfabc5e6eeee304eb6a62e657dd302 Mar 19 12:29:36.922856 master-0 kubenswrapper[31830]: W0319 12:29:36.922788 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c499153_14fe_45db_9223_04a583ba17c2.slice/crio-1486b6f8d70905ec3831ca73fe9f2778b671e90fe85c4cc076d66a7db32c68b4 WatchSource:0}: Error finding container 1486b6f8d70905ec3831ca73fe9f2778b671e90fe85c4cc076d66a7db32c68b4: Status 404 returned error can't find the container with id 1486b6f8d70905ec3831ca73fe9f2778b671e90fe85c4cc076d66a7db32c68b4 Mar 19 12:29:36.923646 master-0 kubenswrapper[31830]: I0319 12:29:36.923598 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-bfb5f57db-jhrd5"] Mar 19 12:29:37.344579 master-0 kubenswrapper[31830]: I0319 12:29:37.344512 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-6dd7dd855f-npvcs" event={"ID":"4bf21083-6e5c-45da-be01-a74ae41f18a1","Type":"ContainerStarted","Data":"95102df4052b70f9261089611baae77fc1dfabc5e6eeee304eb6a62e657dd302"} Mar 19 12:29:37.346445 master-0 kubenswrapper[31830]: I0319 12:29:37.346401 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-bfb5f57db-jhrd5" 
event={"ID":"1c499153-14fe-45db-9223-04a583ba17c2","Type":"ContainerStarted","Data":"1486b6f8d70905ec3831ca73fe9f2778b671e90fe85c4cc076d66a7db32c68b4"} Mar 19 12:29:46.441511 master-0 kubenswrapper[31830]: I0319 12:29:46.441453 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7df9ddf467-6x964" event={"ID":"6583b411-b800-4794-abae-ff091fa2959a","Type":"ContainerStarted","Data":"2a6d6250ac9ab9fd91757c8b11b29bf3f0a6400ab548fd749b26d13baa2a25ec"} Mar 19 12:29:46.454632 master-0 kubenswrapper[31830]: I0319 12:29:46.448500 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-6dd7dd855f-npvcs" event={"ID":"4bf21083-6e5c-45da-be01-a74ae41f18a1","Type":"ContainerStarted","Data":"ad9f7f8e09081122914a87c88919da7285b9944c8a54213bbe54945d970d0293"} Mar 19 12:29:46.454632 master-0 kubenswrapper[31830]: I0319 12:29:46.448825 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-6dd7dd855f-npvcs" Mar 19 12:29:46.454632 master-0 kubenswrapper[31830]: I0319 12:29:46.453885 31830 patch_prober.go:28] interesting pod/observability-operator-6dd7dd855f-npvcs container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.128.0.132:8081/healthz\": dial tcp 10.128.0.132:8081: connect: connection refused" start-of-body= Mar 19 12:29:46.454632 master-0 kubenswrapper[31830]: I0319 12:29:46.453946 31830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-6dd7dd855f-npvcs" podUID="4bf21083-6e5c-45da-be01-a74ae41f18a1" containerName="operator" probeResult="failure" output="Get \"http://10.128.0.132:8081/healthz\": dial tcp 10.128.0.132:8081: connect: connection refused" Mar 19 12:29:46.461954 master-0 kubenswrapper[31830]: I0319 12:29:46.461306 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-bfb5f57db-jhrd5" event={"ID":"1c499153-14fe-45db-9223-04a583ba17c2","Type":"ContainerStarted","Data":"06b972c9db8088596e4cf6342e036a0bd3fbb5db1263dd08ca39a0928786c7c0"} Mar 19 12:29:46.462907 master-0 kubenswrapper[31830]: I0319 12:29:46.462235 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-bfb5f57db-jhrd5" Mar 19 12:29:46.482629 master-0 kubenswrapper[31830]: I0319 12:29:46.482545 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7df9ddf467-6x964" podStartSLOduration=1.723568577 podStartE2EDuration="11.48252285s" podCreationTimestamp="2026-03-19 12:29:35 +0000 UTC" firstStartedPulling="2026-03-19 12:29:36.324080953 +0000 UTC m=+914.873041657" lastFinishedPulling="2026-03-19 12:29:46.083035226 +0000 UTC m=+924.631995930" observedRunningTime="2026-03-19 12:29:46.476573224 +0000 UTC m=+925.025533928" watchObservedRunningTime="2026-03-19 12:29:46.48252285 +0000 UTC m=+925.031483564" Mar 19 12:29:46.521639 master-0 kubenswrapper[31830]: I0319 12:29:46.518909 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-bfb5f57db-jhrd5" podStartSLOduration=1.361898799 podStartE2EDuration="10.518890053s" podCreationTimestamp="2026-03-19 12:29:36 +0000 UTC" firstStartedPulling="2026-03-19 12:29:36.925021951 +0000 UTC m=+915.473982655" lastFinishedPulling="2026-03-19 12:29:46.082013205 +0000 UTC m=+924.630973909" 
observedRunningTime="2026-03-19 12:29:46.517485918 +0000 UTC m=+925.066446622" watchObservedRunningTime="2026-03-19 12:29:46.518890053 +0000 UTC m=+925.067850777" Mar 19 12:29:47.469889 master-0 kubenswrapper[31830]: I0319 12:29:47.469821 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-8ff7d675-nd8l4" event={"ID":"4bf2e991-18d5-4b0c-a386-31a336f80b6d","Type":"ContainerStarted","Data":"9686a7ffa4774522ebe9b754f62194caa6e70e9d1f4bc446b1e19266860074d4"} Mar 19 12:29:47.471446 master-0 kubenswrapper[31830]: I0319 12:29:47.471357 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7df9ddf467-cfnjg" event={"ID":"db169766-1f49-4387-b078-a1899d4161b9","Type":"ContainerStarted","Data":"87ad02e819a60833e8fd633f89d777afb2e48c23eab2e3eb51bd3db6e95cb415"} Mar 19 12:29:47.474096 master-0 kubenswrapper[31830]: I0319 12:29:47.474062 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-6dd7dd855f-npvcs" Mar 19 12:29:47.489872 master-0 kubenswrapper[31830]: I0319 12:29:47.489751 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-8ff7d675-nd8l4" podStartSLOduration=3.128741867 podStartE2EDuration="13.489737133s" podCreationTimestamp="2026-03-19 12:29:34 +0000 UTC" firstStartedPulling="2026-03-19 12:29:35.681213728 +0000 UTC m=+914.230174432" lastFinishedPulling="2026-03-19 12:29:46.042208994 +0000 UTC m=+924.591169698" observedRunningTime="2026-03-19 12:29:47.487752891 +0000 UTC m=+926.036713615" watchObservedRunningTime="2026-03-19 12:29:47.489737133 +0000 UTC m=+926.038697837" Mar 19 12:29:47.490990 master-0 kubenswrapper[31830]: I0319 12:29:47.490958 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-6dd7dd855f-npvcs" podStartSLOduration=2.8555850080000003 podStartE2EDuration="12.490952191s" podCreationTimestamp="2026-03-19 12:29:35 +0000 UTC" firstStartedPulling="2026-03-19 12:29:36.510094647 +0000 UTC m=+915.059055351" lastFinishedPulling="2026-03-19 12:29:46.14546183 +0000 UTC m=+924.694422534" observedRunningTime="2026-03-19 12:29:46.56569195 +0000 UTC m=+925.114652664" watchObservedRunningTime="2026-03-19 12:29:47.490952191 +0000 UTC m=+926.039912895" Mar 19 12:29:47.512141 master-0 kubenswrapper[31830]: I0319 12:29:47.512060 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7df9ddf467-cfnjg" podStartSLOduration=2.678732119 podStartE2EDuration="12.512039428s" podCreationTimestamp="2026-03-19 12:29:35 +0000 UTC" firstStartedPulling="2026-03-19 12:29:36.212615931 +0000 UTC m=+914.761576635" lastFinishedPulling="2026-03-19 12:29:46.04592324 +0000 UTC m=+924.594883944" observedRunningTime="2026-03-19 12:29:47.50858242 +0000 UTC m=+926.057543144" watchObservedRunningTime="2026-03-19 12:29:47.512039428 +0000 UTC m=+926.061000142" Mar 19 12:29:51.499109 master-0 kubenswrapper[31830]: I0319 12:29:51.499025 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-8665ccc68-62qpd" Mar 19 12:29:56.504968 master-0 kubenswrapper[31830]: I0319 12:29:56.504917 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-bfb5f57db-jhrd5" Mar 19 12:30:10.947450 master-0 
kubenswrapper[31830]: I0319 12:30:10.947371 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-8ddbf4b7-fw4vt" Mar 19 12:30:18.932559 master-0 kubenswrapper[31830]: I0319 12:30:18.932491 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-hx5pt"] Mar 19 12:30:18.941075 master-0 kubenswrapper[31830]: I0319 12:30:18.940307 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-hx5pt" Mar 19 12:30:18.947820 master-0 kubenswrapper[31830]: I0319 12:30:18.944197 31830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 19 12:30:18.947820 master-0 kubenswrapper[31830]: I0319 12:30:18.944204 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 19 12:30:18.955109 master-0 kubenswrapper[31830]: I0319 12:30:18.954411 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-qltq6"] Mar 19 12:30:18.958836 master-0 kubenswrapper[31830]: I0319 12:30:18.955418 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qltq6" Mar 19 12:30:18.958836 master-0 kubenswrapper[31830]: I0319 12:30:18.957128 31830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 19 12:30:18.969551 master-0 kubenswrapper[31830]: I0319 12:30:18.967042 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-qltq6"] Mar 19 12:30:19.042458 master-0 kubenswrapper[31830]: I0319 12:30:19.041316 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/450fdf42-489c-4403-9c52-03c51471160c-reloader\") pod \"frr-k8s-hx5pt\" (UID: \"450fdf42-489c-4403-9c52-03c51471160c\") " pod="metallb-system/frr-k8s-hx5pt" Mar 19 12:30:19.042458 master-0 kubenswrapper[31830]: I0319 12:30:19.041367 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/450fdf42-489c-4403-9c52-03c51471160c-metrics\") pod \"frr-k8s-hx5pt\" (UID: \"450fdf42-489c-4403-9c52-03c51471160c\") " pod="metallb-system/frr-k8s-hx5pt" Mar 19 12:30:19.042458 master-0 kubenswrapper[31830]: I0319 12:30:19.041394 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r2hb\" (UniqueName: \"kubernetes.io/projected/450fdf42-489c-4403-9c52-03c51471160c-kube-api-access-4r2hb\") pod \"frr-k8s-hx5pt\" (UID: \"450fdf42-489c-4403-9c52-03c51471160c\") " pod="metallb-system/frr-k8s-hx5pt" Mar 19 12:30:19.042458 master-0 kubenswrapper[31830]: I0319 12:30:19.041532 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/450fdf42-489c-4403-9c52-03c51471160c-metrics-certs\") pod \"frr-k8s-hx5pt\" (UID: \"450fdf42-489c-4403-9c52-03c51471160c\") " pod="metallb-system/frr-k8s-hx5pt" Mar 19 12:30:19.042458 master-0 kubenswrapper[31830]: I0319 12:30:19.041580 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/450fdf42-489c-4403-9c52-03c51471160c-frr-sockets\") pod 
\"frr-k8s-hx5pt\" (UID: \"450fdf42-489c-4403-9c52-03c51471160c\") " pod="metallb-system/frr-k8s-hx5pt" Mar 19 12:30:19.042458 master-0 kubenswrapper[31830]: I0319 12:30:19.041603 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/450fdf42-489c-4403-9c52-03c51471160c-frr-conf\") pod \"frr-k8s-hx5pt\" (UID: \"450fdf42-489c-4403-9c52-03c51471160c\") " pod="metallb-system/frr-k8s-hx5pt" Mar 19 12:30:19.042458 master-0 kubenswrapper[31830]: I0319 12:30:19.041733 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/450fdf42-489c-4403-9c52-03c51471160c-frr-startup\") pod \"frr-k8s-hx5pt\" (UID: \"450fdf42-489c-4403-9c52-03c51471160c\") " pod="metallb-system/frr-k8s-hx5pt" Mar 19 12:30:19.048739 master-0 kubenswrapper[31830]: I0319 12:30:19.047690 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-fxx75"] Mar 19 12:30:19.049209 master-0 kubenswrapper[31830]: I0319 12:30:19.049092 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-fxx75" Mar 19 12:30:19.055883 master-0 kubenswrapper[31830]: I0319 12:30:19.052286 31830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 19 12:30:19.055883 master-0 kubenswrapper[31830]: I0319 12:30:19.053200 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 19 12:30:19.055883 master-0 kubenswrapper[31830]: I0319 12:30:19.053956 31830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 19 12:30:19.063826 master-0 kubenswrapper[31830]: I0319 12:30:19.063770 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-r5pn2"] Mar 19 12:30:19.064977 master-0 kubenswrapper[31830]: I0319 12:30:19.064943 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-r5pn2" Mar 19 12:30:19.067613 master-0 kubenswrapper[31830]: I0319 12:30:19.067566 31830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 19 12:30:19.118494 master-0 kubenswrapper[31830]: I0319 12:30:19.118440 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-r5pn2"] Mar 19 12:30:19.143403 master-0 kubenswrapper[31830]: I0319 12:30:19.143339 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/450fdf42-489c-4403-9c52-03c51471160c-frr-startup\") pod \"frr-k8s-hx5pt\" (UID: \"450fdf42-489c-4403-9c52-03c51471160c\") " pod="metallb-system/frr-k8s-hx5pt" Mar 19 12:30:19.143725 master-0 kubenswrapper[31830]: I0319 12:30:19.143675 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/450fdf42-489c-4403-9c52-03c51471160c-reloader\") pod \"frr-k8s-hx5pt\" (UID: \"450fdf42-489c-4403-9c52-03c51471160c\") " pod="metallb-system/frr-k8s-hx5pt" Mar 19 12:30:19.143805 master-0 kubenswrapper[31830]: I0319 12:30:19.143742 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7f0f4018-1edf-45aa-ae8d-9798bed919a2-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-qltq6\" (UID: \"7f0f4018-1edf-45aa-ae8d-9798bed919a2\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qltq6" Mar 19 12:30:19.143805 master-0 kubenswrapper[31830]: I0319 12:30:19.143763 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/450fdf42-489c-4403-9c52-03c51471160c-metrics\") pod \"frr-k8s-hx5pt\" (UID: \"450fdf42-489c-4403-9c52-03c51471160c\") " pod="metallb-system/frr-k8s-hx5pt" Mar 19 12:30:19.143889 master-0 kubenswrapper[31830]: I0319 12:30:19.143850 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r2hb\" (UniqueName: \"kubernetes.io/projected/450fdf42-489c-4403-9c52-03c51471160c-kube-api-access-4r2hb\") pod \"frr-k8s-hx5pt\" (UID: \"450fdf42-489c-4403-9c52-03c51471160c\") " pod="metallb-system/frr-k8s-hx5pt" Mar 19 12:30:19.144254 master-0 kubenswrapper[31830]: I0319 12:30:19.144209 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/450fdf42-489c-4403-9c52-03c51471160c-reloader\") pod \"frr-k8s-hx5pt\" (UID: \"450fdf42-489c-4403-9c52-03c51471160c\") " pod="metallb-system/frr-k8s-hx5pt" Mar 19 12:30:19.144405 master-0 kubenswrapper[31830]: I0319 12:30:19.144376 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/450fdf42-489c-4403-9c52-03c51471160c-metrics-certs\") pod \"frr-k8s-hx5pt\" (UID: \"450fdf42-489c-4403-9c52-03c51471160c\") " pod="metallb-system/frr-k8s-hx5pt" Mar 19 12:30:19.144472 master-0 kubenswrapper[31830]: I0319 12:30:19.144451 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/450fdf42-489c-4403-9c52-03c51471160c-frr-sockets\") pod \"frr-k8s-hx5pt\" (UID: \"450fdf42-489c-4403-9c52-03c51471160c\") " pod="metallb-system/frr-k8s-hx5pt" Mar 19 12:30:19.144639 master-0 kubenswrapper[31830]: I0319 12:30:19.144481 31830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/450fdf42-489c-4403-9c52-03c51471160c-frr-conf\") pod \"frr-k8s-hx5pt\" (UID: \"450fdf42-489c-4403-9c52-03c51471160c\") " pod="metallb-system/frr-k8s-hx5pt" Mar 19 12:30:19.144639 master-0 kubenswrapper[31830]: I0319 12:30:19.144509 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/450fdf42-489c-4403-9c52-03c51471160c-frr-startup\") pod \"frr-k8s-hx5pt\" (UID: \"450fdf42-489c-4403-9c52-03c51471160c\") " pod="metallb-system/frr-k8s-hx5pt" Mar 19 12:30:19.144639 master-0 kubenswrapper[31830]: I0319 12:30:19.144521 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvjh5\" (UniqueName: \"kubernetes.io/projected/7f0f4018-1edf-45aa-ae8d-9798bed919a2-kube-api-access-zvjh5\") pod \"frr-k8s-webhook-server-bcc4b6f68-qltq6\" (UID: \"7f0f4018-1edf-45aa-ae8d-9798bed919a2\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qltq6" Mar 19 12:30:19.144639 master-0 kubenswrapper[31830]: E0319 12:30:19.144563 31830 secret.go:189] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Mar 19 12:30:19.144639 master-0 kubenswrapper[31830]: E0319 12:30:19.144606 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/450fdf42-489c-4403-9c52-03c51471160c-metrics-certs podName:450fdf42-489c-4403-9c52-03c51471160c nodeName:}" failed. No retries permitted until 2026-03-19 12:30:19.644590915 +0000 UTC m=+958.193551619 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/450fdf42-489c-4403-9c52-03c51471160c-metrics-certs") pod "frr-k8s-hx5pt" (UID: "450fdf42-489c-4403-9c52-03c51471160c") : secret "frr-k8s-certs-secret" not found Mar 19 12:30:19.144811 master-0 kubenswrapper[31830]: I0319 12:30:19.144225 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/450fdf42-489c-4403-9c52-03c51471160c-metrics\") pod \"frr-k8s-hx5pt\" (UID: \"450fdf42-489c-4403-9c52-03c51471160c\") " pod="metallb-system/frr-k8s-hx5pt" Mar 19 12:30:19.144971 master-0 kubenswrapper[31830]: I0319 12:30:19.144938 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/450fdf42-489c-4403-9c52-03c51471160c-frr-conf\") pod \"frr-k8s-hx5pt\" (UID: \"450fdf42-489c-4403-9c52-03c51471160c\") " pod="metallb-system/frr-k8s-hx5pt" Mar 19 12:30:19.146041 master-0 kubenswrapper[31830]: I0319 12:30:19.146007 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/450fdf42-489c-4403-9c52-03c51471160c-frr-sockets\") pod \"frr-k8s-hx5pt\" (UID: \"450fdf42-489c-4403-9c52-03c51471160c\") " pod="metallb-system/frr-k8s-hx5pt" Mar 19 12:30:19.160274 master-0 kubenswrapper[31830]: I0319 12:30:19.160224 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r2hb\" (UniqueName: \"kubernetes.io/projected/450fdf42-489c-4403-9c52-03c51471160c-kube-api-access-4r2hb\") pod \"frr-k8s-hx5pt\" (UID: \"450fdf42-489c-4403-9c52-03c51471160c\") " pod="metallb-system/frr-k8s-hx5pt" Mar 19 12:30:19.245471 master-0 kubenswrapper[31830]: I0319 12:30:19.245425 31830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7dc06b24-66ec-4b57-88d2-90bb6d42bb60-metallb-excludel2\") pod \"speaker-fxx75\" (UID: \"7dc06b24-66ec-4b57-88d2-90bb6d42bb60\") " pod="metallb-system/speaker-fxx75" Mar 19 12:30:19.245711 master-0 kubenswrapper[31830]: I0319 12:30:19.245491 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t5cp\" (UniqueName: \"kubernetes.io/projected/b8442707-1048-49e7-883b-9dfc0c48eb15-kube-api-access-9t5cp\") pod \"controller-7bb4cc7c98-r5pn2\" (UID: \"b8442707-1048-49e7-883b-9dfc0c48eb15\") " pod="metallb-system/controller-7bb4cc7c98-r5pn2" Mar 19 12:30:19.245711 master-0 kubenswrapper[31830]: I0319 12:30:19.245589 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7f0f4018-1edf-45aa-ae8d-9798bed919a2-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-qltq6\" (UID: \"7f0f4018-1edf-45aa-ae8d-9798bed919a2\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qltq6" Mar 19 12:30:19.245711 master-0 kubenswrapper[31830]: I0319 12:30:19.245692 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b8442707-1048-49e7-883b-9dfc0c48eb15-cert\") pod \"controller-7bb4cc7c98-r5pn2\" (UID: \"b8442707-1048-49e7-883b-9dfc0c48eb15\") " pod="metallb-system/controller-7bb4cc7c98-r5pn2" Mar 19 12:30:19.245891 master-0 kubenswrapper[31830]: I0319 12:30:19.245758 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dc06b24-66ec-4b57-88d2-90bb6d42bb60-metrics-certs\") pod \"speaker-fxx75\" (UID: \"7dc06b24-66ec-4b57-88d2-90bb6d42bb60\") " pod="metallb-system/speaker-fxx75" Mar 19 12:30:19.245891 master-0 kubenswrapper[31830]: I0319 12:30:19.245831 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zv6d\" (UniqueName: \"kubernetes.io/projected/7dc06b24-66ec-4b57-88d2-90bb6d42bb60-kube-api-access-6zv6d\") pod \"speaker-fxx75\" (UID: \"7dc06b24-66ec-4b57-88d2-90bb6d42bb60\") " pod="metallb-system/speaker-fxx75" Mar 19 12:30:19.246005 master-0 kubenswrapper[31830]: I0319 12:30:19.245897 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvjh5\" (UniqueName: \"kubernetes.io/projected/7f0f4018-1edf-45aa-ae8d-9798bed919a2-kube-api-access-zvjh5\") pod \"frr-k8s-webhook-server-bcc4b6f68-qltq6\" (UID: \"7f0f4018-1edf-45aa-ae8d-9798bed919a2\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qltq6" Mar 19 12:30:19.246005 master-0 kubenswrapper[31830]: I0319 12:30:19.245931 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7dc06b24-66ec-4b57-88d2-90bb6d42bb60-memberlist\") pod \"speaker-fxx75\" (UID: \"7dc06b24-66ec-4b57-88d2-90bb6d42bb60\") " pod="metallb-system/speaker-fxx75" Mar 19 12:30:19.246005 master-0 kubenswrapper[31830]: I0319 12:30:19.245979 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8442707-1048-49e7-883b-9dfc0c48eb15-metrics-certs\") pod \"controller-7bb4cc7c98-r5pn2\" (UID: \"b8442707-1048-49e7-883b-9dfc0c48eb15\") " 
pod="metallb-system/controller-7bb4cc7c98-r5pn2" Mar 19 12:30:19.246218 master-0 kubenswrapper[31830]: E0319 12:30:19.246164 31830 secret.go:189] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Mar 19 12:30:19.246311 master-0 kubenswrapper[31830]: E0319 12:30:19.246286 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f0f4018-1edf-45aa-ae8d-9798bed919a2-cert podName:7f0f4018-1edf-45aa-ae8d-9798bed919a2 nodeName:}" failed. No retries permitted until 2026-03-19 12:30:19.746236786 +0000 UTC m=+958.295197490 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7f0f4018-1edf-45aa-ae8d-9798bed919a2-cert") pod "frr-k8s-webhook-server-bcc4b6f68-qltq6" (UID: "7f0f4018-1edf-45aa-ae8d-9798bed919a2") : secret "frr-k8s-webhook-server-cert" not found Mar 19 12:30:19.265668 master-0 kubenswrapper[31830]: I0319 12:30:19.265529 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvjh5\" (UniqueName: \"kubernetes.io/projected/7f0f4018-1edf-45aa-ae8d-9798bed919a2-kube-api-access-zvjh5\") pod \"frr-k8s-webhook-server-bcc4b6f68-qltq6\" (UID: \"7f0f4018-1edf-45aa-ae8d-9798bed919a2\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qltq6" Mar 19 12:30:19.347435 master-0 kubenswrapper[31830]: I0319 12:30:19.347377 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t5cp\" (UniqueName: \"kubernetes.io/projected/b8442707-1048-49e7-883b-9dfc0c48eb15-kube-api-access-9t5cp\") pod \"controller-7bb4cc7c98-r5pn2\" (UID: \"b8442707-1048-49e7-883b-9dfc0c48eb15\") " pod="metallb-system/controller-7bb4cc7c98-r5pn2" Mar 19 12:30:19.347628 master-0 kubenswrapper[31830]: I0319 12:30:19.347450 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b8442707-1048-49e7-883b-9dfc0c48eb15-cert\") pod \"controller-7bb4cc7c98-r5pn2\" (UID: \"b8442707-1048-49e7-883b-9dfc0c48eb15\") " pod="metallb-system/controller-7bb4cc7c98-r5pn2" Mar 19 12:30:19.348231 master-0 kubenswrapper[31830]: I0319 12:30:19.347663 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dc06b24-66ec-4b57-88d2-90bb6d42bb60-metrics-certs\") pod \"speaker-fxx75\" (UID: \"7dc06b24-66ec-4b57-88d2-90bb6d42bb60\") " pod="metallb-system/speaker-fxx75" Mar 19 12:30:19.348231 master-0 kubenswrapper[31830]: I0319 12:30:19.347844 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zv6d\" (UniqueName: \"kubernetes.io/projected/7dc06b24-66ec-4b57-88d2-90bb6d42bb60-kube-api-access-6zv6d\") pod \"speaker-fxx75\" (UID: \"7dc06b24-66ec-4b57-88d2-90bb6d42bb60\") " pod="metallb-system/speaker-fxx75" Mar 19 12:30:19.348231 master-0 kubenswrapper[31830]: I0319 12:30:19.347934 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7dc06b24-66ec-4b57-88d2-90bb6d42bb60-memberlist\") pod \"speaker-fxx75\" (UID: \"7dc06b24-66ec-4b57-88d2-90bb6d42bb60\") " pod="metallb-system/speaker-fxx75" Mar 19 12:30:19.348231 master-0 kubenswrapper[31830]: I0319 12:30:19.347975 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8442707-1048-49e7-883b-9dfc0c48eb15-metrics-certs\") pod 
\"controller-7bb4cc7c98-r5pn2\" (UID: \"b8442707-1048-49e7-883b-9dfc0c48eb15\") " pod="metallb-system/controller-7bb4cc7c98-r5pn2" Mar 19 12:30:19.348231 master-0 kubenswrapper[31830]: I0319 12:30:19.348022 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7dc06b24-66ec-4b57-88d2-90bb6d42bb60-metallb-excludel2\") pod \"speaker-fxx75\" (UID: \"7dc06b24-66ec-4b57-88d2-90bb6d42bb60\") " pod="metallb-system/speaker-fxx75" Mar 19 12:30:19.348231 master-0 kubenswrapper[31830]: E0319 12:30:19.348097 31830 secret.go:189] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Mar 19 12:30:19.348231 master-0 kubenswrapper[31830]: E0319 12:30:19.348115 31830 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 19 12:30:19.348231 master-0 kubenswrapper[31830]: E0319 12:30:19.348148 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8442707-1048-49e7-883b-9dfc0c48eb15-metrics-certs podName:b8442707-1048-49e7-883b-9dfc0c48eb15 nodeName:}" failed. No retries permitted until 2026-03-19 12:30:19.848127326 +0000 UTC m=+958.397088030 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b8442707-1048-49e7-883b-9dfc0c48eb15-metrics-certs") pod "controller-7bb4cc7c98-r5pn2" (UID: "b8442707-1048-49e7-883b-9dfc0c48eb15") : secret "controller-certs-secret" not found Mar 19 12:30:19.348231 master-0 kubenswrapper[31830]: E0319 12:30:19.348187 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dc06b24-66ec-4b57-88d2-90bb6d42bb60-memberlist podName:7dc06b24-66ec-4b57-88d2-90bb6d42bb60 nodeName:}" failed. No retries permitted until 2026-03-19 12:30:19.848169688 +0000 UTC m=+958.397130392 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7dc06b24-66ec-4b57-88d2-90bb6d42bb60-memberlist") pod "speaker-fxx75" (UID: "7dc06b24-66ec-4b57-88d2-90bb6d42bb60") : secret "metallb-memberlist" not found Mar 19 12:30:19.350749 master-0 kubenswrapper[31830]: I0319 12:30:19.348765 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7dc06b24-66ec-4b57-88d2-90bb6d42bb60-metallb-excludel2\") pod \"speaker-fxx75\" (UID: \"7dc06b24-66ec-4b57-88d2-90bb6d42bb60\") " pod="metallb-system/speaker-fxx75" Mar 19 12:30:19.350749 master-0 kubenswrapper[31830]: I0319 12:30:19.349649 31830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 19 12:30:19.350749 master-0 kubenswrapper[31830]: I0319 12:30:19.350695 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dc06b24-66ec-4b57-88d2-90bb6d42bb60-metrics-certs\") pod \"speaker-fxx75\" (UID: \"7dc06b24-66ec-4b57-88d2-90bb6d42bb60\") " pod="metallb-system/speaker-fxx75" Mar 19 12:30:19.363818 master-0 kubenswrapper[31830]: I0319 12:30:19.363750 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zv6d\" (UniqueName: \"kubernetes.io/projected/7dc06b24-66ec-4b57-88d2-90bb6d42bb60-kube-api-access-6zv6d\") pod \"speaker-fxx75\" (UID: \"7dc06b24-66ec-4b57-88d2-90bb6d42bb60\") " pod="metallb-system/speaker-fxx75" Mar 19 12:30:19.364434 master-0 kubenswrapper[31830]: I0319 12:30:19.364105 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t5cp\" (UniqueName: \"kubernetes.io/projected/b8442707-1048-49e7-883b-9dfc0c48eb15-kube-api-access-9t5cp\") pod \"controller-7bb4cc7c98-r5pn2\" (UID: \"b8442707-1048-49e7-883b-9dfc0c48eb15\") " pod="metallb-system/controller-7bb4cc7c98-r5pn2" Mar 19 12:30:19.364434 master-0 kubenswrapper[31830]: I0319 12:30:19.364385 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b8442707-1048-49e7-883b-9dfc0c48eb15-cert\") pod \"controller-7bb4cc7c98-r5pn2\" (UID: \"b8442707-1048-49e7-883b-9dfc0c48eb15\") " pod="metallb-system/controller-7bb4cc7c98-r5pn2" Mar 19 12:30:19.651919 master-0 kubenswrapper[31830]: I0319 12:30:19.651736 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/450fdf42-489c-4403-9c52-03c51471160c-metrics-certs\") pod \"frr-k8s-hx5pt\" (UID: \"450fdf42-489c-4403-9c52-03c51471160c\") " pod="metallb-system/frr-k8s-hx5pt" Mar 19 12:30:19.655153 master-0 kubenswrapper[31830]: I0319 12:30:19.655093 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/450fdf42-489c-4403-9c52-03c51471160c-metrics-certs\") pod \"frr-k8s-hx5pt\" (UID: \"450fdf42-489c-4403-9c52-03c51471160c\") " pod="metallb-system/frr-k8s-hx5pt" Mar 19 12:30:19.753936 master-0 kubenswrapper[31830]: I0319 12:30:19.753883 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7f0f4018-1edf-45aa-ae8d-9798bed919a2-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-qltq6\" (UID: \"7f0f4018-1edf-45aa-ae8d-9798bed919a2\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qltq6" Mar 19 12:30:19.756846 master-0 kubenswrapper[31830]: I0319 
12:30:19.756813 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7f0f4018-1edf-45aa-ae8d-9798bed919a2-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-qltq6\" (UID: \"7f0f4018-1edf-45aa-ae8d-9798bed919a2\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qltq6" Mar 19 12:30:19.855639 master-0 kubenswrapper[31830]: I0319 12:30:19.855577 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7dc06b24-66ec-4b57-88d2-90bb6d42bb60-memberlist\") pod \"speaker-fxx75\" (UID: \"7dc06b24-66ec-4b57-88d2-90bb6d42bb60\") " pod="metallb-system/speaker-fxx75" Mar 19 12:30:19.855639 master-0 kubenswrapper[31830]: I0319 12:30:19.855639 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8442707-1048-49e7-883b-9dfc0c48eb15-metrics-certs\") pod \"controller-7bb4cc7c98-r5pn2\" (UID: \"b8442707-1048-49e7-883b-9dfc0c48eb15\") " pod="metallb-system/controller-7bb4cc7c98-r5pn2" Mar 19 12:30:19.855933 master-0 kubenswrapper[31830]: E0319 12:30:19.855825 31830 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 19 12:30:19.855933 master-0 kubenswrapper[31830]: E0319 12:30:19.855930 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dc06b24-66ec-4b57-88d2-90bb6d42bb60-memberlist podName:7dc06b24-66ec-4b57-88d2-90bb6d42bb60 nodeName:}" failed. No retries permitted until 2026-03-19 12:30:20.855907434 +0000 UTC m=+959.404868158 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7dc06b24-66ec-4b57-88d2-90bb6d42bb60-memberlist") pod "speaker-fxx75" (UID: "7dc06b24-66ec-4b57-88d2-90bb6d42bb60") : secret "metallb-memberlist" not found Mar 19 12:30:19.858555 master-0 kubenswrapper[31830]: I0319 12:30:19.858522 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8442707-1048-49e7-883b-9dfc0c48eb15-metrics-certs\") pod \"controller-7bb4cc7c98-r5pn2\" (UID: \"b8442707-1048-49e7-883b-9dfc0c48eb15\") " pod="metallb-system/controller-7bb4cc7c98-r5pn2" Mar 19 12:30:19.867423 master-0 kubenswrapper[31830]: I0319 12:30:19.867388 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-hx5pt" Mar 19 12:30:19.882554 master-0 kubenswrapper[31830]: I0319 12:30:19.882506 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qltq6" Mar 19 12:30:19.996558 master-0 kubenswrapper[31830]: I0319 12:30:19.996448 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-r5pn2" Mar 19 12:30:20.310496 master-0 kubenswrapper[31830]: I0319 12:30:20.310418 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-qltq6"] Mar 19 12:30:20.406108 master-0 kubenswrapper[31830]: I0319 12:30:20.406059 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-r5pn2"] Mar 19 12:30:20.406240 master-0 kubenswrapper[31830]: W0319 12:30:20.406180 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8442707_1048_49e7_883b_9dfc0c48eb15.slice/crio-518079ed8f336de03b725f1f5b358dbac816378aa113144cb0bed7347ea98d9b WatchSource:0}: Error finding container 518079ed8f336de03b725f1f5b358dbac816378aa113144cb0bed7347ea98d9b: Status 404 returned error can't find the container with id 518079ed8f336de03b725f1f5b358dbac816378aa113144cb0bed7347ea98d9b Mar 19 12:30:20.749754 master-0 kubenswrapper[31830]: I0319 12:30:20.749673 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hx5pt" event={"ID":"450fdf42-489c-4403-9c52-03c51471160c","Type":"ContainerStarted","Data":"a5f4c52ce4978b2113dd9eead2ce78b3058cd9447b9ff1645e39fb17ae1b969a"} Mar 19 12:30:20.751192 master-0 kubenswrapper[31830]: I0319 12:30:20.751152 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qltq6" event={"ID":"7f0f4018-1edf-45aa-ae8d-9798bed919a2","Type":"ContainerStarted","Data":"f93daa15cfb9f744cbbe35b80b6920d740a7e19db2f13e4eca4956c3bc9d685e"} Mar 19 12:30:20.752988 master-0 kubenswrapper[31830]: I0319 12:30:20.752946 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-r5pn2" event={"ID":"b8442707-1048-49e7-883b-9dfc0c48eb15","Type":"ContainerStarted","Data":"4a41e741468fc59a4aed06604e3aa952c51cb2ce1ea866573a7e218dd6b5b23e"} Mar 19 12:30:20.752988 master-0 kubenswrapper[31830]: I0319 12:30:20.752979 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-r5pn2" event={"ID":"b8442707-1048-49e7-883b-9dfc0c48eb15","Type":"ContainerStarted","Data":"518079ed8f336de03b725f1f5b358dbac816378aa113144cb0bed7347ea98d9b"} Mar 19 12:30:20.877625 master-0 kubenswrapper[31830]: I0319 12:30:20.877577 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7dc06b24-66ec-4b57-88d2-90bb6d42bb60-memberlist\") pod \"speaker-fxx75\" (UID: \"7dc06b24-66ec-4b57-88d2-90bb6d42bb60\") " pod="metallb-system/speaker-fxx75" Mar 19 12:30:20.880235 master-0 kubenswrapper[31830]: I0319 12:30:20.880202 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7dc06b24-66ec-4b57-88d2-90bb6d42bb60-memberlist\") pod \"speaker-fxx75\" (UID: \"7dc06b24-66ec-4b57-88d2-90bb6d42bb60\") " pod="metallb-system/speaker-fxx75" Mar 19 12:30:21.151536 master-0 kubenswrapper[31830]: I0319 12:30:21.151382 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-m9dsd"] Mar 19 12:30:21.153248 master-0 kubenswrapper[31830]: I0319 12:30:21.153211 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-m9dsd" Mar 19 12:30:21.168415 master-0 kubenswrapper[31830]: I0319 12:30:21.168188 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-fxx75" Mar 19 12:30:21.177952 master-0 kubenswrapper[31830]: I0319 12:30:21.177883 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-4f74g"] Mar 19 12:30:21.180645 master-0 kubenswrapper[31830]: I0319 12:30:21.179732 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-4f74g" Mar 19 12:30:21.180972 master-0 kubenswrapper[31830]: I0319 12:30:21.180920 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gqx9\" (UniqueName: \"kubernetes.io/projected/db5a3424-916f-441f-87c8-31bf62b4a07b-kube-api-access-6gqx9\") pod \"nmstate-webhook-5f558f5558-4f74g\" (UID: \"db5a3424-916f-441f-87c8-31bf62b4a07b\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-4f74g" Mar 19 12:30:21.181047 master-0 kubenswrapper[31830]: I0319 12:30:21.180975 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/db5a3424-916f-441f-87c8-31bf62b4a07b-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-4f74g\" (UID: \"db5a3424-916f-441f-87c8-31bf62b4a07b\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-4f74g" Mar 19 12:30:21.181082 master-0 kubenswrapper[31830]: I0319 12:30:21.181063 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9w4q\" (UniqueName: \"kubernetes.io/projected/3e910bd8-61ee-4627-a7ff-fc2ae9aec770-kube-api-access-f9w4q\") pod \"nmstate-metrics-9b8c8685d-m9dsd\" (UID: \"3e910bd8-61ee-4627-a7ff-fc2ae9aec770\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-m9dsd" Mar 19 12:30:21.183621 master-0 kubenswrapper[31830]: I0319 12:30:21.181697 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 19 12:30:21.190767 master-0 kubenswrapper[31830]: I0319 12:30:21.190694 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-m9dsd"] Mar 19 12:30:21.200472 master-0 kubenswrapper[31830]: I0319 12:30:21.200378 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-4f74g"] Mar 19 12:30:21.212467 master-0 kubenswrapper[31830]: I0319 12:30:21.212390 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-kf4wb"] Mar 19 12:30:21.230745 master-0 kubenswrapper[31830]: I0319 12:30:21.230694 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-kf4wb" Mar 19 12:30:21.284482 master-0 kubenswrapper[31830]: I0319 12:30:21.284399 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrlpv\" (UniqueName: \"kubernetes.io/projected/6e09c4f2-c7cc-46b5-b00c-385fde5f190f-kube-api-access-xrlpv\") pod \"nmstate-handler-kf4wb\" (UID: \"6e09c4f2-c7cc-46b5-b00c-385fde5f190f\") " pod="openshift-nmstate/nmstate-handler-kf4wb" Mar 19 12:30:21.284482 master-0 kubenswrapper[31830]: I0319 12:30:21.284466 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/6e09c4f2-c7cc-46b5-b00c-385fde5f190f-ovs-socket\") pod \"nmstate-handler-kf4wb\" (UID: \"6e09c4f2-c7cc-46b5-b00c-385fde5f190f\") " pod="openshift-nmstate/nmstate-handler-kf4wb" Mar 19 12:30:21.284482 master-0 kubenswrapper[31830]: I0319 12:30:21.284495 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gqx9\" (UniqueName: \"kubernetes.io/projected/db5a3424-916f-441f-87c8-31bf62b4a07b-kube-api-access-6gqx9\") pod \"nmstate-webhook-5f558f5558-4f74g\" (UID: \"db5a3424-916f-441f-87c8-31bf62b4a07b\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-4f74g" Mar 19 12:30:21.284845 master-0 kubenswrapper[31830]: I0319 12:30:21.284514 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6e09c4f2-c7cc-46b5-b00c-385fde5f190f-nmstate-lock\") pod \"nmstate-handler-kf4wb\" (UID: \"6e09c4f2-c7cc-46b5-b00c-385fde5f190f\") " pod="openshift-nmstate/nmstate-handler-kf4wb" Mar 19 12:30:21.284845 master-0 kubenswrapper[31830]: I0319 12:30:21.284529 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/db5a3424-916f-441f-87c8-31bf62b4a07b-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-4f74g\" (UID: \"db5a3424-916f-441f-87c8-31bf62b4a07b\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-4f74g" Mar 19 12:30:21.284845 master-0 kubenswrapper[31830]: E0319 12:30:21.284632 31830 secret.go:189] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Mar 19 12:30:21.284845 master-0 kubenswrapper[31830]: E0319 12:30:21.284743 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db5a3424-916f-441f-87c8-31bf62b4a07b-tls-key-pair podName:db5a3424-916f-441f-87c8-31bf62b4a07b nodeName:}" failed. No retries permitted until 2026-03-19 12:30:21.784665033 +0000 UTC m=+960.333625737 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/db5a3424-916f-441f-87c8-31bf62b4a07b-tls-key-pair") pod "nmstate-webhook-5f558f5558-4f74g" (UID: "db5a3424-916f-441f-87c8-31bf62b4a07b") : secret "openshift-nmstate-webhook" not found Mar 19 12:30:21.285010 master-0 kubenswrapper[31830]: I0319 12:30:21.284627 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6e09c4f2-c7cc-46b5-b00c-385fde5f190f-dbus-socket\") pod \"nmstate-handler-kf4wb\" (UID: \"6e09c4f2-c7cc-46b5-b00c-385fde5f190f\") " pod="openshift-nmstate/nmstate-handler-kf4wb" Mar 19 12:30:21.285010 master-0 kubenswrapper[31830]: I0319 12:30:21.284990 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9w4q\" (UniqueName: \"kubernetes.io/projected/3e910bd8-61ee-4627-a7ff-fc2ae9aec770-kube-api-access-f9w4q\") pod \"nmstate-metrics-9b8c8685d-m9dsd\" (UID: \"3e910bd8-61ee-4627-a7ff-fc2ae9aec770\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-m9dsd" Mar 19 12:30:21.304039 master-0 kubenswrapper[31830]: I0319 12:30:21.303995 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gqx9\" (UniqueName: \"kubernetes.io/projected/db5a3424-916f-441f-87c8-31bf62b4a07b-kube-api-access-6gqx9\") pod \"nmstate-webhook-5f558f5558-4f74g\" (UID: \"db5a3424-916f-441f-87c8-31bf62b4a07b\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-4f74g" Mar 19 12:30:21.313753 master-0 kubenswrapper[31830]: W0319 12:30:21.310312 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dc06b24_66ec_4b57_88d2_90bb6d42bb60.slice/crio-41c8e2e905ac865a52372dfed0810972f5b5cc2bd76815b9edc596661e3ca52f WatchSource:0}: Error finding container 41c8e2e905ac865a52372dfed0810972f5b5cc2bd76815b9edc596661e3ca52f: Status 404 returned error can't find the container with id 41c8e2e905ac865a52372dfed0810972f5b5cc2bd76815b9edc596661e3ca52f Mar 19 12:30:21.318828 master-0 kubenswrapper[31830]: I0319 12:30:21.318038 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9w4q\" (UniqueName: \"kubernetes.io/projected/3e910bd8-61ee-4627-a7ff-fc2ae9aec770-kube-api-access-f9w4q\") pod \"nmstate-metrics-9b8c8685d-m9dsd\" (UID: \"3e910bd8-61ee-4627-a7ff-fc2ae9aec770\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-m9dsd" Mar 19 12:30:21.391133 master-0 kubenswrapper[31830]: I0319 12:30:21.391067 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/6e09c4f2-c7cc-46b5-b00c-385fde5f190f-ovs-socket\") pod \"nmstate-handler-kf4wb\" (UID: \"6e09c4f2-c7cc-46b5-b00c-385fde5f190f\") " pod="openshift-nmstate/nmstate-handler-kf4wb" Mar 19 12:30:21.391133 master-0 kubenswrapper[31830]: I0319 12:30:21.391131 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6e09c4f2-c7cc-46b5-b00c-385fde5f190f-nmstate-lock\") pod \"nmstate-handler-kf4wb\" (UID: \"6e09c4f2-c7cc-46b5-b00c-385fde5f190f\") " pod="openshift-nmstate/nmstate-handler-kf4wb" Mar 19 12:30:21.391539 master-0 kubenswrapper[31830]: I0319 12:30:21.391198 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/6e09c4f2-c7cc-46b5-b00c-385fde5f190f-dbus-socket\") pod \"nmstate-handler-kf4wb\" (UID: \"6e09c4f2-c7cc-46b5-b00c-385fde5f190f\") " pod="openshift-nmstate/nmstate-handler-kf4wb" Mar 19 12:30:21.391539 master-0 kubenswrapper[31830]: I0319 12:30:21.391286 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrlpv\" (UniqueName: \"kubernetes.io/projected/6e09c4f2-c7cc-46b5-b00c-385fde5f190f-kube-api-access-xrlpv\") pod \"nmstate-handler-kf4wb\" (UID: \"6e09c4f2-c7cc-46b5-b00c-385fde5f190f\") " pod="openshift-nmstate/nmstate-handler-kf4wb" Mar 19 12:30:21.391642 master-0 kubenswrapper[31830]: I0319 12:30:21.391601 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/6e09c4f2-c7cc-46b5-b00c-385fde5f190f-ovs-socket\") pod \"nmstate-handler-kf4wb\" (UID: \"6e09c4f2-c7cc-46b5-b00c-385fde5f190f\") " pod="openshift-nmstate/nmstate-handler-kf4wb" Mar 19 12:30:21.393849 master-0 kubenswrapper[31830]: I0319 12:30:21.393820 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6e09c4f2-c7cc-46b5-b00c-385fde5f190f-nmstate-lock\") pod \"nmstate-handler-kf4wb\" (UID: \"6e09c4f2-c7cc-46b5-b00c-385fde5f190f\") " pod="openshift-nmstate/nmstate-handler-kf4wb" Mar 19 12:30:21.396226 master-0 kubenswrapper[31830]: I0319 12:30:21.396166 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6e09c4f2-c7cc-46b5-b00c-385fde5f190f-dbus-socket\") pod \"nmstate-handler-kf4wb\" (UID: \"6e09c4f2-c7cc-46b5-b00c-385fde5f190f\") " pod="openshift-nmstate/nmstate-handler-kf4wb" Mar 19 12:30:21.405919 master-0 kubenswrapper[31830]: I0319 12:30:21.405732 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-c2bxj"] Mar 19 12:30:21.408022 master-0 kubenswrapper[31830]: I0319 12:30:21.407995 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-c2bxj" Mar 19 12:30:21.410923 master-0 kubenswrapper[31830]: I0319 12:30:21.410811 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 19 12:30:21.415671 master-0 kubenswrapper[31830]: I0319 12:30:21.415593 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-c2bxj"] Mar 19 12:30:21.418022 master-0 kubenswrapper[31830]: I0319 12:30:21.417236 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Mar 19 12:30:21.419098 master-0 kubenswrapper[31830]: I0319 12:30:21.419071 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrlpv\" (UniqueName: \"kubernetes.io/projected/6e09c4f2-c7cc-46b5-b00c-385fde5f190f-kube-api-access-xrlpv\") pod \"nmstate-handler-kf4wb\" (UID: \"6e09c4f2-c7cc-46b5-b00c-385fde5f190f\") " pod="openshift-nmstate/nmstate-handler-kf4wb" Mar 19 12:30:21.477254 master-0 kubenswrapper[31830]: I0319 12:30:21.476221 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-m9dsd" Mar 19 12:30:21.598630 master-0 kubenswrapper[31830]: I0319 12:30:21.598537 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4572c3d4-9030-4bd3-9f56-346d9f954254-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-c2bxj\" (UID: \"4572c3d4-9030-4bd3-9f56-346d9f954254\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-c2bxj" Mar 19 12:30:21.598990 master-0 kubenswrapper[31830]: I0319 12:30:21.598874 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4572c3d4-9030-4bd3-9f56-346d9f954254-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-c2bxj\" (UID: \"4572c3d4-9030-4bd3-9f56-346d9f954254\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-c2bxj" Mar 19 12:30:21.599056 master-0 kubenswrapper[31830]: I0319 12:30:21.599030 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgjgq\" (UniqueName: \"kubernetes.io/projected/4572c3d4-9030-4bd3-9f56-346d9f954254-kube-api-access-cgjgq\") pod \"nmstate-console-plugin-86f58fcf4-c2bxj\" (UID: \"4572c3d4-9030-4bd3-9f56-346d9f954254\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-c2bxj" Mar 19 12:30:21.617643 master-0 kubenswrapper[31830]: I0319 12:30:21.615860 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-778974b6d8-gqqzj"] Mar 19 12:30:21.617643 master-0 kubenswrapper[31830]: I0319 12:30:21.617011 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-778974b6d8-gqqzj" Mar 19 12:30:21.642885 master-0 kubenswrapper[31830]: I0319 12:30:21.640928 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-778974b6d8-gqqzj"] Mar 19 12:30:21.719761 master-0 kubenswrapper[31830]: I0319 12:30:21.719365 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4572c3d4-9030-4bd3-9f56-346d9f954254-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-c2bxj\" (UID: \"4572c3d4-9030-4bd3-9f56-346d9f954254\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-c2bxj" Mar 19 12:30:21.722153 master-0 kubenswrapper[31830]: I0319 12:30:21.722091 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fc8fbfa9-d55d-470b-aabc-96b9f0c15790-service-ca\") pod \"console-778974b6d8-gqqzj\" (UID: \"fc8fbfa9-d55d-470b-aabc-96b9f0c15790\") " pod="openshift-console/console-778974b6d8-gqqzj" Mar 19 12:30:21.725484 master-0 kubenswrapper[31830]: I0319 12:30:21.725345 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-kf4wb" Mar 19 12:30:21.728623 master-0 kubenswrapper[31830]: I0319 12:30:21.728560 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4572c3d4-9030-4bd3-9f56-346d9f954254-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-c2bxj\" (UID: \"4572c3d4-9030-4bd3-9f56-346d9f954254\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-c2bxj" Mar 19 12:30:21.728878 master-0 kubenswrapper[31830]: I0319 12:30:21.728771 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fc8fbfa9-d55d-470b-aabc-96b9f0c15790-oauth-serving-cert\") pod \"console-778974b6d8-gqqzj\" (UID: \"fc8fbfa9-d55d-470b-aabc-96b9f0c15790\") " pod="openshift-console/console-778974b6d8-gqqzj" Mar 19 12:30:21.728878 master-0 kubenswrapper[31830]: I0319 12:30:21.728858 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fc8fbfa9-d55d-470b-aabc-96b9f0c15790-console-oauth-config\") pod \"console-778974b6d8-gqqzj\" (UID: \"fc8fbfa9-d55d-470b-aabc-96b9f0c15790\") " pod="openshift-console/console-778974b6d8-gqqzj" Mar 19 12:30:21.728995 master-0 kubenswrapper[31830]: I0319 12:30:21.728913 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fc8fbfa9-d55d-470b-aabc-96b9f0c15790-console-config\") pod \"console-778974b6d8-gqqzj\" (UID: \"fc8fbfa9-d55d-470b-aabc-96b9f0c15790\") " pod="openshift-console/console-778974b6d8-gqqzj" Mar 19 12:30:21.728995 master-0 kubenswrapper[31830]: I0319 12:30:21.728953 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgjgq\" (UniqueName: \"kubernetes.io/projected/4572c3d4-9030-4bd3-9f56-346d9f954254-kube-api-access-cgjgq\") pod \"nmstate-console-plugin-86f58fcf4-c2bxj\" (UID: \"4572c3d4-9030-4bd3-9f56-346d9f954254\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-c2bxj" Mar 19 12:30:21.729109 master-0 kubenswrapper[31830]: I0319 12:30:21.729085 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc8fbfa9-d55d-470b-aabc-96b9f0c15790-trusted-ca-bundle\") pod \"console-778974b6d8-gqqzj\" (UID: \"fc8fbfa9-d55d-470b-aabc-96b9f0c15790\") " pod="openshift-console/console-778974b6d8-gqqzj" Mar 19 12:30:21.729159 master-0 kubenswrapper[31830]: I0319 12:30:21.729119 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsdvm\" (UniqueName: \"kubernetes.io/projected/fc8fbfa9-d55d-470b-aabc-96b9f0c15790-kube-api-access-jsdvm\") pod \"console-778974b6d8-gqqzj\" (UID: \"fc8fbfa9-d55d-470b-aabc-96b9f0c15790\") " pod="openshift-console/console-778974b6d8-gqqzj" Mar 19 12:30:21.729214 master-0 kubenswrapper[31830]: I0319 12:30:21.729186 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fc8fbfa9-d55d-470b-aabc-96b9f0c15790-console-serving-cert\") pod \"console-778974b6d8-gqqzj\" (UID: \"fc8fbfa9-d55d-470b-aabc-96b9f0c15790\") " pod="openshift-console/console-778974b6d8-gqqzj" Mar 19 12:30:21.729665 master-0 
kubenswrapper[31830]: I0319 12:30:21.729608 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Mar 19 12:30:21.743202 master-0 kubenswrapper[31830]: I0319 12:30:21.733088 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 19 12:30:21.743202 master-0 kubenswrapper[31830]: I0319 12:30:21.735492 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4572c3d4-9030-4bd3-9f56-346d9f954254-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-c2bxj\" (UID: \"4572c3d4-9030-4bd3-9f56-346d9f954254\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-c2bxj" Mar 19 12:30:21.743202 master-0 kubenswrapper[31830]: I0319 12:30:21.741731 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4572c3d4-9030-4bd3-9f56-346d9f954254-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-c2bxj\" (UID: \"4572c3d4-9030-4bd3-9f56-346d9f954254\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-c2bxj" Mar 19 12:30:21.758435 master-0 kubenswrapper[31830]: I0319 12:30:21.758392 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgjgq\" (UniqueName: \"kubernetes.io/projected/4572c3d4-9030-4bd3-9f56-346d9f954254-kube-api-access-cgjgq\") pod \"nmstate-console-plugin-86f58fcf4-c2bxj\" (UID: \"4572c3d4-9030-4bd3-9f56-346d9f954254\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-c2bxj" Mar 19 12:30:21.773446 master-0 kubenswrapper[31830]: I0319 12:30:21.773222 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-c2bxj" Mar 19 12:30:21.783069 master-0 kubenswrapper[31830]: I0319 12:30:21.781905 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fxx75" event={"ID":"7dc06b24-66ec-4b57-88d2-90bb6d42bb60","Type":"ContainerStarted","Data":"d8dbd9948d7e9cb349df349b1198e5b66bd27a8c3207dddaeaf4d0f7e640592d"} Mar 19 12:30:21.783069 master-0 kubenswrapper[31830]: I0319 12:30:21.781958 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fxx75" event={"ID":"7dc06b24-66ec-4b57-88d2-90bb6d42bb60","Type":"ContainerStarted","Data":"41c8e2e905ac865a52372dfed0810972f5b5cc2bd76815b9edc596661e3ca52f"} Mar 19 12:30:21.831930 master-0 kubenswrapper[31830]: I0319 12:30:21.831189 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fc8fbfa9-d55d-470b-aabc-96b9f0c15790-oauth-serving-cert\") pod \"console-778974b6d8-gqqzj\" (UID: \"fc8fbfa9-d55d-470b-aabc-96b9f0c15790\") " pod="openshift-console/console-778974b6d8-gqqzj" Mar 19 12:30:21.831930 master-0 kubenswrapper[31830]: I0319 12:30:21.831237 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fc8fbfa9-d55d-470b-aabc-96b9f0c15790-console-oauth-config\") pod \"console-778974b6d8-gqqzj\" (UID: \"fc8fbfa9-d55d-470b-aabc-96b9f0c15790\") " pod="openshift-console/console-778974b6d8-gqqzj" Mar 19 12:30:21.831930 master-0 kubenswrapper[31830]: I0319 12:30:21.831300 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/fc8fbfa9-d55d-470b-aabc-96b9f0c15790-console-config\") pod \"console-778974b6d8-gqqzj\" (UID: \"fc8fbfa9-d55d-470b-aabc-96b9f0c15790\") " pod="openshift-console/console-778974b6d8-gqqzj" Mar 19 12:30:21.833443 master-0 kubenswrapper[31830]: I0319 12:30:21.832158 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc8fbfa9-d55d-470b-aabc-96b9f0c15790-trusted-ca-bundle\") pod \"console-778974b6d8-gqqzj\" (UID: \"fc8fbfa9-d55d-470b-aabc-96b9f0c15790\") " pod="openshift-console/console-778974b6d8-gqqzj" Mar 19 12:30:21.833903 master-0 kubenswrapper[31830]: I0319 12:30:21.833749 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc8fbfa9-d55d-470b-aabc-96b9f0c15790-trusted-ca-bundle\") pod \"console-778974b6d8-gqqzj\" (UID: \"fc8fbfa9-d55d-470b-aabc-96b9f0c15790\") " pod="openshift-console/console-778974b6d8-gqqzj" Mar 19 12:30:21.834993 master-0 kubenswrapper[31830]: I0319 12:30:21.834596 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fc8fbfa9-d55d-470b-aabc-96b9f0c15790-oauth-serving-cert\") pod \"console-778974b6d8-gqqzj\" (UID: \"fc8fbfa9-d55d-470b-aabc-96b9f0c15790\") " pod="openshift-console/console-778974b6d8-gqqzj" Mar 19 12:30:21.834993 master-0 kubenswrapper[31830]: I0319 12:30:21.834884 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsdvm\" (UniqueName: \"kubernetes.io/projected/fc8fbfa9-d55d-470b-aabc-96b9f0c15790-kube-api-access-jsdvm\") pod \"console-778974b6d8-gqqzj\" (UID: \"fc8fbfa9-d55d-470b-aabc-96b9f0c15790\") " pod="openshift-console/console-778974b6d8-gqqzj" Mar 19 12:30:21.835989 master-0 kubenswrapper[31830]: I0319 12:30:21.835878 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fc8fbfa9-d55d-470b-aabc-96b9f0c15790-console-serving-cert\") pod \"console-778974b6d8-gqqzj\" (UID: \"fc8fbfa9-d55d-470b-aabc-96b9f0c15790\") " pod="openshift-console/console-778974b6d8-gqqzj" Mar 19 12:30:21.836225 master-0 kubenswrapper[31830]: I0319 12:30:21.836138 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fc8fbfa9-d55d-470b-aabc-96b9f0c15790-console-oauth-config\") pod \"console-778974b6d8-gqqzj\" (UID: \"fc8fbfa9-d55d-470b-aabc-96b9f0c15790\") " pod="openshift-console/console-778974b6d8-gqqzj" Mar 19 12:30:21.836477 master-0 kubenswrapper[31830]: I0319 12:30:21.836434 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fc8fbfa9-d55d-470b-aabc-96b9f0c15790-service-ca\") pod \"console-778974b6d8-gqqzj\" (UID: \"fc8fbfa9-d55d-470b-aabc-96b9f0c15790\") " pod="openshift-console/console-778974b6d8-gqqzj" Mar 19 12:30:21.837253 master-0 kubenswrapper[31830]: I0319 12:30:21.836934 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fc8fbfa9-d55d-470b-aabc-96b9f0c15790-console-config\") pod \"console-778974b6d8-gqqzj\" (UID: \"fc8fbfa9-d55d-470b-aabc-96b9f0c15790\") " pod="openshift-console/console-778974b6d8-gqqzj" Mar 19 12:30:21.837956 master-0 kubenswrapper[31830]: I0319 12:30:21.837663 31830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/db5a3424-916f-441f-87c8-31bf62b4a07b-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-4f74g\" (UID: \"db5a3424-916f-441f-87c8-31bf62b4a07b\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-4f74g" Mar 19 12:30:21.838383 master-0 kubenswrapper[31830]: I0319 12:30:21.838350 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fc8fbfa9-d55d-470b-aabc-96b9f0c15790-service-ca\") pod \"console-778974b6d8-gqqzj\" (UID: \"fc8fbfa9-d55d-470b-aabc-96b9f0c15790\") " pod="openshift-console/console-778974b6d8-gqqzj" Mar 19 12:30:21.850091 master-0 kubenswrapper[31830]: I0319 12:30:21.849965 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fc8fbfa9-d55d-470b-aabc-96b9f0c15790-console-serving-cert\") pod \"console-778974b6d8-gqqzj\" (UID: \"fc8fbfa9-d55d-470b-aabc-96b9f0c15790\") " pod="openshift-console/console-778974b6d8-gqqzj" Mar 19 12:30:21.850339 master-0 kubenswrapper[31830]: I0319 12:30:21.850200 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/db5a3424-916f-441f-87c8-31bf62b4a07b-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-4f74g\" (UID: \"db5a3424-916f-441f-87c8-31bf62b4a07b\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-4f74g" Mar 19 12:30:21.854747 master-0 kubenswrapper[31830]: I0319 12:30:21.854682 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsdvm\" (UniqueName: \"kubernetes.io/projected/fc8fbfa9-d55d-470b-aabc-96b9f0c15790-kube-api-access-jsdvm\") pod \"console-778974b6d8-gqqzj\" (UID: \"fc8fbfa9-d55d-470b-aabc-96b9f0c15790\") " pod="openshift-console/console-778974b6d8-gqqzj" Mar 19 12:30:21.982857 master-0 kubenswrapper[31830]: I0319 12:30:21.982436 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-778974b6d8-gqqzj" Mar 19 12:30:22.001171 master-0 kubenswrapper[31830]: I0319 12:30:22.001129 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-m9dsd"] Mar 19 12:30:22.003269 master-0 kubenswrapper[31830]: W0319 12:30:22.003217 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e910bd8_61ee_4627_a7ff_fc2ae9aec770.slice/crio-614aa5ffe3fd1d47bce934c8e7239d063a4486d76f9300a0429ee2be046d6955 WatchSource:0}: Error finding container 614aa5ffe3fd1d47bce934c8e7239d063a4486d76f9300a0429ee2be046d6955: Status 404 returned error can't find the container with id 614aa5ffe3fd1d47bce934c8e7239d063a4486d76f9300a0429ee2be046d6955 Mar 19 12:30:22.110572 master-0 kubenswrapper[31830]: I0319 12:30:22.110515 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-4f74g" Mar 19 12:30:22.240297 master-0 kubenswrapper[31830]: W0319 12:30:22.240229 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4572c3d4_9030_4bd3_9f56_346d9f954254.slice/crio-8744342f6dab44529a1bd4b770940e06e367de3d1d48b43ae18120af7e0c3eb9 WatchSource:0}: Error finding container 8744342f6dab44529a1bd4b770940e06e367de3d1d48b43ae18120af7e0c3eb9: Status 404 returned error can't find the container with id 8744342f6dab44529a1bd4b770940e06e367de3d1d48b43ae18120af7e0c3eb9 Mar 19 12:30:22.258711 master-0 kubenswrapper[31830]: I0319 12:30:22.258656 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-c2bxj"] Mar 19 12:30:22.418315 master-0 kubenswrapper[31830]: I0319 12:30:22.418259 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-778974b6d8-gqqzj"] Mar 19 12:30:22.428658 master-0 kubenswrapper[31830]: W0319 12:30:22.428601 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc8fbfa9_d55d_470b_aabc_96b9f0c15790.slice/crio-6a589047d671393ca08bc7cf56192543088c35f2701c5df690e28d0543b01fc6 WatchSource:0}: Error finding container 6a589047d671393ca08bc7cf56192543088c35f2701c5df690e28d0543b01fc6: Status 404 returned error can't find the container with id 6a589047d671393ca08bc7cf56192543088c35f2701c5df690e28d0543b01fc6 Mar 19 12:30:22.561263 master-0 kubenswrapper[31830]: W0319 12:30:22.561209 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb5a3424_916f_441f_87c8_31bf62b4a07b.slice/crio-cb03819e9d58407929206d978414c6c077432dcd79a0dd6afa8e95052dbe4242 WatchSource:0}: Error finding container cb03819e9d58407929206d978414c6c077432dcd79a0dd6afa8e95052dbe4242: Status 404 returned error can't find the container with id cb03819e9d58407929206d978414c6c077432dcd79a0dd6afa8e95052dbe4242 Mar 19 12:30:22.561684 master-0 kubenswrapper[31830]: I0319 12:30:22.561633 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-4f74g"] Mar 19 12:30:22.800670 master-0 kubenswrapper[31830]: I0319 12:30:22.790152 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-m9dsd" event={"ID":"3e910bd8-61ee-4627-a7ff-fc2ae9aec770","Type":"ContainerStarted","Data":"614aa5ffe3fd1d47bce934c8e7239d063a4486d76f9300a0429ee2be046d6955"} Mar 19 12:30:22.800670 master-0 kubenswrapper[31830]: I0319 12:30:22.791686 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-778974b6d8-gqqzj" event={"ID":"fc8fbfa9-d55d-470b-aabc-96b9f0c15790","Type":"ContainerStarted","Data":"defe93e4f2530e8d92758eb25028ecd035f65c1cf23c76f0c3c084b391c541a0"} Mar 19 12:30:22.800670 master-0 kubenswrapper[31830]: I0319 12:30:22.791706 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-778974b6d8-gqqzj" event={"ID":"fc8fbfa9-d55d-470b-aabc-96b9f0c15790","Type":"ContainerStarted","Data":"6a589047d671393ca08bc7cf56192543088c35f2701c5df690e28d0543b01fc6"} Mar 19 12:30:22.800670 master-0 kubenswrapper[31830]: I0319 12:30:22.793682 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-kf4wb" 
event={"ID":"6e09c4f2-c7cc-46b5-b00c-385fde5f190f","Type":"ContainerStarted","Data":"aea6f0e3d2f03faa48f62d85c4d06e9eff16ceb6a3efec59d3625c887570b3b7"} Mar 19 12:30:22.806223 master-0 kubenswrapper[31830]: I0319 12:30:22.806165 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-r5pn2" Mar 19 12:30:22.807598 master-0 kubenswrapper[31830]: I0319 12:30:22.807564 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-4f74g" event={"ID":"db5a3424-916f-441f-87c8-31bf62b4a07b","Type":"ContainerStarted","Data":"cb03819e9d58407929206d978414c6c077432dcd79a0dd6afa8e95052dbe4242"} Mar 19 12:30:22.812029 master-0 kubenswrapper[31830]: I0319 12:30:22.811995 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-c2bxj" event={"ID":"4572c3d4-9030-4bd3-9f56-346d9f954254","Type":"ContainerStarted","Data":"8744342f6dab44529a1bd4b770940e06e367de3d1d48b43ae18120af7e0c3eb9"} Mar 19 12:30:22.828648 master-0 kubenswrapper[31830]: I0319 12:30:22.828562 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-778974b6d8-gqqzj" podStartSLOduration=1.828539643 podStartE2EDuration="1.828539643s" podCreationTimestamp="2026-03-19 12:30:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:30:22.822387542 +0000 UTC m=+961.371348256" watchObservedRunningTime="2026-03-19 12:30:22.828539643 +0000 UTC m=+961.377500347" Mar 19 12:30:22.868317 master-0 kubenswrapper[31830]: I0319 12:30:22.868172 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-7bb4cc7c98-r5pn2" podStartSLOduration=1.7613753650000001 podStartE2EDuration="3.868153732s" podCreationTimestamp="2026-03-19 12:30:19 +0000 UTC" firstStartedPulling="2026-03-19 12:30:20.536176111 +0000 UTC m=+959.085136825" lastFinishedPulling="2026-03-19 12:30:22.642954488 +0000 UTC m=+961.191915192" observedRunningTime="2026-03-19 12:30:22.863621011 +0000 UTC m=+961.412581715" watchObservedRunningTime="2026-03-19 12:30:22.868153732 +0000 UTC m=+961.417114436" Mar 19 12:30:23.822677 master-0 kubenswrapper[31830]: I0319 12:30:23.822607 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-r5pn2" event={"ID":"b8442707-1048-49e7-883b-9dfc0c48eb15","Type":"ContainerStarted","Data":"c003691dc08445560f4875f6038b5d63f017e1e1265e3a4ac196110ba02af014"} Mar 19 12:30:23.824944 master-0 kubenswrapper[31830]: I0319 12:30:23.824899 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fxx75" event={"ID":"7dc06b24-66ec-4b57-88d2-90bb6d42bb60","Type":"ContainerStarted","Data":"92701e0f5f710a328a6985f786e0ff02f6546c3fe9588559de39cab49968188e"} Mar 19 12:30:23.825121 master-0 kubenswrapper[31830]: I0319 12:30:23.825041 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-fxx75" Mar 19 12:30:29.889547 master-0 kubenswrapper[31830]: I0319 12:30:29.889485 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-c2bxj" event={"ID":"4572c3d4-9030-4bd3-9f56-346d9f954254","Type":"ContainerStarted","Data":"d62438f8bedced98e4b57ee356ee3ade07b3477ace9b7f72e7cf508204a8c060"} Mar 19 12:30:29.891498 master-0 kubenswrapper[31830]: I0319 12:30:29.891443 31830 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-m9dsd" event={"ID":"3e910bd8-61ee-4627-a7ff-fc2ae9aec770","Type":"ContainerStarted","Data":"8262fc9154ad499593d39964c25c8eab34d3b1a8447d4b54206e885fc468addf"} Mar 19 12:30:29.891498 master-0 kubenswrapper[31830]: I0319 12:30:29.891477 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-m9dsd" event={"ID":"3e910bd8-61ee-4627-a7ff-fc2ae9aec770","Type":"ContainerStarted","Data":"f04619ca5518832c6bb79ac6cf127b306599888f610dea09c3e417cbf3fdf074"} Mar 19 12:30:29.893145 master-0 kubenswrapper[31830]: I0319 12:30:29.893118 31830 generic.go:334] "Generic (PLEG): container finished" podID="450fdf42-489c-4403-9c52-03c51471160c" containerID="4bc5c67c39a9e21a6c80addb775349cce90c2a240506979479b4fe9a6d3b610f" exitCode=0 Mar 19 12:30:29.893227 master-0 kubenswrapper[31830]: I0319 12:30:29.893177 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hx5pt" event={"ID":"450fdf42-489c-4403-9c52-03c51471160c","Type":"ContainerDied","Data":"4bc5c67c39a9e21a6c80addb775349cce90c2a240506979479b4fe9a6d3b610f"} Mar 19 12:30:29.894940 master-0 kubenswrapper[31830]: I0319 12:30:29.894919 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qltq6" event={"ID":"7f0f4018-1edf-45aa-ae8d-9798bed919a2","Type":"ContainerStarted","Data":"c6a67d115fe4791c17e2a2fcc74e78f9229b01a759e15762190831cf7891f8a6"} Mar 19 12:30:29.895320 master-0 kubenswrapper[31830]: I0319 12:30:29.895302 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qltq6" Mar 19 12:30:29.897348 master-0 kubenswrapper[31830]: I0319 12:30:29.897131 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-kf4wb" event={"ID":"6e09c4f2-c7cc-46b5-b00c-385fde5f190f","Type":"ContainerStarted","Data":"40ee8874c57f43374ce8d53fdf350c8c160f3cad503085c404e804476967b1fe"} Mar 19 12:30:29.897348 master-0 kubenswrapper[31830]: I0319 12:30:29.897256 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-kf4wb" Mar 19 12:30:29.898880 master-0 kubenswrapper[31830]: I0319 12:30:29.898846 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-4f74g" event={"ID":"db5a3424-916f-441f-87c8-31bf62b4a07b","Type":"ContainerStarted","Data":"eb882aa978ce641831242f589dd7329ea23bad90eedcab4817ce3b632ad61872"} Mar 19 12:30:29.898981 master-0 kubenswrapper[31830]: I0319 12:30:29.898963 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-4f74g" Mar 19 12:30:29.997880 master-0 kubenswrapper[31830]: I0319 12:30:29.997371 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-fxx75" podStartSLOduration=9.426593294 podStartE2EDuration="10.997353977s" podCreationTimestamp="2026-03-19 12:30:19 +0000 UTC" firstStartedPulling="2026-03-19 12:30:21.742754081 +0000 UTC m=+960.291714795" lastFinishedPulling="2026-03-19 12:30:23.313514774 +0000 UTC m=+961.862475478" observedRunningTime="2026-03-19 12:30:23.845213413 +0000 UTC m=+962.394174117" watchObservedRunningTime="2026-03-19 12:30:29.997353977 +0000 UTC m=+968.546314681" Mar 19 12:30:30.003112 master-0 kubenswrapper[31830]: I0319 12:30:30.003036 31830 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-c2bxj" podStartSLOduration=2.048769043 podStartE2EDuration="9.003015973s" podCreationTimestamp="2026-03-19 12:30:21 +0000 UTC" firstStartedPulling="2026-03-19 12:30:22.243760368 +0000 UTC m=+960.792721092" lastFinishedPulling="2026-03-19 12:30:29.198007318 +0000 UTC m=+967.746968022" observedRunningTime="2026-03-19 12:30:29.996112979 +0000 UTC m=+968.545073683" watchObservedRunningTime="2026-03-19 12:30:30.003015973 +0000 UTC m=+968.551976677" Mar 19 12:30:30.129513 master-0 kubenswrapper[31830]: I0319 12:30:30.129453 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-kf4wb" podStartSLOduration=1.617234899 podStartE2EDuration="9.129431733s" podCreationTimestamp="2026-03-19 12:30:21 +0000 UTC" firstStartedPulling="2026-03-19 12:30:21.785680711 +0000 UTC m=+960.334641415" lastFinishedPulling="2026-03-19 12:30:29.297877545 +0000 UTC m=+967.846838249" observedRunningTime="2026-03-19 12:30:30.117366989 +0000 UTC m=+968.666327693" watchObservedRunningTime="2026-03-19 12:30:30.129431733 +0000 UTC m=+968.678392457" Mar 19 12:30:30.172835 master-0 kubenswrapper[31830]: I0319 12:30:30.171472 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qltq6" podStartSLOduration=3.192281969 podStartE2EDuration="12.171451916s" podCreationTimestamp="2026-03-19 12:30:18 +0000 UTC" firstStartedPulling="2026-03-19 12:30:20.317437908 +0000 UTC m=+958.866398612" lastFinishedPulling="2026-03-19 12:30:29.296607845 +0000 UTC m=+967.845568559" observedRunningTime="2026-03-19 12:30:30.170052943 +0000 UTC m=+968.719013647" watchObservedRunningTime="2026-03-19 12:30:30.171451916 +0000 UTC m=+968.720412620" Mar 19 12:30:30.292891 master-0 kubenswrapper[31830]: I0319 12:30:30.292759 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f558f5558-4f74g" podStartSLOduration=2.5752705110000003 podStartE2EDuration="9.292683956s" podCreationTimestamp="2026-03-19 12:30:21 +0000 UTC" firstStartedPulling="2026-03-19 12:30:22.578409436 +0000 UTC m=+961.127370140" lastFinishedPulling="2026-03-19 12:30:29.295822881 +0000 UTC m=+967.844783585" observedRunningTime="2026-03-19 12:30:30.28765176 +0000 UTC m=+968.836612464" watchObservedRunningTime="2026-03-19 12:30:30.292683956 +0000 UTC m=+968.841644660" Mar 19 12:30:30.466231 master-0 kubenswrapper[31830]: I0319 12:30:30.465597 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-m9dsd" podStartSLOduration=2.217323831 podStartE2EDuration="9.465576428s" podCreationTimestamp="2026-03-19 12:30:21 +0000 UTC" firstStartedPulling="2026-03-19 12:30:22.013686543 +0000 UTC m=+960.562647247" lastFinishedPulling="2026-03-19 12:30:29.26193914 +0000 UTC m=+967.810899844" observedRunningTime="2026-03-19 12:30:30.464540846 +0000 UTC m=+969.013501560" watchObservedRunningTime="2026-03-19 12:30:30.465576428 +0000 UTC m=+969.014537132" Mar 19 12:30:30.939604 master-0 kubenswrapper[31830]: I0319 12:30:30.939380 31830 generic.go:334] "Generic (PLEG): container finished" podID="450fdf42-489c-4403-9c52-03c51471160c" containerID="c85664a535e7ac0cca64a4395bcf2301d140ccf8523df1267c4419bbf389739d" exitCode=0 Mar 19 12:30:30.940846 master-0 kubenswrapper[31830]: I0319 12:30:30.940323 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hx5pt" 
event={"ID":"450fdf42-489c-4403-9c52-03c51471160c","Type":"ContainerDied","Data":"c85664a535e7ac0cca64a4395bcf2301d140ccf8523df1267c4419bbf389739d"} Mar 19 12:30:31.171745 master-0 kubenswrapper[31830]: I0319 12:30:31.170970 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-fxx75" Mar 19 12:30:31.951174 master-0 kubenswrapper[31830]: I0319 12:30:31.951126 31830 generic.go:334] "Generic (PLEG): container finished" podID="450fdf42-489c-4403-9c52-03c51471160c" containerID="f5c88ec42466556ad9b6321d053b3a5678e5160b14f8c15e4d41a4f9a30aff4f" exitCode=0 Mar 19 12:30:31.951727 master-0 kubenswrapper[31830]: I0319 12:30:31.951223 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hx5pt" event={"ID":"450fdf42-489c-4403-9c52-03c51471160c","Type":"ContainerDied","Data":"f5c88ec42466556ad9b6321d053b3a5678e5160b14f8c15e4d41a4f9a30aff4f"} Mar 19 12:30:31.983536 master-0 kubenswrapper[31830]: I0319 12:30:31.983488 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-778974b6d8-gqqzj" Mar 19 12:30:31.983536 master-0 kubenswrapper[31830]: I0319 12:30:31.983527 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-778974b6d8-gqqzj" Mar 19 12:30:31.988145 master-0 kubenswrapper[31830]: I0319 12:30:31.988101 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-778974b6d8-gqqzj" Mar 19 12:30:32.967036 master-0 kubenswrapper[31830]: I0319 12:30:32.966954 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hx5pt" event={"ID":"450fdf42-489c-4403-9c52-03c51471160c","Type":"ContainerStarted","Data":"c5088a6989d4b4d10b2680054217d41364668f144fb5c01452b2398f30c58ed0"} Mar 19 12:30:32.967036 master-0 kubenswrapper[31830]: I0319 12:30:32.967037 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hx5pt" event={"ID":"450fdf42-489c-4403-9c52-03c51471160c","Type":"ContainerStarted","Data":"68c6cee83fa6e2424dd8ad43095e2b46a85679c2cee06cc3e4d26f9ad11d3a7f"} Mar 19 12:30:32.967036 master-0 kubenswrapper[31830]: I0319 12:30:32.967051 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hx5pt" event={"ID":"450fdf42-489c-4403-9c52-03c51471160c","Type":"ContainerStarted","Data":"69b9435367f07d5d5a94e005da48adfe0a909cfb04fd38b150bd02e2c072bbf3"} Mar 19 12:30:32.967647 master-0 kubenswrapper[31830]: I0319 12:30:32.967066 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hx5pt" event={"ID":"450fdf42-489c-4403-9c52-03c51471160c","Type":"ContainerStarted","Data":"de55b2d6efcfa616ca79c6535c7e04e7e165d4f509e206725e9844b81bd21561"} Mar 19 12:30:32.967647 master-0 kubenswrapper[31830]: I0319 12:30:32.967079 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hx5pt" event={"ID":"450fdf42-489c-4403-9c52-03c51471160c","Type":"ContainerStarted","Data":"fc0035459114129558bf2157ec0c2eb59e110f4b46a9e3b15e1f09264bda41fb"} Mar 19 12:30:32.970872 master-0 kubenswrapper[31830]: I0319 12:30:32.970825 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-778974b6d8-gqqzj" Mar 19 12:30:33.087973 master-0 kubenswrapper[31830]: I0319 12:30:33.087928 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-845bb9776f-9p49g"] Mar 19 12:30:33.990540 master-0 kubenswrapper[31830]: I0319 
12:30:33.990407 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hx5pt" event={"ID":"450fdf42-489c-4403-9c52-03c51471160c","Type":"ContainerStarted","Data":"424b5b552102e9d08449621078b5c5e1ba7d2271d3067718086d406335dfd24d"} Mar 19 12:30:34.029868 master-0 kubenswrapper[31830]: I0319 12:30:34.029766 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-hx5pt" podStartSLOduration=6.778716735 podStartE2EDuration="16.029743873s" podCreationTimestamp="2026-03-19 12:30:18 +0000 UTC" firstStartedPulling="2026-03-19 12:30:20.014832533 +0000 UTC m=+958.563793237" lastFinishedPulling="2026-03-19 12:30:29.265859671 +0000 UTC m=+967.814820375" observedRunningTime="2026-03-19 12:30:34.022175388 +0000 UTC m=+972.571136122" watchObservedRunningTime="2026-03-19 12:30:34.029743873 +0000 UTC m=+972.578704587" Mar 19 12:30:34.868711 master-0 kubenswrapper[31830]: I0319 12:30:34.868640 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-hx5pt" Mar 19 12:30:34.909482 master-0 kubenswrapper[31830]: I0319 12:30:34.909392 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-hx5pt" Mar 19 12:30:34.999445 master-0 kubenswrapper[31830]: I0319 12:30:34.999386 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-hx5pt" Mar 19 12:30:36.752346 master-0 kubenswrapper[31830]: I0319 12:30:36.752297 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-kf4wb" Mar 19 12:30:39.888039 master-0 kubenswrapper[31830]: I0319 12:30:39.887992 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qltq6" Mar 19 12:30:39.999552 master-0 kubenswrapper[31830]: I0319 12:30:39.999487 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-r5pn2" Mar 19 12:30:42.117622 master-0 kubenswrapper[31830]: I0319 12:30:42.117554 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-4f74g" Mar 19 12:30:44.177832 master-0 kubenswrapper[31830]: I0319 12:30:44.172609 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/vg-manager-p749d"] Mar 19 12:30:44.181544 master-0 kubenswrapper[31830]: I0319 12:30:44.181506 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.184337 master-0 kubenswrapper[31830]: I0319 12:30:44.184056 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert" Mar 19 12:30:44.196885 master-0 kubenswrapper[31830]: I0319 12:30:44.195721 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-p749d"] Mar 19 12:30:44.330701 master-0 kubenswrapper[31830]: I0319 12:30:44.330618 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7-device-dir\") pod \"vg-manager-p749d\" (UID: \"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7\") " pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.330701 master-0 kubenswrapper[31830]: I0319 12:30:44.330703 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7-registration-dir\") pod \"vg-manager-p749d\" (UID: \"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7\") " pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.331002 master-0 kubenswrapper[31830]: I0319 12:30:44.330891 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7-lvmd-config\") pod \"vg-manager-p749d\" (UID: \"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7\") " pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.331045 master-0 kubenswrapper[31830]: I0319 12:30:44.331025 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7-run-udev\") pod \"vg-manager-p749d\" (UID: \"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7\") " pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.331095 master-0 kubenswrapper[31830]: I0319 12:30:44.331074 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7-file-lock-dir\") pod \"vg-manager-p749d\" (UID: \"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7\") " pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.331149 master-0 kubenswrapper[31830]: I0319 12:30:44.331134 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7-metrics-cert\") pod \"vg-manager-p749d\" (UID: \"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7\") " pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.331189 master-0 kubenswrapper[31830]: I0319 12:30:44.331169 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7-pod-volumes-dir\") pod \"vg-manager-p749d\" (UID: \"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7\") " pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.331232 master-0 kubenswrapper[31830]: I0319 12:30:44.331202 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7-node-plugin-dir\") pod 
\"vg-manager-p749d\" (UID: \"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7\") " pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.331232 master-0 kubenswrapper[31830]: I0319 12:30:44.331228 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7-csi-plugin-dir\") pod \"vg-manager-p749d\" (UID: \"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7\") " pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.331396 master-0 kubenswrapper[31830]: I0319 12:30:44.331343 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7-sys\") pod \"vg-manager-p749d\" (UID: \"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7\") " pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.331703 master-0 kubenswrapper[31830]: I0319 12:30:44.331657 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct49v\" (UniqueName: \"kubernetes.io/projected/79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7-kube-api-access-ct49v\") pod \"vg-manager-p749d\" (UID: \"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7\") " pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.433684 master-0 kubenswrapper[31830]: I0319 12:30:44.433545 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7-sys\") pod \"vg-manager-p749d\" (UID: \"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7\") " pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.433916 master-0 kubenswrapper[31830]: I0319 12:30:44.433713 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7-sys\") pod \"vg-manager-p749d\" (UID: \"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7\") " pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.433916 master-0 kubenswrapper[31830]: I0319 12:30:44.433862 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct49v\" (UniqueName: \"kubernetes.io/projected/79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7-kube-api-access-ct49v\") pod \"vg-manager-p749d\" (UID: \"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7\") " pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.434009 master-0 kubenswrapper[31830]: I0319 12:30:44.433987 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7-device-dir\") pod \"vg-manager-p749d\" (UID: \"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7\") " pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.434048 master-0 kubenswrapper[31830]: I0319 12:30:44.434040 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7-registration-dir\") pod \"vg-manager-p749d\" (UID: \"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7\") " pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.434134 master-0 kubenswrapper[31830]: I0319 12:30:44.434108 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7-device-dir\") pod \"vg-manager-p749d\" (UID: \"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7\") " 
pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.434134 master-0 kubenswrapper[31830]: I0319 12:30:44.434114 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7-lvmd-config\") pod \"vg-manager-p749d\" (UID: \"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7\") " pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.434279 master-0 kubenswrapper[31830]: I0319 12:30:44.434248 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7-run-udev\") pod \"vg-manager-p749d\" (UID: \"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7\") " pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.434354 master-0 kubenswrapper[31830]: I0319 12:30:44.434277 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7-registration-dir\") pod \"vg-manager-p749d\" (UID: \"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7\") " pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.434354 master-0 kubenswrapper[31830]: I0319 12:30:44.434331 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7-file-lock-dir\") pod \"vg-manager-p749d\" (UID: \"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7\") " pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.434452 master-0 kubenswrapper[31830]: I0319 12:30:44.434334 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7-run-udev\") pod \"vg-manager-p749d\" (UID: \"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7\") " pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.434452 master-0 kubenswrapper[31830]: I0319 12:30:44.434388 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7-metrics-cert\") pod \"vg-manager-p749d\" (UID: \"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7\") " pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.434452 master-0 kubenswrapper[31830]: I0319 12:30:44.434286 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7-lvmd-config\") pod \"vg-manager-p749d\" (UID: \"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7\") " pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.434548 master-0 kubenswrapper[31830]: I0319 12:30:44.434470 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7-pod-volumes-dir\") pod \"vg-manager-p749d\" (UID: \"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7\") " pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.434548 master-0 kubenswrapper[31830]: I0319 12:30:44.434514 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7-node-plugin-dir\") pod \"vg-manager-p749d\" (UID: \"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7\") " pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.434610 master-0 kubenswrapper[31830]: I0319 12:30:44.434550 31830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7-csi-plugin-dir\") pod \"vg-manager-p749d\" (UID: \"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7\") " pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.434610 master-0 kubenswrapper[31830]: I0319 12:30:44.434588 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7-file-lock-dir\") pod \"vg-manager-p749d\" (UID: \"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7\") " pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.434843 master-0 kubenswrapper[31830]: I0319 12:30:44.434809 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7-pod-volumes-dir\") pod \"vg-manager-p749d\" (UID: \"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7\") " pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.434928 master-0 kubenswrapper[31830]: I0319 12:30:44.434904 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7-csi-plugin-dir\") pod \"vg-manager-p749d\" (UID: \"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7\") " pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.435019 master-0 kubenswrapper[31830]: I0319 12:30:44.434997 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7-node-plugin-dir\") pod \"vg-manager-p749d\" (UID: \"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7\") " pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.438887 master-0 kubenswrapper[31830]: I0319 12:30:44.438001 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7-metrics-cert\") pod \"vg-manager-p749d\" (UID: \"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7\") " pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.449742 master-0 kubenswrapper[31830]: I0319 12:30:44.449691 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct49v\" (UniqueName: \"kubernetes.io/projected/79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7-kube-api-access-ct49v\") pod \"vg-manager-p749d\" (UID: \"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7\") " pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.498972 master-0 kubenswrapper[31830]: I0319 12:30:44.498906 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:44.959367 master-0 kubenswrapper[31830]: I0319 12:30:44.959320 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-p749d"] Mar 19 12:30:44.966963 master-0 kubenswrapper[31830]: W0319 12:30:44.966910 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79fd96c6_d0cf_4e22_8c87_6f4d7abd8cb7.slice/crio-b24f0fdb53bd90a5114e32f01214ef3b97c2eef01298a1b8bdc85715897551eb WatchSource:0}: Error finding container b24f0fdb53bd90a5114e32f01214ef3b97c2eef01298a1b8bdc85715897551eb: Status 404 returned error can't find the container with id b24f0fdb53bd90a5114e32f01214ef3b97c2eef01298a1b8bdc85715897551eb Mar 19 12:30:45.095933 master-0 kubenswrapper[31830]: I0319 12:30:45.094466 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-p749d" event={"ID":"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7","Type":"ContainerStarted","Data":"b24f0fdb53bd90a5114e32f01214ef3b97c2eef01298a1b8bdc85715897551eb"} Mar 19 12:30:46.103421 master-0 kubenswrapper[31830]: I0319 12:30:46.103376 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-p749d" event={"ID":"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7","Type":"ContainerStarted","Data":"f29d4d757aef5d907df26a6a7481393bd117b85a69150599d65108a6d755f68f"} Mar 19 12:30:46.130455 master-0 kubenswrapper[31830]: I0319 12:30:46.130347 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-p749d" podStartSLOduration=2.130318494 podStartE2EDuration="2.130318494s" podCreationTimestamp="2026-03-19 12:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:30:46.125434191 +0000 UTC m=+984.674394895" watchObservedRunningTime="2026-03-19 12:30:46.130318494 +0000 UTC m=+984.679279198" Mar 19 12:30:47.125199 master-0 kubenswrapper[31830]: I0319 12:30:47.125086 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-p749d_79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7/vg-manager/0.log" Mar 19 12:30:47.125199 master-0 kubenswrapper[31830]: I0319 12:30:47.125140 31830 generic.go:334] "Generic (PLEG): container finished" podID="79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7" containerID="f29d4d757aef5d907df26a6a7481393bd117b85a69150599d65108a6d755f68f" exitCode=1 Mar 19 12:30:47.125199 master-0 kubenswrapper[31830]: I0319 12:30:47.125172 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-p749d" event={"ID":"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7","Type":"ContainerDied","Data":"f29d4d757aef5d907df26a6a7481393bd117b85a69150599d65108a6d755f68f"} Mar 19 12:30:47.126291 master-0 kubenswrapper[31830]: I0319 12:30:47.126256 31830 scope.go:117] "RemoveContainer" containerID="f29d4d757aef5d907df26a6a7481393bd117b85a69150599d65108a6d755f68f" Mar 19 12:30:47.589300 master-0 kubenswrapper[31830]: I0319 12:30:47.589210 31830 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock" Mar 19 12:30:48.135199 master-0 kubenswrapper[31830]: I0319 12:30:48.134775 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-p749d_79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7/vg-manager/0.log" Mar 19 12:30:48.135199 master-0 kubenswrapper[31830]: 
I0319 12:30:48.134890 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-p749d" event={"ID":"79fd96c6-d0cf-4e22-8c87-6f4d7abd8cb7","Type":"ContainerStarted","Data":"bafbe1df7f1a3a2e1d51f2214713b944a731a44b802511d9b7d6fc70dadea481"} Mar 19 12:30:48.279185 master-0 kubenswrapper[31830]: I0319 12:30:48.278947 31830 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2026-03-19T12:30:47.589256579Z","Handler":null,"Name":""} Mar 19 12:30:48.281992 master-0 kubenswrapper[31830]: I0319 12:30:48.281954 31830 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock versions: 1.0.0 Mar 19 12:30:48.282091 master-0 kubenswrapper[31830]: I0319 12:30:48.281999 31830 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock Mar 19 12:30:49.870847 master-0 kubenswrapper[31830]: I0319 12:30:49.870780 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-hx5pt" Mar 19 12:30:54.499914 master-0 kubenswrapper[31830]: I0319 12:30:54.499837 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:54.502680 master-0 kubenswrapper[31830]: I0319 12:30:54.502625 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:55.200864 master-0 kubenswrapper[31830]: I0319 12:30:55.200764 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:55.201737 master-0 kubenswrapper[31830]: I0319 12:30:55.201711 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-p749d" Mar 19 12:30:57.155285 master-0 kubenswrapper[31830]: I0319 12:30:57.155223 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-gxqv7"] Mar 19 12:30:57.158042 master-0 kubenswrapper[31830]: I0319 12:30:57.157992 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-gxqv7" Mar 19 12:30:57.159992 master-0 kubenswrapper[31830]: I0319 12:30:57.159922 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 19 12:30:57.160546 master-0 kubenswrapper[31830]: I0319 12:30:57.160518 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 19 12:30:57.178654 master-0 kubenswrapper[31830]: I0319 12:30:57.178559 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gxqv7"] Mar 19 12:30:57.292369 master-0 kubenswrapper[31830]: I0319 12:30:57.292301 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bshlb\" (UniqueName: \"kubernetes.io/projected/4bdab8cb-e11a-4b5d-9e1e-3cf37ce23ab8-kube-api-access-bshlb\") pod \"openstack-operator-index-gxqv7\" (UID: \"4bdab8cb-e11a-4b5d-9e1e-3cf37ce23ab8\") " pod="openstack-operators/openstack-operator-index-gxqv7" Mar 19 12:30:57.394447 master-0 kubenswrapper[31830]: I0319 12:30:57.394359 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bshlb\" (UniqueName: \"kubernetes.io/projected/4bdab8cb-e11a-4b5d-9e1e-3cf37ce23ab8-kube-api-access-bshlb\") pod \"openstack-operator-index-gxqv7\" (UID: \"4bdab8cb-e11a-4b5d-9e1e-3cf37ce23ab8\") " pod="openstack-operators/openstack-operator-index-gxqv7" Mar 19 12:30:57.412560 master-0 kubenswrapper[31830]: I0319 12:30:57.412474 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bshlb\" (UniqueName: \"kubernetes.io/projected/4bdab8cb-e11a-4b5d-9e1e-3cf37ce23ab8-kube-api-access-bshlb\") pod \"openstack-operator-index-gxqv7\" (UID: \"4bdab8cb-e11a-4b5d-9e1e-3cf37ce23ab8\") " pod="openstack-operators/openstack-operator-index-gxqv7" Mar 19 12:30:57.483187 master-0 kubenswrapper[31830]: I0319 12:30:57.483134 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-gxqv7" Mar 19 12:30:58.076081 master-0 kubenswrapper[31830]: W0319 12:30:58.076032 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bdab8cb_e11a_4b5d_9e1e_3cf37ce23ab8.slice/crio-43ba1d865c33f64a553786acfdf2a14e09492fd194002c1140ca9a08b8b6e7f1 WatchSource:0}: Error finding container 43ba1d865c33f64a553786acfdf2a14e09492fd194002c1140ca9a08b8b6e7f1: Status 404 returned error can't find the container with id 43ba1d865c33f64a553786acfdf2a14e09492fd194002c1140ca9a08b8b6e7f1 Mar 19 12:30:58.076587 master-0 kubenswrapper[31830]: I0319 12:30:58.076532 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gxqv7"] Mar 19 12:30:58.148147 master-0 kubenswrapper[31830]: I0319 12:30:58.148077 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-845bb9776f-9p49g" podUID="8168e523-f491-4c1d-9588-ae2963e93927" containerName="console" containerID="cri-o://3cef942edc4c49ce47897eb1e611d525880e69e956d225cbab8ebfa47c4c4e55" gracePeriod=15 Mar 19 12:30:58.251992 master-0 kubenswrapper[31830]: I0319 12:30:58.251914 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gxqv7" event={"ID":"4bdab8cb-e11a-4b5d-9e1e-3cf37ce23ab8","Type":"ContainerStarted","Data":"43ba1d865c33f64a553786acfdf2a14e09492fd194002c1140ca9a08b8b6e7f1"} Mar 19 12:30:58.591388 master-0 kubenswrapper[31830]: I0319 12:30:58.591263 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-845bb9776f-9p49g_8168e523-f491-4c1d-9588-ae2963e93927/console/0.log" Mar 19 12:30:58.591388 master-0 kubenswrapper[31830]: I0319 12:30:58.591330 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-845bb9776f-9p49g" Mar 19 12:30:58.719479 master-0 kubenswrapper[31830]: I0319 12:30:58.719398 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjbvj\" (UniqueName: \"kubernetes.io/projected/8168e523-f491-4c1d-9588-ae2963e93927-kube-api-access-pjbvj\") pod \"8168e523-f491-4c1d-9588-ae2963e93927\" (UID: \"8168e523-f491-4c1d-9588-ae2963e93927\") " Mar 19 12:30:58.719773 master-0 kubenswrapper[31830]: I0319 12:30:58.719547 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8168e523-f491-4c1d-9588-ae2963e93927-console-oauth-config\") pod \"8168e523-f491-4c1d-9588-ae2963e93927\" (UID: \"8168e523-f491-4c1d-9588-ae2963e93927\") " Mar 19 12:30:58.719773 master-0 kubenswrapper[31830]: I0319 12:30:58.719613 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8168e523-f491-4c1d-9588-ae2963e93927-oauth-serving-cert\") pod \"8168e523-f491-4c1d-9588-ae2963e93927\" (UID: \"8168e523-f491-4c1d-9588-ae2963e93927\") " Mar 19 12:30:58.719773 master-0 kubenswrapper[31830]: I0319 12:30:58.719696 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8168e523-f491-4c1d-9588-ae2963e93927-console-serving-cert\") pod \"8168e523-f491-4c1d-9588-ae2963e93927\" (UID: \"8168e523-f491-4c1d-9588-ae2963e93927\") " Mar 19 12:30:58.719773 master-0 kubenswrapper[31830]: I0319 12:30:58.719732 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8168e523-f491-4c1d-9588-ae2963e93927-service-ca\") pod \"8168e523-f491-4c1d-9588-ae2963e93927\" (UID: \"8168e523-f491-4c1d-9588-ae2963e93927\") " Mar 19 12:30:58.720022 master-0 kubenswrapper[31830]: I0319 12:30:58.719819 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8168e523-f491-4c1d-9588-ae2963e93927-console-config\") pod \"8168e523-f491-4c1d-9588-ae2963e93927\" (UID: \"8168e523-f491-4c1d-9588-ae2963e93927\") " Mar 19 12:30:58.720022 master-0 kubenswrapper[31830]: I0319 12:30:58.719861 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8168e523-f491-4c1d-9588-ae2963e93927-trusted-ca-bundle\") pod \"8168e523-f491-4c1d-9588-ae2963e93927\" (UID: \"8168e523-f491-4c1d-9588-ae2963e93927\") " Mar 19 12:30:58.721336 master-0 kubenswrapper[31830]: I0319 12:30:58.720819 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8168e523-f491-4c1d-9588-ae2963e93927-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "8168e523-f491-4c1d-9588-ae2963e93927" (UID: "8168e523-f491-4c1d-9588-ae2963e93927"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:30:58.721984 master-0 kubenswrapper[31830]: I0319 12:30:58.721786 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8168e523-f491-4c1d-9588-ae2963e93927-console-config" (OuterVolumeSpecName: "console-config") pod "8168e523-f491-4c1d-9588-ae2963e93927" (UID: "8168e523-f491-4c1d-9588-ae2963e93927"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:30:58.721984 master-0 kubenswrapper[31830]: I0319 12:30:58.721937 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8168e523-f491-4c1d-9588-ae2963e93927-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "8168e523-f491-4c1d-9588-ae2963e93927" (UID: "8168e523-f491-4c1d-9588-ae2963e93927"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:30:58.722177 master-0 kubenswrapper[31830]: I0319 12:30:58.722166 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8168e523-f491-4c1d-9588-ae2963e93927-service-ca" (OuterVolumeSpecName: "service-ca") pod "8168e523-f491-4c1d-9588-ae2963e93927" (UID: "8168e523-f491-4c1d-9588-ae2963e93927"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:30:58.723295 master-0 kubenswrapper[31830]: I0319 12:30:58.723241 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8168e523-f491-4c1d-9588-ae2963e93927-kube-api-access-pjbvj" (OuterVolumeSpecName: "kube-api-access-pjbvj") pod "8168e523-f491-4c1d-9588-ae2963e93927" (UID: "8168e523-f491-4c1d-9588-ae2963e93927"). InnerVolumeSpecName "kube-api-access-pjbvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:30:58.723556 master-0 kubenswrapper[31830]: I0319 12:30:58.723432 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8168e523-f491-4c1d-9588-ae2963e93927-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "8168e523-f491-4c1d-9588-ae2963e93927" (UID: "8168e523-f491-4c1d-9588-ae2963e93927"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:30:58.726612 master-0 kubenswrapper[31830]: I0319 12:30:58.725970 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8168e523-f491-4c1d-9588-ae2963e93927-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "8168e523-f491-4c1d-9588-ae2963e93927" (UID: "8168e523-f491-4c1d-9588-ae2963e93927"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:30:58.822450 master-0 kubenswrapper[31830]: I0319 12:30:58.822284 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjbvj\" (UniqueName: \"kubernetes.io/projected/8168e523-f491-4c1d-9588-ae2963e93927-kube-api-access-pjbvj\") on node \"master-0\" DevicePath \"\"" Mar 19 12:30:58.822450 master-0 kubenswrapper[31830]: I0319 12:30:58.822319 31830 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8168e523-f491-4c1d-9588-ae2963e93927-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 19 12:30:58.822450 master-0 kubenswrapper[31830]: I0319 12:30:58.822332 31830 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8168e523-f491-4c1d-9588-ae2963e93927-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 12:30:58.822450 master-0 kubenswrapper[31830]: I0319 12:30:58.822342 31830 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8168e523-f491-4c1d-9588-ae2963e93927-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 12:30:58.822450 master-0 kubenswrapper[31830]: I0319 12:30:58.822350 31830 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8168e523-f491-4c1d-9588-ae2963e93927-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 19 12:30:58.822450 master-0 kubenswrapper[31830]: I0319 12:30:58.822358 31830 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8168e523-f491-4c1d-9588-ae2963e93927-console-config\") on node \"master-0\" DevicePath \"\"" Mar 19 12:30:58.822450 master-0 kubenswrapper[31830]: I0319 12:30:58.822367 31830 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8168e523-f491-4c1d-9588-ae2963e93927-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:30:59.265819 master-0 kubenswrapper[31830]: I0319 12:30:59.262240 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-845bb9776f-9p49g_8168e523-f491-4c1d-9588-ae2963e93927/console/0.log" Mar 19 12:30:59.265819 master-0 kubenswrapper[31830]: I0319 12:30:59.262294 31830 generic.go:334] "Generic (PLEG): container finished" podID="8168e523-f491-4c1d-9588-ae2963e93927" containerID="3cef942edc4c49ce47897eb1e611d525880e69e956d225cbab8ebfa47c4c4e55" exitCode=2 Mar 19 12:30:59.265819 master-0 kubenswrapper[31830]: I0319 12:30:59.262325 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-845bb9776f-9p49g" event={"ID":"8168e523-f491-4c1d-9588-ae2963e93927","Type":"ContainerDied","Data":"3cef942edc4c49ce47897eb1e611d525880e69e956d225cbab8ebfa47c4c4e55"} Mar 19 12:30:59.265819 master-0 kubenswrapper[31830]: I0319 12:30:59.262350 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-845bb9776f-9p49g" event={"ID":"8168e523-f491-4c1d-9588-ae2963e93927","Type":"ContainerDied","Data":"407c9f4b9a56ecc1169e1b0477f4da5e663759480e30a4c3ad0776841eb3d82f"} Mar 19 12:30:59.265819 master-0 kubenswrapper[31830]: I0319 12:30:59.262365 31830 scope.go:117] "RemoveContainer" containerID="3cef942edc4c49ce47897eb1e611d525880e69e956d225cbab8ebfa47c4c4e55" Mar 19 12:30:59.265819 master-0 kubenswrapper[31830]: I0319 12:30:59.262473 31830 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-845bb9776f-9p49g" Mar 19 12:30:59.283185 master-0 kubenswrapper[31830]: I0319 12:30:59.283142 31830 scope.go:117] "RemoveContainer" containerID="3cef942edc4c49ce47897eb1e611d525880e69e956d225cbab8ebfa47c4c4e55" Mar 19 12:30:59.283606 master-0 kubenswrapper[31830]: E0319 12:30:59.283575 31830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cef942edc4c49ce47897eb1e611d525880e69e956d225cbab8ebfa47c4c4e55\": container with ID starting with 3cef942edc4c49ce47897eb1e611d525880e69e956d225cbab8ebfa47c4c4e55 not found: ID does not exist" containerID="3cef942edc4c49ce47897eb1e611d525880e69e956d225cbab8ebfa47c4c4e55" Mar 19 12:30:59.283654 master-0 kubenswrapper[31830]: I0319 12:30:59.283611 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cef942edc4c49ce47897eb1e611d525880e69e956d225cbab8ebfa47c4c4e55"} err="failed to get container status \"3cef942edc4c49ce47897eb1e611d525880e69e956d225cbab8ebfa47c4c4e55\": rpc error: code = NotFound desc = could not find container \"3cef942edc4c49ce47897eb1e611d525880e69e956d225cbab8ebfa47c4c4e55\": container with ID starting with 3cef942edc4c49ce47897eb1e611d525880e69e956d225cbab8ebfa47c4c4e55 not found: ID does not exist" Mar 19 12:30:59.294960 master-0 kubenswrapper[31830]: I0319 12:30:59.294890 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-845bb9776f-9p49g"] Mar 19 12:30:59.302659 master-0 kubenswrapper[31830]: I0319 12:30:59.302601 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-845bb9776f-9p49g"] Mar 19 12:30:59.689354 master-0 kubenswrapper[31830]: I0319 12:30:59.689303 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8168e523-f491-4c1d-9588-ae2963e93927" path="/var/lib/kubelet/pods/8168e523-f491-4c1d-9588-ae2963e93927/volumes" Mar 19 12:31:00.274362 master-0 kubenswrapper[31830]: I0319 12:31:00.274307 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gxqv7" event={"ID":"4bdab8cb-e11a-4b5d-9e1e-3cf37ce23ab8","Type":"ContainerStarted","Data":"66984888a6a21883e3ede080cf813fec3d1ac499dbb2d61e1322861d033ed142"} Mar 19 12:31:00.296975 master-0 kubenswrapper[31830]: I0319 12:31:00.296837 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-gxqv7" podStartSLOduration=1.849431607 podStartE2EDuration="3.296780263s" podCreationTimestamp="2026-03-19 12:30:57 +0000 UTC" firstStartedPulling="2026-03-19 12:30:58.080082798 +0000 UTC m=+996.629043502" lastFinishedPulling="2026-03-19 12:30:59.527431454 +0000 UTC m=+998.076392158" observedRunningTime="2026-03-19 12:31:00.29216791 +0000 UTC m=+998.841128654" watchObservedRunningTime="2026-03-19 12:31:00.296780263 +0000 UTC m=+998.845741007" Mar 19 12:31:07.484096 master-0 kubenswrapper[31830]: I0319 12:31:07.484034 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-gxqv7" Mar 19 12:31:07.484965 master-0 kubenswrapper[31830]: I0319 12:31:07.484745 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-gxqv7" Mar 19 12:31:07.512204 master-0 kubenswrapper[31830]: I0319 12:31:07.512151 31830 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack-operators/openstack-operator-index-gxqv7" Mar 19 12:31:08.361913 master-0 kubenswrapper[31830]: I0319 12:31:08.361868 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-gxqv7" Mar 19 12:31:14.877713 master-0 kubenswrapper[31830]: I0319 12:31:14.877590 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cmbhs5"] Mar 19 12:31:14.878412 master-0 kubenswrapper[31830]: E0319 12:31:14.878026 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8168e523-f491-4c1d-9588-ae2963e93927" containerName="console" Mar 19 12:31:14.878412 master-0 kubenswrapper[31830]: I0319 12:31:14.878043 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8168e523-f491-4c1d-9588-ae2963e93927" containerName="console" Mar 19 12:31:14.878412 master-0 kubenswrapper[31830]: I0319 12:31:14.878303 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="8168e523-f491-4c1d-9588-ae2963e93927" containerName="console" Mar 19 12:31:14.879791 master-0 kubenswrapper[31830]: I0319 12:31:14.879763 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cmbhs5" Mar 19 12:31:14.895740 master-0 kubenswrapper[31830]: I0319 12:31:14.895692 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cmbhs5"] Mar 19 12:31:15.005082 master-0 kubenswrapper[31830]: I0319 12:31:15.005015 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0dda4422-e7ac-48a5-8e06-5ebab86395ab-bundle\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cmbhs5\" (UID: \"0dda4422-e7ac-48a5-8e06-5ebab86395ab\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cmbhs5" Mar 19 12:31:15.005353 master-0 kubenswrapper[31830]: I0319 12:31:15.005137 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwq9z\" (UniqueName: \"kubernetes.io/projected/0dda4422-e7ac-48a5-8e06-5ebab86395ab-kube-api-access-bwq9z\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cmbhs5\" (UID: \"0dda4422-e7ac-48a5-8e06-5ebab86395ab\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cmbhs5" Mar 19 12:31:15.005353 master-0 kubenswrapper[31830]: I0319 12:31:15.005167 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0dda4422-e7ac-48a5-8e06-5ebab86395ab-util\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cmbhs5\" (UID: \"0dda4422-e7ac-48a5-8e06-5ebab86395ab\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cmbhs5" Mar 19 12:31:15.106560 master-0 kubenswrapper[31830]: I0319 12:31:15.106497 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0dda4422-e7ac-48a5-8e06-5ebab86395ab-bundle\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cmbhs5\" (UID: \"0dda4422-e7ac-48a5-8e06-5ebab86395ab\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cmbhs5" 
Mar 19 12:31:15.106765 master-0 kubenswrapper[31830]: I0319 12:31:15.106707 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwq9z\" (UniqueName: \"kubernetes.io/projected/0dda4422-e7ac-48a5-8e06-5ebab86395ab-kube-api-access-bwq9z\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cmbhs5\" (UID: \"0dda4422-e7ac-48a5-8e06-5ebab86395ab\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cmbhs5" Mar 19 12:31:15.106765 master-0 kubenswrapper[31830]: I0319 12:31:15.106738 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0dda4422-e7ac-48a5-8e06-5ebab86395ab-util\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cmbhs5\" (UID: \"0dda4422-e7ac-48a5-8e06-5ebab86395ab\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cmbhs5" Mar 19 12:31:15.107134 master-0 kubenswrapper[31830]: I0319 12:31:15.107095 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0dda4422-e7ac-48a5-8e06-5ebab86395ab-bundle\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cmbhs5\" (UID: \"0dda4422-e7ac-48a5-8e06-5ebab86395ab\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cmbhs5" Mar 19 12:31:15.107134 master-0 kubenswrapper[31830]: I0319 12:31:15.107120 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0dda4422-e7ac-48a5-8e06-5ebab86395ab-util\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cmbhs5\" (UID: \"0dda4422-e7ac-48a5-8e06-5ebab86395ab\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cmbhs5" Mar 19 12:31:15.121628 master-0 kubenswrapper[31830]: I0319 12:31:15.121593 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwq9z\" (UniqueName: \"kubernetes.io/projected/0dda4422-e7ac-48a5-8e06-5ebab86395ab-kube-api-access-bwq9z\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cmbhs5\" (UID: \"0dda4422-e7ac-48a5-8e06-5ebab86395ab\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cmbhs5" Mar 19 12:31:15.199749 master-0 kubenswrapper[31830]: I0319 12:31:15.199600 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cmbhs5" Mar 19 12:31:15.647207 master-0 kubenswrapper[31830]: I0319 12:31:15.647148 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cmbhs5"] Mar 19 12:31:16.418033 master-0 kubenswrapper[31830]: I0319 12:31:16.417890 31830 generic.go:334] "Generic (PLEG): container finished" podID="0dda4422-e7ac-48a5-8e06-5ebab86395ab" containerID="04456608513f0483dd342b65ab245e8effd4c2d5c492f8a15c17d02221f68f58" exitCode=0 Mar 19 12:31:16.418033 master-0 kubenswrapper[31830]: I0319 12:31:16.417948 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cmbhs5" event={"ID":"0dda4422-e7ac-48a5-8e06-5ebab86395ab","Type":"ContainerDied","Data":"04456608513f0483dd342b65ab245e8effd4c2d5c492f8a15c17d02221f68f58"} Mar 19 12:31:16.418033 master-0 kubenswrapper[31830]: I0319 12:31:16.417976 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cmbhs5" event={"ID":"0dda4422-e7ac-48a5-8e06-5ebab86395ab","Type":"ContainerStarted","Data":"d457a852c4ba4a07c6d6b561650e9ab4d56bd1381ce30d2be677845ddaec0b1d"} Mar 19 12:31:18.445632 master-0 kubenswrapper[31830]: I0319 12:31:18.445576 31830 generic.go:334] "Generic (PLEG): container finished" podID="0dda4422-e7ac-48a5-8e06-5ebab86395ab" containerID="f772560505637aaf8ffbc3348cf17a22bbf0dac08b73ea5ded6e3cf3f33ed6af" exitCode=0 Mar 19 12:31:18.445632 master-0 kubenswrapper[31830]: I0319 12:31:18.445630 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cmbhs5" event={"ID":"0dda4422-e7ac-48a5-8e06-5ebab86395ab","Type":"ContainerDied","Data":"f772560505637aaf8ffbc3348cf17a22bbf0dac08b73ea5ded6e3cf3f33ed6af"} Mar 19 12:31:19.457407 master-0 kubenswrapper[31830]: I0319 12:31:19.457350 31830 generic.go:334] "Generic (PLEG): container finished" podID="0dda4422-e7ac-48a5-8e06-5ebab86395ab" containerID="1c836d9268edf784f8e0a28506453aa8ee225c0d6bfa71c7cfed0f3eb05a2a6e" exitCode=0 Mar 19 12:31:19.457407 master-0 kubenswrapper[31830]: I0319 12:31:19.457400 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cmbhs5" event={"ID":"0dda4422-e7ac-48a5-8e06-5ebab86395ab","Type":"ContainerDied","Data":"1c836d9268edf784f8e0a28506453aa8ee225c0d6bfa71c7cfed0f3eb05a2a6e"} Mar 19 12:31:20.852690 master-0 kubenswrapper[31830]: I0319 12:31:20.852647 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cmbhs5" Mar 19 12:31:21.012537 master-0 kubenswrapper[31830]: I0319 12:31:21.012501 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0dda4422-e7ac-48a5-8e06-5ebab86395ab-util\") pod \"0dda4422-e7ac-48a5-8e06-5ebab86395ab\" (UID: \"0dda4422-e7ac-48a5-8e06-5ebab86395ab\") " Mar 19 12:31:21.012869 master-0 kubenswrapper[31830]: I0319 12:31:21.012841 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0dda4422-e7ac-48a5-8e06-5ebab86395ab-bundle\") pod \"0dda4422-e7ac-48a5-8e06-5ebab86395ab\" (UID: \"0dda4422-e7ac-48a5-8e06-5ebab86395ab\") " Mar 19 12:31:21.013183 master-0 kubenswrapper[31830]: I0319 12:31:21.013167 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwq9z\" (UniqueName: \"kubernetes.io/projected/0dda4422-e7ac-48a5-8e06-5ebab86395ab-kube-api-access-bwq9z\") pod \"0dda4422-e7ac-48a5-8e06-5ebab86395ab\" (UID: \"0dda4422-e7ac-48a5-8e06-5ebab86395ab\") " Mar 19 12:31:21.013504 master-0 kubenswrapper[31830]: I0319 12:31:21.013447 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dda4422-e7ac-48a5-8e06-5ebab86395ab-bundle" (OuterVolumeSpecName: "bundle") pod "0dda4422-e7ac-48a5-8e06-5ebab86395ab" (UID: "0dda4422-e7ac-48a5-8e06-5ebab86395ab"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 12:31:21.013835 master-0 kubenswrapper[31830]: I0319 12:31:21.013813 31830 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0dda4422-e7ac-48a5-8e06-5ebab86395ab-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:31:21.017026 master-0 kubenswrapper[31830]: I0319 12:31:21.016990 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dda4422-e7ac-48a5-8e06-5ebab86395ab-kube-api-access-bwq9z" (OuterVolumeSpecName: "kube-api-access-bwq9z") pod "0dda4422-e7ac-48a5-8e06-5ebab86395ab" (UID: "0dda4422-e7ac-48a5-8e06-5ebab86395ab"). InnerVolumeSpecName "kube-api-access-bwq9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:31:21.022269 master-0 kubenswrapper[31830]: I0319 12:31:21.022236 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dda4422-e7ac-48a5-8e06-5ebab86395ab-util" (OuterVolumeSpecName: "util") pod "0dda4422-e7ac-48a5-8e06-5ebab86395ab" (UID: "0dda4422-e7ac-48a5-8e06-5ebab86395ab"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 12:31:21.115182 master-0 kubenswrapper[31830]: I0319 12:31:21.115056 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwq9z\" (UniqueName: \"kubernetes.io/projected/0dda4422-e7ac-48a5-8e06-5ebab86395ab-kube-api-access-bwq9z\") on node \"master-0\" DevicePath \"\"" Mar 19 12:31:21.115182 master-0 kubenswrapper[31830]: I0319 12:31:21.115103 31830 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0dda4422-e7ac-48a5-8e06-5ebab86395ab-util\") on node \"master-0\" DevicePath \"\"" Mar 19 12:31:21.478019 master-0 kubenswrapper[31830]: I0319 12:31:21.477960 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cmbhs5" event={"ID":"0dda4422-e7ac-48a5-8e06-5ebab86395ab","Type":"ContainerDied","Data":"d457a852c4ba4a07c6d6b561650e9ab4d56bd1381ce30d2be677845ddaec0b1d"} Mar 19 12:31:21.478019 master-0 kubenswrapper[31830]: I0319 12:31:21.478009 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cmbhs5" Mar 19 12:31:21.478359 master-0 kubenswrapper[31830]: I0319 12:31:21.478013 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d457a852c4ba4a07c6d6b561650e9ab4d56bd1381ce30d2be677845ddaec0b1d" Mar 19 12:31:31.603484 master-0 kubenswrapper[31830]: I0319 12:31:31.603417 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-b85c4d696-mlg8z"] Mar 19 12:31:31.604245 master-0 kubenswrapper[31830]: E0319 12:31:31.603866 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dda4422-e7ac-48a5-8e06-5ebab86395ab" containerName="pull" Mar 19 12:31:31.604245 master-0 kubenswrapper[31830]: I0319 12:31:31.603884 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dda4422-e7ac-48a5-8e06-5ebab86395ab" containerName="pull" Mar 19 12:31:31.604245 master-0 kubenswrapper[31830]: E0319 12:31:31.603903 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dda4422-e7ac-48a5-8e06-5ebab86395ab" containerName="util" Mar 19 12:31:31.604245 master-0 kubenswrapper[31830]: I0319 12:31:31.603911 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dda4422-e7ac-48a5-8e06-5ebab86395ab" containerName="util" Mar 19 12:31:31.604245 master-0 kubenswrapper[31830]: E0319 12:31:31.603939 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dda4422-e7ac-48a5-8e06-5ebab86395ab" containerName="extract" Mar 19 12:31:31.604245 master-0 kubenswrapper[31830]: I0319 12:31:31.603945 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dda4422-e7ac-48a5-8e06-5ebab86395ab" containerName="extract" Mar 19 12:31:31.604245 master-0 kubenswrapper[31830]: I0319 12:31:31.604119 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dda4422-e7ac-48a5-8e06-5ebab86395ab" containerName="extract" Mar 19 12:31:31.604625 master-0 kubenswrapper[31830]: I0319 12:31:31.604603 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-mlg8z" Mar 19 12:31:31.648595 master-0 kubenswrapper[31830]: I0319 12:31:31.648526 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-b85c4d696-mlg8z"] Mar 19 12:31:31.692824 master-0 kubenswrapper[31830]: I0319 12:31:31.684941 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spfz5\" (UniqueName: \"kubernetes.io/projected/c7ef7174-3939-4606-a689-d29f50fd7790-kube-api-access-spfz5\") pod \"openstack-operator-controller-init-b85c4d696-mlg8z\" (UID: \"c7ef7174-3939-4606-a689-d29f50fd7790\") " pod="openstack-operators/openstack-operator-controller-init-b85c4d696-mlg8z" Mar 19 12:31:31.787334 master-0 kubenswrapper[31830]: I0319 12:31:31.787274 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spfz5\" (UniqueName: \"kubernetes.io/projected/c7ef7174-3939-4606-a689-d29f50fd7790-kube-api-access-spfz5\") pod \"openstack-operator-controller-init-b85c4d696-mlg8z\" (UID: \"c7ef7174-3939-4606-a689-d29f50fd7790\") " pod="openstack-operators/openstack-operator-controller-init-b85c4d696-mlg8z" Mar 19 12:31:31.804885 master-0 kubenswrapper[31830]: I0319 12:31:31.804785 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spfz5\" (UniqueName: \"kubernetes.io/projected/c7ef7174-3939-4606-a689-d29f50fd7790-kube-api-access-spfz5\") pod \"openstack-operator-controller-init-b85c4d696-mlg8z\" (UID: \"c7ef7174-3939-4606-a689-d29f50fd7790\") " pod="openstack-operators/openstack-operator-controller-init-b85c4d696-mlg8z" Mar 19 12:31:31.920267 master-0 kubenswrapper[31830]: I0319 12:31:31.920144 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-mlg8z" Mar 19 12:31:32.394611 master-0 kubenswrapper[31830]: I0319 12:31:32.394553 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-b85c4d696-mlg8z"] Mar 19 12:31:32.404722 master-0 kubenswrapper[31830]: W0319 12:31:32.404666 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7ef7174_3939_4606_a689_d29f50fd7790.slice/crio-c3a17e4936ec01beb9b933b77bf5742da1cbacb894d8d4e4242a8f4b6b34d897 WatchSource:0}: Error finding container c3a17e4936ec01beb9b933b77bf5742da1cbacb894d8d4e4242a8f4b6b34d897: Status 404 returned error can't find the container with id c3a17e4936ec01beb9b933b77bf5742da1cbacb894d8d4e4242a8f4b6b34d897 Mar 19 12:31:32.564188 master-0 kubenswrapper[31830]: I0319 12:31:32.564136 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-mlg8z" event={"ID":"c7ef7174-3939-4606-a689-d29f50fd7790","Type":"ContainerStarted","Data":"c3a17e4936ec01beb9b933b77bf5742da1cbacb894d8d4e4242a8f4b6b34d897"} Mar 19 12:31:38.612680 master-0 kubenswrapper[31830]: I0319 12:31:38.612634 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-mlg8z" event={"ID":"c7ef7174-3939-4606-a689-d29f50fd7790","Type":"ContainerStarted","Data":"689f70d240d65aaa9278beb88f018f1ea46b15e09470596960c0d81a1f6bb06d"} Mar 19 12:31:38.613352 master-0 kubenswrapper[31830]: I0319 12:31:38.613333 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-mlg8z" Mar 19 12:31:38.642209 master-0 kubenswrapper[31830]: I0319 12:31:38.642122 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-mlg8z" podStartSLOduration=2.597083995 podStartE2EDuration="7.642100784s" podCreationTimestamp="2026-03-19 12:31:31 +0000 UTC" firstStartedPulling="2026-03-19 12:31:32.407240415 +0000 UTC m=+1030.956201119" lastFinishedPulling="2026-03-19 12:31:37.452257204 +0000 UTC m=+1036.001217908" observedRunningTime="2026-03-19 12:31:38.636983646 +0000 UTC m=+1037.185944360" watchObservedRunningTime="2026-03-19 12:31:38.642100784 +0000 UTC m=+1037.191061488" Mar 19 12:31:51.923192 master-0 kubenswrapper[31830]: I0319 12:31:51.923129 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-mlg8z" Mar 19 12:32:12.179238 master-0 kubenswrapper[31830]: I0319 12:32:12.179138 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59bc569d95-x6nhk"] Mar 19 12:32:12.180270 master-0 kubenswrapper[31830]: I0319 12:32:12.180251 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-x6nhk" Mar 19 12:32:12.204825 master-0 kubenswrapper[31830]: I0319 12:32:12.198733 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d58dc466-cbcqj"] Mar 19 12:32:12.204825 master-0 kubenswrapper[31830]: I0319 12:32:12.199969 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-cbcqj" Mar 19 12:32:12.210822 master-0 kubenswrapper[31830]: I0319 12:32:12.209746 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59bc569d95-x6nhk"] Mar 19 12:32:12.232504 master-0 kubenswrapper[31830]: I0319 12:32:12.232429 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d58dc466-cbcqj"] Mar 19 12:32:12.252606 master-0 kubenswrapper[31830]: I0319 12:32:12.252559 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-588d4d986b-4zgd2"] Mar 19 12:32:12.262111 master-0 kubenswrapper[31830]: I0319 12:32:12.262064 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-4zgd2" Mar 19 12:32:12.272165 master-0 kubenswrapper[31830]: I0319 12:32:12.271424 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-588d4d986b-4zgd2"] Mar 19 12:32:12.287421 master-0 kubenswrapper[31830]: I0319 12:32:12.287354 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l67fh\" (UniqueName: \"kubernetes.io/projected/528a7681-3153-4efc-9a5b-538929555c6d-kube-api-access-l67fh\") pod \"barbican-operator-controller-manager-59bc569d95-x6nhk\" (UID: \"528a7681-3153-4efc-9a5b-538929555c6d\") " pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-x6nhk" Mar 19 12:32:12.287421 master-0 kubenswrapper[31830]: I0319 12:32:12.287424 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8qkl\" (UniqueName: \"kubernetes.io/projected/ec2e9575-5f21-44a5-a34c-f076f726a1d2-kube-api-access-x8qkl\") pod \"cinder-operator-controller-manager-8d58dc466-cbcqj\" (UID: \"ec2e9575-5f21-44a5-a34c-f076f726a1d2\") " pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-cbcqj" Mar 19 12:32:12.358273 master-0 kubenswrapper[31830]: I0319 12:32:12.358146 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-79df6bcc97-cbj6m"] Mar 19 12:32:12.359524 master-0 kubenswrapper[31830]: I0319 12:32:12.359479 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-cbj6m" Mar 19 12:32:12.390137 master-0 kubenswrapper[31830]: I0319 12:32:12.389422 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-67dd5f86f5-wq7gk"] Mar 19 12:32:12.392012 master-0 kubenswrapper[31830]: I0319 12:32:12.391417 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-wq7gk" Mar 19 12:32:12.397904 master-0 kubenswrapper[31830]: I0319 12:32:12.396625 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwr45\" (UniqueName: \"kubernetes.io/projected/9962d57a-2869-4044-a24e-65338d28f6c3-kube-api-access-wwr45\") pod \"designate-operator-controller-manager-588d4d986b-4zgd2\" (UID: \"9962d57a-2869-4044-a24e-65338d28f6c3\") " pod="openstack-operators/designate-operator-controller-manager-588d4d986b-4zgd2" Mar 19 12:32:12.397904 master-0 kubenswrapper[31830]: I0319 12:32:12.396750 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l67fh\" (UniqueName: \"kubernetes.io/projected/528a7681-3153-4efc-9a5b-538929555c6d-kube-api-access-l67fh\") pod \"barbican-operator-controller-manager-59bc569d95-x6nhk\" (UID: \"528a7681-3153-4efc-9a5b-538929555c6d\") " pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-x6nhk" Mar 19 12:32:12.397904 master-0 kubenswrapper[31830]: I0319 12:32:12.396796 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79sdl\" (UniqueName: \"kubernetes.io/projected/0bf9354e-75bc-4f4d-b665-f23bf828bfa8-kube-api-access-79sdl\") pod \"glance-operator-controller-manager-79df6bcc97-cbj6m\" (UID: \"0bf9354e-75bc-4f4d-b665-f23bf828bfa8\") " pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-cbj6m" Mar 19 12:32:12.397904 master-0 kubenswrapper[31830]: I0319 12:32:12.396861 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8qkl\" (UniqueName: \"kubernetes.io/projected/ec2e9575-5f21-44a5-a34c-f076f726a1d2-kube-api-access-x8qkl\") pod \"cinder-operator-controller-manager-8d58dc466-cbcqj\" (UID: \"ec2e9575-5f21-44a5-a34c-f076f726a1d2\") " pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-cbcqj" Mar 19 12:32:12.407858 master-0 kubenswrapper[31830]: I0319 12:32:12.406952 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-79df6bcc97-cbj6m"] Mar 19 12:32:12.420938 master-0 kubenswrapper[31830]: I0319 12:32:12.417305 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-67dd5f86f5-wq7gk"] Mar 19 12:32:12.432933 master-0 kubenswrapper[31830]: I0319 12:32:12.428850 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-8464cc45fb-d2qll"] Mar 19 12:32:12.435604 master-0 kubenswrapper[31830]: I0319 12:32:12.435550 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-d2qll" Mar 19 12:32:12.470845 master-0 kubenswrapper[31830]: I0319 12:32:12.469119 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8qkl\" (UniqueName: \"kubernetes.io/projected/ec2e9575-5f21-44a5-a34c-f076f726a1d2-kube-api-access-x8qkl\") pod \"cinder-operator-controller-manager-8d58dc466-cbcqj\" (UID: \"ec2e9575-5f21-44a5-a34c-f076f726a1d2\") " pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-cbcqj" Mar 19 12:32:12.470845 master-0 kubenswrapper[31830]: I0319 12:32:12.469314 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l67fh\" (UniqueName: \"kubernetes.io/projected/528a7681-3153-4efc-9a5b-538929555c6d-kube-api-access-l67fh\") pod \"barbican-operator-controller-manager-59bc569d95-x6nhk\" (UID: \"528a7681-3153-4efc-9a5b-538929555c6d\") " pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-x6nhk" Mar 19 12:32:12.499957 master-0 kubenswrapper[31830]: I0319 12:32:12.499897 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79sdl\" (UniqueName: \"kubernetes.io/projected/0bf9354e-75bc-4f4d-b665-f23bf828bfa8-kube-api-access-79sdl\") pod \"glance-operator-controller-manager-79df6bcc97-cbj6m\" (UID: \"0bf9354e-75bc-4f4d-b665-f23bf828bfa8\") " pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-cbj6m" Mar 19 12:32:12.500213 master-0 kubenswrapper[31830]: I0319 12:32:12.499987 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpl5t\" (UniqueName: \"kubernetes.io/projected/2ca9358c-cf3c-4965-a617-08dcd5e916c4-kube-api-access-rpl5t\") pod \"heat-operator-controller-manager-67dd5f86f5-wq7gk\" (UID: \"2ca9358c-cf3c-4965-a617-08dcd5e916c4\") " pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-wq7gk" Mar 19 12:32:12.500213 master-0 kubenswrapper[31830]: I0319 12:32:12.500158 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwr45\" (UniqueName: \"kubernetes.io/projected/9962d57a-2869-4044-a24e-65338d28f6c3-kube-api-access-wwr45\") pod \"designate-operator-controller-manager-588d4d986b-4zgd2\" (UID: \"9962d57a-2869-4044-a24e-65338d28f6c3\") " pod="openstack-operators/designate-operator-controller-manager-588d4d986b-4zgd2" Mar 19 12:32:12.500316 master-0 kubenswrapper[31830]: I0319 12:32:12.500271 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnvfn\" (UniqueName: \"kubernetes.io/projected/333d933c-7a84-455c-80c8-d5795ba1058d-kube-api-access-vnvfn\") pod \"horizon-operator-controller-manager-8464cc45fb-d2qll\" (UID: \"333d933c-7a84-455c-80c8-d5795ba1058d\") " pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-d2qll" Mar 19 12:32:12.531848 master-0 kubenswrapper[31830]: I0319 12:32:12.531703 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwr45\" (UniqueName: \"kubernetes.io/projected/9962d57a-2869-4044-a24e-65338d28f6c3-kube-api-access-wwr45\") pod \"designate-operator-controller-manager-588d4d986b-4zgd2\" (UID: \"9962d57a-2869-4044-a24e-65338d28f6c3\") " pod="openstack-operators/designate-operator-controller-manager-588d4d986b-4zgd2" Mar 19 12:32:12.556443 master-0 kubenswrapper[31830]: I0319 12:32:12.556361 31830 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-cbcqj" Mar 19 12:32:12.557608 master-0 kubenswrapper[31830]: I0319 12:32:12.556873 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-x6nhk" Mar 19 12:32:12.559025 master-0 kubenswrapper[31830]: I0319 12:32:12.557902 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79sdl\" (UniqueName: \"kubernetes.io/projected/0bf9354e-75bc-4f4d-b665-f23bf828bfa8-kube-api-access-79sdl\") pod \"glance-operator-controller-manager-79df6bcc97-cbj6m\" (UID: \"0bf9354e-75bc-4f4d-b665-f23bf828bfa8\") " pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-cbj6m" Mar 19 12:32:12.560168 master-0 kubenswrapper[31830]: I0319 12:32:12.560007 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-8464cc45fb-d2qll"] Mar 19 12:32:12.576485 master-0 kubenswrapper[31830]: I0319 12:32:12.575055 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6f787dddc9-lpsb9"] Mar 19 12:32:12.576485 master-0 kubenswrapper[31830]: I0319 12:32:12.576229 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-lpsb9" Mar 19 12:32:12.615836 master-0 kubenswrapper[31830]: I0319 12:32:12.611325 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpl5t\" (UniqueName: \"kubernetes.io/projected/2ca9358c-cf3c-4965-a617-08dcd5e916c4-kube-api-access-rpl5t\") pod \"heat-operator-controller-manager-67dd5f86f5-wq7gk\" (UID: \"2ca9358c-cf3c-4965-a617-08dcd5e916c4\") " pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-wq7gk" Mar 19 12:32:12.615836 master-0 kubenswrapper[31830]: I0319 12:32:12.611460 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn5nf\" (UniqueName: \"kubernetes.io/projected/cc913cd6-6365-4019-a201-f4ed756e7238-kube-api-access-nn5nf\") pod \"ironic-operator-controller-manager-6f787dddc9-lpsb9\" (UID: \"cc913cd6-6365-4019-a201-f4ed756e7238\") " pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-lpsb9" Mar 19 12:32:12.615836 master-0 kubenswrapper[31830]: I0319 12:32:12.611486 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnvfn\" (UniqueName: \"kubernetes.io/projected/333d933c-7a84-455c-80c8-d5795ba1058d-kube-api-access-vnvfn\") pod \"horizon-operator-controller-manager-8464cc45fb-d2qll\" (UID: \"333d933c-7a84-455c-80c8-d5795ba1058d\") " pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-d2qll" Mar 19 12:32:12.655433 master-0 kubenswrapper[31830]: I0319 12:32:12.620631 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-7dd6bb94c9-6kkfv"] Mar 19 12:32:12.655433 master-0 kubenswrapper[31830]: I0319 12:32:12.622080 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-6kkfv" Mar 19 12:32:12.655433 master-0 kubenswrapper[31830]: I0319 12:32:12.625103 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Mar 19 12:32:12.655433 master-0 kubenswrapper[31830]: I0319 12:32:12.638616 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-4zgd2" Mar 19 12:32:12.655433 master-0 kubenswrapper[31830]: I0319 12:32:12.643227 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7dd6bb94c9-6kkfv"] Mar 19 12:32:12.661431 master-0 kubenswrapper[31830]: I0319 12:32:12.661331 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6f787dddc9-lpsb9"] Mar 19 12:32:12.705645 master-0 kubenswrapper[31830]: I0319 12:32:12.695756 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnvfn\" (UniqueName: \"kubernetes.io/projected/333d933c-7a84-455c-80c8-d5795ba1058d-kube-api-access-vnvfn\") pod \"horizon-operator-controller-manager-8464cc45fb-d2qll\" (UID: \"333d933c-7a84-455c-80c8-d5795ba1058d\") " pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-d2qll" Mar 19 12:32:12.705645 master-0 kubenswrapper[31830]: I0319 12:32:12.703961 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-768b96df4c-tc6m5"] Mar 19 12:32:12.705645 master-0 kubenswrapper[31830]: I0319 12:32:12.705477 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-tc6m5" Mar 19 12:32:12.729654 master-0 kubenswrapper[31830]: I0319 12:32:12.729422 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f0b9a13-7862-4829-a97d-56034487da2e-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-6kkfv\" (UID: \"1f0b9a13-7862-4829-a97d-56034487da2e\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-6kkfv" Mar 19 12:32:12.729654 master-0 kubenswrapper[31830]: I0319 12:32:12.729555 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn5nf\" (UniqueName: \"kubernetes.io/projected/cc913cd6-6365-4019-a201-f4ed756e7238-kube-api-access-nn5nf\") pod \"ironic-operator-controller-manager-6f787dddc9-lpsb9\" (UID: \"cc913cd6-6365-4019-a201-f4ed756e7238\") " pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-lpsb9" Mar 19 12:32:12.729654 master-0 kubenswrapper[31830]: I0319 12:32:12.729583 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnpnj\" (UniqueName: \"kubernetes.io/projected/1f0b9a13-7862-4829-a97d-56034487da2e-kube-api-access-nnpnj\") pod \"infra-operator-controller-manager-7dd6bb94c9-6kkfv\" (UID: \"1f0b9a13-7862-4829-a97d-56034487da2e\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-6kkfv" Mar 19 12:32:12.733817 master-0 kubenswrapper[31830]: I0319 12:32:12.732148 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-cbj6m" Mar 19 12:32:12.741853 master-0 kubenswrapper[31830]: I0319 12:32:12.738866 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-768b96df4c-tc6m5"] Mar 19 12:32:12.744403 master-0 kubenswrapper[31830]: I0319 12:32:12.744230 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpl5t\" (UniqueName: \"kubernetes.io/projected/2ca9358c-cf3c-4965-a617-08dcd5e916c4-kube-api-access-rpl5t\") pod \"heat-operator-controller-manager-67dd5f86f5-wq7gk\" (UID: \"2ca9358c-cf3c-4965-a617-08dcd5e916c4\") " pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-wq7gk" Mar 19 12:32:12.756561 master-0 kubenswrapper[31830]: I0319 12:32:12.753239 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-55f864c847-vrk79"] Mar 19 12:32:12.756561 master-0 kubenswrapper[31830]: I0319 12:32:12.754562 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-55f864c847-vrk79" Mar 19 12:32:12.778543 master-0 kubenswrapper[31830]: I0319 12:32:12.776500 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-55f864c847-vrk79"] Mar 19 12:32:12.794025 master-0 kubenswrapper[31830]: I0319 12:32:12.792481 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-d2qll" Mar 19 12:32:12.806387 master-0 kubenswrapper[31830]: I0319 12:32:12.801033 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67ccfc9778-xslmv"] Mar 19 12:32:12.806387 master-0 kubenswrapper[31830]: I0319 12:32:12.802822 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-xslmv" Mar 19 12:32:12.806387 master-0 kubenswrapper[31830]: I0319 12:32:12.803244 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn5nf\" (UniqueName: \"kubernetes.io/projected/cc913cd6-6365-4019-a201-f4ed756e7238-kube-api-access-nn5nf\") pod \"ironic-operator-controller-manager-6f787dddc9-lpsb9\" (UID: \"cc913cd6-6365-4019-a201-f4ed756e7238\") " pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-lpsb9" Mar 19 12:32:12.825913 master-0 kubenswrapper[31830]: I0319 12:32:12.825865 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-lpsb9" Mar 19 12:32:12.835631 master-0 kubenswrapper[31830]: I0319 12:32:12.830972 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f0b9a13-7862-4829-a97d-56034487da2e-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-6kkfv\" (UID: \"1f0b9a13-7862-4829-a97d-56034487da2e\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-6kkfv" Mar 19 12:32:12.835631 master-0 kubenswrapper[31830]: I0319 12:32:12.831028 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krrj2\" (UniqueName: \"kubernetes.io/projected/ac5b2ff6-6088-45fd-9b33-6d20a3ad9e59-kube-api-access-krrj2\") pod \"manila-operator-controller-manager-55f864c847-vrk79\" (UID: \"ac5b2ff6-6088-45fd-9b33-6d20a3ad9e59\") " pod="openstack-operators/manila-operator-controller-manager-55f864c847-vrk79" Mar 19 12:32:12.835631 master-0 kubenswrapper[31830]: I0319 12:32:12.831068 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8nhr\" (UniqueName: \"kubernetes.io/projected/a30da668-d209-4afb-a612-79302fb7942e-kube-api-access-h8nhr\") pod \"keystone-operator-controller-manager-768b96df4c-tc6m5\" (UID: \"a30da668-d209-4afb-a612-79302fb7942e\") " pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-tc6m5" Mar 19 12:32:12.835631 master-0 kubenswrapper[31830]: I0319 12:32:12.831416 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnpnj\" (UniqueName: \"kubernetes.io/projected/1f0b9a13-7862-4829-a97d-56034487da2e-kube-api-access-nnpnj\") pod \"infra-operator-controller-manager-7dd6bb94c9-6kkfv\" (UID: \"1f0b9a13-7862-4829-a97d-56034487da2e\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-6kkfv" Mar 19 12:32:12.835631 master-0 kubenswrapper[31830]: E0319 12:32:12.831947 31830 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 19 12:32:12.835631 master-0 kubenswrapper[31830]: E0319 12:32:12.832017 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f0b9a13-7862-4829-a97d-56034487da2e-cert podName:1f0b9a13-7862-4829-a97d-56034487da2e nodeName:}" failed. No retries permitted until 2026-03-19 12:32:13.331996997 +0000 UTC m=+1071.880957701 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1f0b9a13-7862-4829-a97d-56034487da2e-cert") pod "infra-operator-controller-manager-7dd6bb94c9-6kkfv" (UID: "1f0b9a13-7862-4829-a97d-56034487da2e") : secret "infra-operator-webhook-server-cert" not found Mar 19 12:32:12.837926 master-0 kubenswrapper[31830]: I0319 12:32:12.837780 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67ccfc9778-xslmv"] Mar 19 12:32:12.861556 master-0 kubenswrapper[31830]: I0319 12:32:12.854636 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-px9d7"] Mar 19 12:32:12.861556 master-0 kubenswrapper[31830]: I0319 12:32:12.857390 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-767865f676-px9d7" Mar 19 12:32:12.862716 master-0 kubenswrapper[31830]: I0319 12:32:12.862671 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnpnj\" (UniqueName: \"kubernetes.io/projected/1f0b9a13-7862-4829-a97d-56034487da2e-kube-api-access-nnpnj\") pod \"infra-operator-controller-manager-7dd6bb94c9-6kkfv\" (UID: \"1f0b9a13-7862-4829-a97d-56034487da2e\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-6kkfv" Mar 19 12:32:12.883914 master-0 kubenswrapper[31830]: I0319 12:32:12.874965 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d488d59fb-9vdlb"] Mar 19 12:32:12.883914 master-0 kubenswrapper[31830]: I0319 12:32:12.876351 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-9vdlb" Mar 19 12:32:12.887516 master-0 kubenswrapper[31830]: I0319 12:32:12.887466 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-px9d7"] Mar 19 12:32:12.906242 master-0 kubenswrapper[31830]: I0319 12:32:12.905863 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d488d59fb-9vdlb"] Mar 19 12:32:12.917845 master-0 kubenswrapper[31830]: I0319 12:32:12.917792 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5b9f45d989-22wxs"] Mar 19 12:32:12.919111 master-0 kubenswrapper[31830]: I0319 12:32:12.918971 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-22wxs" Mar 19 12:32:12.932959 master-0 kubenswrapper[31830]: I0319 12:32:12.932422 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-285t7\" (UniqueName: \"kubernetes.io/projected/b5cf325f-5ed3-416a-b7cf-c95cc198afff-kube-api-access-285t7\") pod \"mariadb-operator-controller-manager-67ccfc9778-xslmv\" (UID: \"b5cf325f-5ed3-416a-b7cf-c95cc198afff\") " pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-xslmv" Mar 19 12:32:12.932959 master-0 kubenswrapper[31830]: I0319 12:32:12.932470 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7fhr\" (UniqueName: \"kubernetes.io/projected/82330c7e-a21c-42e0-9f7c-ddc6e7269f0c-kube-api-access-p7fhr\") pod \"nova-operator-controller-manager-5d488d59fb-9vdlb\" (UID: \"82330c7e-a21c-42e0-9f7c-ddc6e7269f0c\") " pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-9vdlb" Mar 19 12:32:12.932959 master-0 kubenswrapper[31830]: I0319 12:32:12.932539 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krrj2\" (UniqueName: \"kubernetes.io/projected/ac5b2ff6-6088-45fd-9b33-6d20a3ad9e59-kube-api-access-krrj2\") pod \"manila-operator-controller-manager-55f864c847-vrk79\" (UID: \"ac5b2ff6-6088-45fd-9b33-6d20a3ad9e59\") " pod="openstack-operators/manila-operator-controller-manager-55f864c847-vrk79" Mar 19 12:32:12.932959 master-0 kubenswrapper[31830]: I0319 12:32:12.932596 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8nhr\" (UniqueName: 
\"kubernetes.io/projected/a30da668-d209-4afb-a612-79302fb7942e-kube-api-access-h8nhr\") pod \"keystone-operator-controller-manager-768b96df4c-tc6m5\" (UID: \"a30da668-d209-4afb-a612-79302fb7942e\") " pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-tc6m5" Mar 19 12:32:12.932959 master-0 kubenswrapper[31830]: I0319 12:32:12.932690 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxz4w\" (UniqueName: \"kubernetes.io/projected/467e2f90-bbbf-4d88-9b56-9ed6a353b45f-kube-api-access-sxz4w\") pod \"neutron-operator-controller-manager-767865f676-px9d7\" (UID: \"467e2f90-bbbf-4d88-9b56-9ed6a353b45f\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-px9d7" Mar 19 12:32:12.957186 master-0 kubenswrapper[31830]: I0319 12:32:12.952892 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5b9f45d989-22wxs"] Mar 19 12:32:12.960201 master-0 kubenswrapper[31830]: I0319 12:32:12.959951 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-884679f54-4jhnc"] Mar 19 12:32:12.961191 master-0 kubenswrapper[31830]: I0319 12:32:12.961166 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-884679f54-4jhnc" Mar 19 12:32:12.970007 master-0 kubenswrapper[31830]: I0319 12:32:12.969933 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-884679f54-4jhnc"] Mar 19 12:32:12.981035 master-0 kubenswrapper[31830]: I0319 12:32:12.980625 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8nhr\" (UniqueName: \"kubernetes.io/projected/a30da668-d209-4afb-a612-79302fb7942e-kube-api-access-h8nhr\") pod \"keystone-operator-controller-manager-768b96df4c-tc6m5\" (UID: \"a30da668-d209-4afb-a612-79302fb7942e\") " pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-tc6m5" Mar 19 12:32:12.981630 master-0 kubenswrapper[31830]: I0319 12:32:12.981605 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krrj2\" (UniqueName: \"kubernetes.io/projected/ac5b2ff6-6088-45fd-9b33-6d20a3ad9e59-kube-api-access-krrj2\") pod \"manila-operator-controller-manager-55f864c847-vrk79\" (UID: \"ac5b2ff6-6088-45fd-9b33-6d20a3ad9e59\") " pod="openstack-operators/manila-operator-controller-manager-55f864c847-vrk79" Mar 19 12:32:12.983062 master-0 kubenswrapper[31830]: I0319 12:32:12.983031 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-wq7gk" Mar 19 12:32:12.984573 master-0 kubenswrapper[31830]: I0319 12:32:12.984537 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5784578c99-9ldx8"] Mar 19 12:32:12.988216 master-0 kubenswrapper[31830]: I0319 12:32:12.985710 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5784578c99-9ldx8" Mar 19 12:32:12.998547 master-0 kubenswrapper[31830]: I0319 12:32:12.998506 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5784578c99-9ldx8"] Mar 19 12:32:13.021244 master-0 kubenswrapper[31830]: I0319 12:32:13.010697 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-c674c5965-85lzd"] Mar 19 12:32:13.021244 master-0 kubenswrapper[31830]: I0319 12:32:13.012149 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-c674c5965-85lzd" Mar 19 12:32:13.035565 master-0 kubenswrapper[31830]: I0319 12:32:13.033646 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr985\" (UniqueName: \"kubernetes.io/projected/9897197c-6347-48f3-bce4-f2e70d2241af-kube-api-access-vr985\") pod \"placement-operator-controller-manager-5784578c99-9ldx8\" (UID: \"9897197c-6347-48f3-bce4-f2e70d2241af\") " pod="openstack-operators/placement-operator-controller-manager-5784578c99-9ldx8" Mar 19 12:32:13.035565 master-0 kubenswrapper[31830]: I0319 12:32:13.033725 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxz4w\" (UniqueName: \"kubernetes.io/projected/467e2f90-bbbf-4d88-9b56-9ed6a353b45f-kube-api-access-sxz4w\") pod \"neutron-operator-controller-manager-767865f676-px9d7\" (UID: \"467e2f90-bbbf-4d88-9b56-9ed6a353b45f\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-px9d7" Mar 19 12:32:13.035565 master-0 kubenswrapper[31830]: I0319 12:32:13.033755 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrfzz\" (UniqueName: \"kubernetes.io/projected/1e3ac87a-41fb-4d68-8531-01685bc8f17c-kube-api-access-hrfzz\") pod \"swift-operator-controller-manager-c674c5965-85lzd\" (UID: \"1e3ac87a-41fb-4d68-8531-01685bc8f17c\") " pod="openstack-operators/swift-operator-controller-manager-c674c5965-85lzd" Mar 19 12:32:13.035565 master-0 kubenswrapper[31830]: I0319 12:32:13.033782 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-285t7\" (UniqueName: \"kubernetes.io/projected/b5cf325f-5ed3-416a-b7cf-c95cc198afff-kube-api-access-285t7\") pod \"mariadb-operator-controller-manager-67ccfc9778-xslmv\" (UID: \"b5cf325f-5ed3-416a-b7cf-c95cc198afff\") " pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-xslmv" Mar 19 12:32:13.035565 master-0 kubenswrapper[31830]: I0319 12:32:13.033990 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7fhr\" (UniqueName: \"kubernetes.io/projected/82330c7e-a21c-42e0-9f7c-ddc6e7269f0c-kube-api-access-p7fhr\") pod \"nova-operator-controller-manager-5d488d59fb-9vdlb\" (UID: \"82330c7e-a21c-42e0-9f7c-ddc6e7269f0c\") " pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-9vdlb" Mar 19 12:32:13.035565 master-0 kubenswrapper[31830]: I0319 12:32:13.034037 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz8z6\" (UniqueName: \"kubernetes.io/projected/ebd2199d-6888-4d1a-8e5d-b951062bdc18-kube-api-access-wz8z6\") pod \"ovn-operator-controller-manager-884679f54-4jhnc\" (UID: 
\"ebd2199d-6888-4d1a-8e5d-b951062bdc18\") " pod="openstack-operators/ovn-operator-controller-manager-884679f54-4jhnc" Mar 19 12:32:13.035565 master-0 kubenswrapper[31830]: I0319 12:32:13.034091 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbnm8\" (UniqueName: \"kubernetes.io/projected/da8e07b7-2ac3-454b-a30a-51b242c86b6a-kube-api-access-bbnm8\") pod \"octavia-operator-controller-manager-5b9f45d989-22wxs\" (UID: \"da8e07b7-2ac3-454b-a30a-51b242c86b6a\") " pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-22wxs" Mar 19 12:32:13.091181 master-0 kubenswrapper[31830]: I0319 12:32:13.083414 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899m7flr"] Mar 19 12:32:13.091181 master-0 kubenswrapper[31830]: I0319 12:32:13.084578 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899m7flr" Mar 19 12:32:13.091181 master-0 kubenswrapper[31830]: I0319 12:32:13.088196 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Mar 19 12:32:13.091181 master-0 kubenswrapper[31830]: I0319 12:32:13.088602 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxz4w\" (UniqueName: \"kubernetes.io/projected/467e2f90-bbbf-4d88-9b56-9ed6a353b45f-kube-api-access-sxz4w\") pod \"neutron-operator-controller-manager-767865f676-px9d7\" (UID: \"467e2f90-bbbf-4d88-9b56-9ed6a353b45f\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-px9d7" Mar 19 12:32:13.091181 master-0 kubenswrapper[31830]: I0319 12:32:13.090898 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-d6b694c5-ztnkm"] Mar 19 12:32:13.092663 master-0 kubenswrapper[31830]: I0319 12:32:13.092633 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-ztnkm" Mar 19 12:32:13.093582 master-0 kubenswrapper[31830]: I0319 12:32:13.093552 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-285t7\" (UniqueName: \"kubernetes.io/projected/b5cf325f-5ed3-416a-b7cf-c95cc198afff-kube-api-access-285t7\") pod \"mariadb-operator-controller-manager-67ccfc9778-xslmv\" (UID: \"b5cf325f-5ed3-416a-b7cf-c95cc198afff\") " pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-xslmv" Mar 19 12:32:13.099502 master-0 kubenswrapper[31830]: I0319 12:32:13.099435 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-55f864c847-vrk79" Mar 19 12:32:13.107207 master-0 kubenswrapper[31830]: I0319 12:32:13.107164 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7fhr\" (UniqueName: \"kubernetes.io/projected/82330c7e-a21c-42e0-9f7c-ddc6e7269f0c-kube-api-access-p7fhr\") pod \"nova-operator-controller-manager-5d488d59fb-9vdlb\" (UID: \"82330c7e-a21c-42e0-9f7c-ddc6e7269f0c\") " pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-9vdlb" Mar 19 12:32:13.107417 master-0 kubenswrapper[31830]: I0319 12:32:13.107363 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-c674c5965-85lzd"] Mar 19 12:32:13.117377 master-0 kubenswrapper[31830]: I0319 12:32:13.117337 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899m7flr"] Mar 19 12:32:13.135660 master-0 kubenswrapper[31830]: I0319 12:32:13.135618 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbnm8\" (UniqueName: \"kubernetes.io/projected/da8e07b7-2ac3-454b-a30a-51b242c86b6a-kube-api-access-bbnm8\") pod \"octavia-operator-controller-manager-5b9f45d989-22wxs\" (UID: \"da8e07b7-2ac3-454b-a30a-51b242c86b6a\") " pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-22wxs" Mar 19 12:32:13.136008 master-0 kubenswrapper[31830]: I0319 12:32:13.135957 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr985\" (UniqueName: \"kubernetes.io/projected/9897197c-6347-48f3-bce4-f2e70d2241af-kube-api-access-vr985\") pod \"placement-operator-controller-manager-5784578c99-9ldx8\" (UID: \"9897197c-6347-48f3-bce4-f2e70d2241af\") " pod="openstack-operators/placement-operator-controller-manager-5784578c99-9ldx8" Mar 19 12:32:13.136085 master-0 kubenswrapper[31830]: I0319 12:32:13.136058 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnbl2\" (UniqueName: \"kubernetes.io/projected/49025043-9018-47ec-8930-e6580af6aeb2-kube-api-access-jnbl2\") pod \"openstack-baremetal-operator-controller-manager-74c4796899m7flr\" (UID: \"49025043-9018-47ec-8930-e6580af6aeb2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899m7flr" Mar 19 12:32:13.136137 master-0 kubenswrapper[31830]: I0319 12:32:13.136106 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49025043-9018-47ec-8930-e6580af6aeb2-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899m7flr\" (UID: \"49025043-9018-47ec-8930-e6580af6aeb2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899m7flr" Mar 19 12:32:13.136205 master-0 kubenswrapper[31830]: I0319 12:32:13.136181 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrfzz\" (UniqueName: \"kubernetes.io/projected/1e3ac87a-41fb-4d68-8531-01685bc8f17c-kube-api-access-hrfzz\") pod \"swift-operator-controller-manager-c674c5965-85lzd\" (UID: \"1e3ac87a-41fb-4d68-8531-01685bc8f17c\") " pod="openstack-operators/swift-operator-controller-manager-c674c5965-85lzd" Mar 19 12:32:13.136328 master-0 kubenswrapper[31830]: I0319 12:32:13.136262 31830 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-wz8z6\" (UniqueName: \"kubernetes.io/projected/ebd2199d-6888-4d1a-8e5d-b951062bdc18-kube-api-access-wz8z6\") pod \"ovn-operator-controller-manager-884679f54-4jhnc\" (UID: \"ebd2199d-6888-4d1a-8e5d-b951062bdc18\") " pod="openstack-operators/ovn-operator-controller-manager-884679f54-4jhnc" Mar 19 12:32:13.136328 master-0 kubenswrapper[31830]: I0319 12:32:13.136309 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t9xw\" (UniqueName: \"kubernetes.io/projected/8dac5751-ffc3-4927-9cb4-362538cffc88-kube-api-access-9t9xw\") pod \"telemetry-operator-controller-manager-d6b694c5-ztnkm\" (UID: \"8dac5751-ffc3-4927-9cb4-362538cffc88\") " pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-ztnkm" Mar 19 12:32:13.155935 master-0 kubenswrapper[31830]: I0319 12:32:13.150316 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-d6b694c5-ztnkm"] Mar 19 12:32:13.184077 master-0 kubenswrapper[31830]: I0319 12:32:13.181779 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bqfjg"] Mar 19 12:32:13.189729 master-0 kubenswrapper[31830]: I0319 12:32:13.189681 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz8z6\" (UniqueName: \"kubernetes.io/projected/ebd2199d-6888-4d1a-8e5d-b951062bdc18-kube-api-access-wz8z6\") pod \"ovn-operator-controller-manager-884679f54-4jhnc\" (UID: \"ebd2199d-6888-4d1a-8e5d-b951062bdc18\") " pod="openstack-operators/ovn-operator-controller-manager-884679f54-4jhnc" Mar 19 12:32:13.196181 master-0 kubenswrapper[31830]: I0319 12:32:13.196133 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbnm8\" (UniqueName: \"kubernetes.io/projected/da8e07b7-2ac3-454b-a30a-51b242c86b6a-kube-api-access-bbnm8\") pod \"octavia-operator-controller-manager-5b9f45d989-22wxs\" (UID: \"da8e07b7-2ac3-454b-a30a-51b242c86b6a\") " pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-22wxs" Mar 19 12:32:13.201576 master-0 kubenswrapper[31830]: I0319 12:32:13.201492 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr985\" (UniqueName: \"kubernetes.io/projected/9897197c-6347-48f3-bce4-f2e70d2241af-kube-api-access-vr985\") pod \"placement-operator-controller-manager-5784578c99-9ldx8\" (UID: \"9897197c-6347-48f3-bce4-f2e70d2241af\") " pod="openstack-operators/placement-operator-controller-manager-5784578c99-9ldx8" Mar 19 12:32:13.203551 master-0 kubenswrapper[31830]: I0319 12:32:13.203507 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrfzz\" (UniqueName: \"kubernetes.io/projected/1e3ac87a-41fb-4d68-8531-01685bc8f17c-kube-api-access-hrfzz\") pod \"swift-operator-controller-manager-c674c5965-85lzd\" (UID: \"1e3ac87a-41fb-4d68-8531-01685bc8f17c\") " pod="openstack-operators/swift-operator-controller-manager-c674c5965-85lzd" Mar 19 12:32:13.204029 master-0 kubenswrapper[31830]: I0319 12:32:13.203983 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bqfjg" Mar 19 12:32:13.215994 master-0 kubenswrapper[31830]: I0319 12:32:13.214848 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bqfjg"] Mar 19 12:32:13.237711 master-0 kubenswrapper[31830]: I0319 12:32:13.237664 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t9xw\" (UniqueName: \"kubernetes.io/projected/8dac5751-ffc3-4927-9cb4-362538cffc88-kube-api-access-9t9xw\") pod \"telemetry-operator-controller-manager-d6b694c5-ztnkm\" (UID: \"8dac5751-ffc3-4927-9cb4-362538cffc88\") " pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-ztnkm" Mar 19 12:32:13.237932 master-0 kubenswrapper[31830]: I0319 12:32:13.237879 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnbl2\" (UniqueName: \"kubernetes.io/projected/49025043-9018-47ec-8930-e6580af6aeb2-kube-api-access-jnbl2\") pod \"openstack-baremetal-operator-controller-manager-74c4796899m7flr\" (UID: \"49025043-9018-47ec-8930-e6580af6aeb2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899m7flr" Mar 19 12:32:13.237932 master-0 kubenswrapper[31830]: I0319 12:32:13.237928 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49025043-9018-47ec-8930-e6580af6aeb2-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899m7flr\" (UID: \"49025043-9018-47ec-8930-e6580af6aeb2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899m7flr" Mar 19 12:32:13.238006 master-0 kubenswrapper[31830]: I0319 12:32:13.237970 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvrl7\" (UniqueName: \"kubernetes.io/projected/462afca1-50bf-43ba-bcdf-b7d71f9504d5-kube-api-access-cvrl7\") pod \"test-operator-controller-manager-5c5cb9c4d7-bqfjg\" (UID: \"462afca1-50bf-43ba-bcdf-b7d71f9504d5\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bqfjg" Mar 19 12:32:13.238254 master-0 kubenswrapper[31830]: E0319 12:32:13.238219 31830 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 19 12:32:13.238313 master-0 kubenswrapper[31830]: E0319 12:32:13.238298 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49025043-9018-47ec-8930-e6580af6aeb2-cert podName:49025043-9018-47ec-8930-e6580af6aeb2 nodeName:}" failed. No retries permitted until 2026-03-19 12:32:13.738281017 +0000 UTC m=+1072.287241721 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/49025043-9018-47ec-8930-e6580af6aeb2-cert") pod "openstack-baremetal-operator-controller-manager-74c4796899m7flr" (UID: "49025043-9018-47ec-8930-e6580af6aeb2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 19 12:32:13.253886 master-0 kubenswrapper[31830]: I0319 12:32:13.252125 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-tc6m5" Mar 19 12:32:13.259342 master-0 kubenswrapper[31830]: I0319 12:32:13.259295 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-gx66s"] Mar 19 12:32:13.260522 master-0 kubenswrapper[31830]: I0319 12:32:13.260501 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-gx66s" Mar 19 12:32:13.270533 master-0 kubenswrapper[31830]: I0319 12:32:13.269990 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t9xw\" (UniqueName: \"kubernetes.io/projected/8dac5751-ffc3-4927-9cb4-362538cffc88-kube-api-access-9t9xw\") pod \"telemetry-operator-controller-manager-d6b694c5-ztnkm\" (UID: \"8dac5751-ffc3-4927-9cb4-362538cffc88\") " pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-ztnkm" Mar 19 12:32:13.273879 master-0 kubenswrapper[31830]: I0319 12:32:13.273236 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-gx66s"] Mar 19 12:32:13.275671 master-0 kubenswrapper[31830]: I0319 12:32:13.275641 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnbl2\" (UniqueName: \"kubernetes.io/projected/49025043-9018-47ec-8930-e6580af6aeb2-kube-api-access-jnbl2\") pod \"openstack-baremetal-operator-controller-manager-74c4796899m7flr\" (UID: \"49025043-9018-47ec-8930-e6580af6aeb2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899m7flr" Mar 19 12:32:13.296349 master-0 kubenswrapper[31830]: I0319 12:32:13.296270 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-xslmv" Mar 19 12:32:13.333346 master-0 kubenswrapper[31830]: I0319 12:32:13.333238 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-767865f676-px9d7" Mar 19 12:32:13.344064 master-0 kubenswrapper[31830]: I0319 12:32:13.339085 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f0b9a13-7862-4829-a97d-56034487da2e-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-6kkfv\" (UID: \"1f0b9a13-7862-4829-a97d-56034487da2e\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-6kkfv" Mar 19 12:32:13.344064 master-0 kubenswrapper[31830]: I0319 12:32:13.339210 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvrl7\" (UniqueName: \"kubernetes.io/projected/462afca1-50bf-43ba-bcdf-b7d71f9504d5-kube-api-access-cvrl7\") pod \"test-operator-controller-manager-5c5cb9c4d7-bqfjg\" (UID: \"462afca1-50bf-43ba-bcdf-b7d71f9504d5\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bqfjg" Mar 19 12:32:13.344064 master-0 kubenswrapper[31830]: I0319 12:32:13.339267 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q4fw\" (UniqueName: \"kubernetes.io/projected/558a5b2d-e0d2-4a17-ab12-f4e3da3c522a-kube-api-access-2q4fw\") pod \"watcher-operator-controller-manager-6c4d75f7f9-gx66s\" (UID: \"558a5b2d-e0d2-4a17-ab12-f4e3da3c522a\") " pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-gx66s" Mar 19 12:32:13.344064 master-0 kubenswrapper[31830]: E0319 12:32:13.339268 31830 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 19 12:32:13.344064 master-0 kubenswrapper[31830]: E0319 12:32:13.339362 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f0b9a13-7862-4829-a97d-56034487da2e-cert podName:1f0b9a13-7862-4829-a97d-56034487da2e nodeName:}" failed. No retries permitted until 2026-03-19 12:32:14.339338731 +0000 UTC m=+1072.888299495 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1f0b9a13-7862-4829-a97d-56034487da2e-cert") pod "infra-operator-controller-manager-7dd6bb94c9-6kkfv" (UID: "1f0b9a13-7862-4829-a97d-56034487da2e") : secret "infra-operator-webhook-server-cert" not found Mar 19 12:32:13.366017 master-0 kubenswrapper[31830]: I0319 12:32:13.365760 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvrl7\" (UniqueName: \"kubernetes.io/projected/462afca1-50bf-43ba-bcdf-b7d71f9504d5-kube-api-access-cvrl7\") pod \"test-operator-controller-manager-5c5cb9c4d7-bqfjg\" (UID: \"462afca1-50bf-43ba-bcdf-b7d71f9504d5\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bqfjg" Mar 19 12:32:13.367176 master-0 kubenswrapper[31830]: I0319 12:32:13.367102 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-9vdlb" Mar 19 12:32:13.417985 master-0 kubenswrapper[31830]: I0319 12:32:13.400381 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86bd8996f6-8j8qk"] Mar 19 12:32:13.417985 master-0 kubenswrapper[31830]: I0319 12:32:13.402521 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-8j8qk" Mar 19 12:32:13.417985 master-0 kubenswrapper[31830]: I0319 12:32:13.405539 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-22wxs" Mar 19 12:32:13.417985 master-0 kubenswrapper[31830]: I0319 12:32:13.409585 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Mar 19 12:32:13.417985 master-0 kubenswrapper[31830]: I0319 12:32:13.416714 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86bd8996f6-8j8qk"] Mar 19 12:32:13.417985 master-0 kubenswrapper[31830]: I0319 12:32:13.417766 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Mar 19 12:32:13.427016 master-0 kubenswrapper[31830]: I0319 12:32:13.426798 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-884679f54-4jhnc" Mar 19 12:32:13.441326 master-0 kubenswrapper[31830]: I0319 12:32:13.440399 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q4fw\" (UniqueName: \"kubernetes.io/projected/558a5b2d-e0d2-4a17-ab12-f4e3da3c522a-kube-api-access-2q4fw\") pod \"watcher-operator-controller-manager-6c4d75f7f9-gx66s\" (UID: \"558a5b2d-e0d2-4a17-ab12-f4e3da3c522a\") " pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-gx66s" Mar 19 12:32:13.441326 master-0 kubenswrapper[31830]: I0319 12:32:13.440540 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-8j8qk\" (UID: \"45a81c5f-fb70-4b84-8c91-bc55830c36cd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-8j8qk" Mar 19 12:32:13.441326 master-0 kubenswrapper[31830]: I0319 12:32:13.440600 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-8j8qk\" (UID: \"45a81c5f-fb70-4b84-8c91-bc55830c36cd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-8j8qk" Mar 19 12:32:13.441326 master-0 kubenswrapper[31830]: I0319 12:32:13.440633 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv7z6\" (UniqueName: \"kubernetes.io/projected/45a81c5f-fb70-4b84-8c91-bc55830c36cd-kube-api-access-nv7z6\") pod \"openstack-operator-controller-manager-86bd8996f6-8j8qk\" (UID: \"45a81c5f-fb70-4b84-8c91-bc55830c36cd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-8j8qk" Mar 19 12:32:13.453455 master-0 kubenswrapper[31830]: I0319 12:32:13.453365 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zt7k5"] Mar 19 12:32:13.455425 master-0 kubenswrapper[31830]: I0319 12:32:13.455374 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zt7k5" Mar 19 12:32:13.464332 master-0 kubenswrapper[31830]: I0319 12:32:13.463495 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zt7k5"] Mar 19 12:32:13.497087 master-0 kubenswrapper[31830]: I0319 12:32:13.497026 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q4fw\" (UniqueName: \"kubernetes.io/projected/558a5b2d-e0d2-4a17-ab12-f4e3da3c522a-kube-api-access-2q4fw\") pod \"watcher-operator-controller-manager-6c4d75f7f9-gx66s\" (UID: \"558a5b2d-e0d2-4a17-ab12-f4e3da3c522a\") " pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-gx66s" Mar 19 12:32:13.516541 master-0 kubenswrapper[31830]: I0319 12:32:13.516452 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d58dc466-cbcqj"] Mar 19 12:32:13.543108 master-0 kubenswrapper[31830]: I0319 12:32:13.543072 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbm7k\" (UniqueName: \"kubernetes.io/projected/24b5e2be-28d1-44bc-a999-d68572529f9a-kube-api-access-fbm7k\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zt7k5\" (UID: \"24b5e2be-28d1-44bc-a999-d68572529f9a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zt7k5" Mar 19 12:32:13.543189 master-0 kubenswrapper[31830]: I0319 12:32:13.543152 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-8j8qk\" (UID: \"45a81c5f-fb70-4b84-8c91-bc55830c36cd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-8j8qk" Mar 19 12:32:13.543233 master-0 kubenswrapper[31830]: I0319 12:32:13.543209 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-8j8qk\" (UID: \"45a81c5f-fb70-4b84-8c91-bc55830c36cd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-8j8qk" Mar 19 12:32:13.543233 master-0 kubenswrapper[31830]: I0319 12:32:13.543230 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv7z6\" (UniqueName: \"kubernetes.io/projected/45a81c5f-fb70-4b84-8c91-bc55830c36cd-kube-api-access-nv7z6\") pod \"openstack-operator-controller-manager-86bd8996f6-8j8qk\" (UID: \"45a81c5f-fb70-4b84-8c91-bc55830c36cd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-8j8qk" Mar 19 12:32:13.543630 master-0 kubenswrapper[31830]: E0319 12:32:13.543605 31830 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 19 12:32:13.543689 master-0 kubenswrapper[31830]: E0319 12:32:13.543650 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-metrics-certs podName:45a81c5f-fb70-4b84-8c91-bc55830c36cd nodeName:}" failed. No retries permitted until 2026-03-19 12:32:14.043633707 +0000 UTC m=+1072.592594411 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-metrics-certs") pod "openstack-operator-controller-manager-86bd8996f6-8j8qk" (UID: "45a81c5f-fb70-4b84-8c91-bc55830c36cd") : secret "metrics-server-cert" not found Mar 19 12:32:13.543733 master-0 kubenswrapper[31830]: E0319 12:32:13.543691 31830 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 19 12:32:13.543733 master-0 kubenswrapper[31830]: E0319 12:32:13.543710 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-webhook-certs podName:45a81c5f-fb70-4b84-8c91-bc55830c36cd nodeName:}" failed. No retries permitted until 2026-03-19 12:32:14.043704679 +0000 UTC m=+1072.592665373 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-webhook-certs") pod "openstack-operator-controller-manager-86bd8996f6-8j8qk" (UID: "45a81c5f-fb70-4b84-8c91-bc55830c36cd") : secret "webhook-server-cert" not found Mar 19 12:32:13.562297 master-0 kubenswrapper[31830]: I0319 12:32:13.561892 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5784578c99-9ldx8" Mar 19 12:32:13.596068 master-0 kubenswrapper[31830]: I0319 12:32:13.595911 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-c674c5965-85lzd" Mar 19 12:32:13.599373 master-0 kubenswrapper[31830]: I0319 12:32:13.599296 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv7z6\" (UniqueName: \"kubernetes.io/projected/45a81c5f-fb70-4b84-8c91-bc55830c36cd-kube-api-access-nv7z6\") pod \"openstack-operator-controller-manager-86bd8996f6-8j8qk\" (UID: \"45a81c5f-fb70-4b84-8c91-bc55830c36cd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-8j8qk" Mar 19 12:32:13.644713 master-0 kubenswrapper[31830]: I0319 12:32:13.644549 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbm7k\" (UniqueName: \"kubernetes.io/projected/24b5e2be-28d1-44bc-a999-d68572529f9a-kube-api-access-fbm7k\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zt7k5\" (UID: \"24b5e2be-28d1-44bc-a999-d68572529f9a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zt7k5" Mar 19 12:32:13.663329 master-0 kubenswrapper[31830]: I0319 12:32:13.663282 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbm7k\" (UniqueName: \"kubernetes.io/projected/24b5e2be-28d1-44bc-a999-d68572529f9a-kube-api-access-fbm7k\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zt7k5\" (UID: \"24b5e2be-28d1-44bc-a999-d68572529f9a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zt7k5" Mar 19 12:32:13.736327 master-0 kubenswrapper[31830]: I0319 12:32:13.736297 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-ztnkm" Mar 19 12:32:13.747996 master-0 kubenswrapper[31830]: I0319 12:32:13.747228 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49025043-9018-47ec-8930-e6580af6aeb2-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899m7flr\" (UID: \"49025043-9018-47ec-8930-e6580af6aeb2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899m7flr" Mar 19 12:32:13.748959 master-0 kubenswrapper[31830]: E0319 12:32:13.748341 31830 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 19 12:32:13.748959 master-0 kubenswrapper[31830]: E0319 12:32:13.748431 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49025043-9018-47ec-8930-e6580af6aeb2-cert podName:49025043-9018-47ec-8930-e6580af6aeb2 nodeName:}" failed. No retries permitted until 2026-03-19 12:32:14.748386897 +0000 UTC m=+1073.297347601 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/49025043-9018-47ec-8930-e6580af6aeb2-cert") pod "openstack-baremetal-operator-controller-manager-74c4796899m7flr" (UID: "49025043-9018-47ec-8930-e6580af6aeb2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 19 12:32:13.817069 master-0 kubenswrapper[31830]: I0319 12:32:13.816207 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bqfjg" Mar 19 12:32:13.817069 master-0 kubenswrapper[31830]: I0319 12:32:13.816734 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-gx66s" Mar 19 12:32:13.876253 master-0 kubenswrapper[31830]: W0319 12:32:13.874716 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9962d57a_2869_4044_a24e_65338d28f6c3.slice/crio-256bd54f450217f43678411992af48905d2997356b7a1947e429b3b32a1af35a WatchSource:0}: Error finding container 256bd54f450217f43678411992af48905d2997356b7a1947e429b3b32a1af35a: Status 404 returned error can't find the container with id 256bd54f450217f43678411992af48905d2997356b7a1947e429b3b32a1af35a Mar 19 12:32:13.882408 master-0 kubenswrapper[31830]: I0319 12:32:13.882310 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zt7k5" Mar 19 12:32:13.885709 master-0 kubenswrapper[31830]: I0319 12:32:13.885679 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-588d4d986b-4zgd2"] Mar 19 12:32:13.998952 master-0 kubenswrapper[31830]: I0319 12:32:13.998849 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-4zgd2" event={"ID":"9962d57a-2869-4044-a24e-65338d28f6c3","Type":"ContainerStarted","Data":"256bd54f450217f43678411992af48905d2997356b7a1947e429b3b32a1af35a"} Mar 19 12:32:14.008723 master-0 kubenswrapper[31830]: I0319 12:32:14.008656 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-cbcqj" event={"ID":"ec2e9575-5f21-44a5-a34c-f076f726a1d2","Type":"ContainerStarted","Data":"a09de25ca21df9e977b9f8185a7852f568177e2fee17bfa0533345ae863cbb56"} Mar 19 12:32:14.090888 master-0 kubenswrapper[31830]: I0319 12:32:14.090834 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-8j8qk\" (UID: \"45a81c5f-fb70-4b84-8c91-bc55830c36cd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-8j8qk" Mar 19 12:32:14.093747 master-0 kubenswrapper[31830]: E0319 12:32:14.091190 31830 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 19 12:32:14.093747 master-0 kubenswrapper[31830]: E0319 12:32:14.091480 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-metrics-certs podName:45a81c5f-fb70-4b84-8c91-bc55830c36cd nodeName:}" failed. No retries permitted until 2026-03-19 12:32:15.091302151 +0000 UTC m=+1073.640262855 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-metrics-certs") pod "openstack-operator-controller-manager-86bd8996f6-8j8qk" (UID: "45a81c5f-fb70-4b84-8c91-bc55830c36cd") : secret "metrics-server-cert" not found Mar 19 12:32:14.093747 master-0 kubenswrapper[31830]: E0319 12:32:14.091868 31830 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 19 12:32:14.093747 master-0 kubenswrapper[31830]: E0319 12:32:14.091954 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-webhook-certs podName:45a81c5f-fb70-4b84-8c91-bc55830c36cd nodeName:}" failed. No retries permitted until 2026-03-19 12:32:15.091939911 +0000 UTC m=+1073.640900615 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-webhook-certs") pod "openstack-operator-controller-manager-86bd8996f6-8j8qk" (UID: "45a81c5f-fb70-4b84-8c91-bc55830c36cd") : secret "webhook-server-cert" not found Mar 19 12:32:14.093747 master-0 kubenswrapper[31830]: I0319 12:32:14.091672 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-8j8qk\" (UID: \"45a81c5f-fb70-4b84-8c91-bc55830c36cd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-8j8qk" Mar 19 12:32:14.172870 master-0 kubenswrapper[31830]: I0319 12:32:14.172242 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-8464cc45fb-d2qll"] Mar 19 12:32:14.190598 master-0 kubenswrapper[31830]: W0319 12:32:14.180997 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod333d933c_7a84_455c_80c8_d5795ba1058d.slice/crio-c476cab3581712ad1baf3b584c4ae575d8f8df0a2047e4e9dfc0578bbfe060c2 WatchSource:0}: Error finding container c476cab3581712ad1baf3b584c4ae575d8f8df0a2047e4e9dfc0578bbfe060c2: Status 404 returned error can't find the container with id c476cab3581712ad1baf3b584c4ae575d8f8df0a2047e4e9dfc0578bbfe060c2 Mar 19 12:32:14.190598 master-0 kubenswrapper[31830]: I0319 12:32:14.182005 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59bc569d95-x6nhk"] Mar 19 12:32:14.193912 master-0 kubenswrapper[31830]: I0319 12:32:14.192402 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-67dd5f86f5-wq7gk"] Mar 19 12:32:14.399960 master-0 kubenswrapper[31830]: I0319 12:32:14.398672 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f0b9a13-7862-4829-a97d-56034487da2e-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-6kkfv\" (UID: \"1f0b9a13-7862-4829-a97d-56034487da2e\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-6kkfv" Mar 19 12:32:14.399960 master-0 kubenswrapper[31830]: E0319 12:32:14.399021 31830 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 19 12:32:14.399960 master-0 kubenswrapper[31830]: E0319 12:32:14.399103 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f0b9a13-7862-4829-a97d-56034487da2e-cert podName:1f0b9a13-7862-4829-a97d-56034487da2e nodeName:}" failed. No retries permitted until 2026-03-19 12:32:16.399088496 +0000 UTC m=+1074.948049200 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1f0b9a13-7862-4829-a97d-56034487da2e-cert") pod "infra-operator-controller-manager-7dd6bb94c9-6kkfv" (UID: "1f0b9a13-7862-4829-a97d-56034487da2e") : secret "infra-operator-webhook-server-cert" not found Mar 19 12:32:14.813876 master-0 kubenswrapper[31830]: I0319 12:32:14.813529 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49025043-9018-47ec-8930-e6580af6aeb2-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899m7flr\" (UID: \"49025043-9018-47ec-8930-e6580af6aeb2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899m7flr" Mar 19 12:32:14.813876 master-0 kubenswrapper[31830]: E0319 12:32:14.813849 31830 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 19 12:32:14.814162 master-0 kubenswrapper[31830]: E0319 12:32:14.813907 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49025043-9018-47ec-8930-e6580af6aeb2-cert podName:49025043-9018-47ec-8930-e6580af6aeb2 nodeName:}" failed. No retries permitted until 2026-03-19 12:32:16.81389218 +0000 UTC m=+1075.362852884 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/49025043-9018-47ec-8930-e6580af6aeb2-cert") pod "openstack-baremetal-operator-controller-manager-74c4796899m7flr" (UID: "49025043-9018-47ec-8930-e6580af6aeb2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 19 12:32:15.031915 master-0 kubenswrapper[31830]: I0319 12:32:15.029890 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-884679f54-4jhnc"] Mar 19 12:32:15.060776 master-0 kubenswrapper[31830]: I0319 12:32:15.060653 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-x6nhk" event={"ID":"528a7681-3153-4efc-9a5b-538929555c6d","Type":"ContainerStarted","Data":"05119e8f987034add64475bc75fa11a8a5be6936828e7404fd188bf8b85a64ea"} Mar 19 12:32:15.062252 master-0 kubenswrapper[31830]: I0319 12:32:15.062216 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-wq7gk" event={"ID":"2ca9358c-cf3c-4965-a617-08dcd5e916c4","Type":"ContainerStarted","Data":"a942ba943cf5a9aa32fccf8bc2f7d7a5848fd998319dad04493458bf3e4e7d15"} Mar 19 12:32:15.064625 master-0 kubenswrapper[31830]: I0319 12:32:15.064577 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-d2qll" event={"ID":"333d933c-7a84-455c-80c8-d5795ba1058d","Type":"ContainerStarted","Data":"c476cab3581712ad1baf3b584c4ae575d8f8df0a2047e4e9dfc0578bbfe060c2"} Mar 19 12:32:15.077322 master-0 kubenswrapper[31830]: I0319 12:32:15.077272 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-79df6bcc97-cbj6m"] Mar 19 12:32:15.167201 master-0 kubenswrapper[31830]: I0319 12:32:15.145606 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-8j8qk\" (UID: 
\"45a81c5f-fb70-4b84-8c91-bc55830c36cd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-8j8qk" Mar 19 12:32:15.167201 master-0 kubenswrapper[31830]: I0319 12:32:15.145681 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-8j8qk\" (UID: \"45a81c5f-fb70-4b84-8c91-bc55830c36cd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-8j8qk" Mar 19 12:32:15.167201 master-0 kubenswrapper[31830]: E0319 12:32:15.145880 31830 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 19 12:32:15.167201 master-0 kubenswrapper[31830]: E0319 12:32:15.146671 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-webhook-certs podName:45a81c5f-fb70-4b84-8c91-bc55830c36cd nodeName:}" failed. No retries permitted until 2026-03-19 12:32:17.14665305 +0000 UTC m=+1075.695613754 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-webhook-certs") pod "openstack-operator-controller-manager-86bd8996f6-8j8qk" (UID: "45a81c5f-fb70-4b84-8c91-bc55830c36cd") : secret "webhook-server-cert" not found Mar 19 12:32:15.167201 master-0 kubenswrapper[31830]: E0319 12:32:15.147812 31830 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 19 12:32:15.167201 master-0 kubenswrapper[31830]: E0319 12:32:15.147908 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-metrics-certs podName:45a81c5f-fb70-4b84-8c91-bc55830c36cd nodeName:}" failed. No retries permitted until 2026-03-19 12:32:17.147834517 +0000 UTC m=+1075.696795221 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-metrics-certs") pod "openstack-operator-controller-manager-86bd8996f6-8j8qk" (UID: "45a81c5f-fb70-4b84-8c91-bc55830c36cd") : secret "metrics-server-cert" not found Mar 19 12:32:15.167201 master-0 kubenswrapper[31830]: I0319 12:32:15.150952 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-768b96df4c-tc6m5"] Mar 19 12:32:15.203426 master-0 kubenswrapper[31830]: W0319 12:32:15.202061 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5cf325f_5ed3_416a_b7cf_c95cc198afff.slice/crio-820129ba46082275c0ad4414ec035469ecff635e5623f2d7dc5ab0e5a5a11f48 WatchSource:0}: Error finding container 820129ba46082275c0ad4414ec035469ecff635e5623f2d7dc5ab0e5a5a11f48: Status 404 returned error can't find the container with id 820129ba46082275c0ad4414ec035469ecff635e5623f2d7dc5ab0e5a5a11f48 Mar 19 12:32:15.237618 master-0 kubenswrapper[31830]: I0319 12:32:15.237568 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d488d59fb-9vdlb"] Mar 19 12:32:15.288284 master-0 kubenswrapper[31830]: I0319 12:32:15.288214 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-55f864c847-vrk79"] Mar 19 12:32:15.295753 master-0 kubenswrapper[31830]: I0319 12:32:15.295711 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67ccfc9778-xslmv"] Mar 19 12:32:15.303559 master-0 kubenswrapper[31830]: I0319 12:32:15.303508 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6f787dddc9-lpsb9"] Mar 19 12:32:15.312137 master-0 kubenswrapper[31830]: I0319 12:32:15.312054 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-px9d7"] Mar 19 12:32:15.722113 master-0 kubenswrapper[31830]: I0319 12:32:15.720485 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bqfjg"] Mar 19 12:32:15.753519 master-0 kubenswrapper[31830]: I0319 12:32:15.753456 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-c674c5965-85lzd"] Mar 19 12:32:15.777479 master-0 kubenswrapper[31830]: I0319 12:32:15.777356 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5784578c99-9ldx8"] Mar 19 12:32:15.797672 master-0 kubenswrapper[31830]: E0319 12:32:15.797320 31830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:c500fa7080b94105e85eeced772d8872e4168904e74ba02116e15ab66f522444,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m 
DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9t9xw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-d6b694c5-ztnkm_openstack-operators(8dac5751-ffc3-4927-9cb4-362538cffc88): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 19 12:32:15.797984 master-0 kubenswrapper[31830]: W0319 12:32:15.797943 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9897197c_6347_48f3_bce4_f2e70d2241af.slice/crio-6e0703625967e025268540fad5a8d55a2328819932b23949582ffe49cecdd5bc WatchSource:0}: Error finding container 6e0703625967e025268540fad5a8d55a2328819932b23949582ffe49cecdd5bc: Status 404 returned error can't find the container with id 6e0703625967e025268540fad5a8d55a2328819932b23949582ffe49cecdd5bc Mar 19 12:32:15.798687 master-0 kubenswrapper[31830]: E0319 12:32:15.798605 31830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-ztnkm" podUID="8dac5751-ffc3-4927-9cb4-362538cffc88" Mar 19 12:32:15.810528 master-0 kubenswrapper[31830]: I0319 12:32:15.810226 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-gx66s"] Mar 19 12:32:15.814653 master-0 kubenswrapper[31830]: E0319 12:32:15.814487 31830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:c8743a6661d118b0e5ba3eb110643358a8a3237dc75984a8f9829880b55a1622,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vr985,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5784578c99-9ldx8_openstack-operators(9897197c-6347-48f3-bce4-f2e70d2241af): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 19 12:32:15.816257 master-0 kubenswrapper[31830]: E0319 12:32:15.816134 31830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5784578c99-9ldx8" podUID="9897197c-6347-48f3-bce4-f2e70d2241af" Mar 19 12:32:15.833131 master-0 kubenswrapper[31830]: I0319 12:32:15.833060 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zt7k5"] Mar 19 12:32:15.844453 master-0 kubenswrapper[31830]: I0319 12:32:15.844387 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5b9f45d989-22wxs"] Mar 19 12:32:15.872017 master-0 kubenswrapper[31830]: I0319 12:32:15.871958 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-d6b694c5-ztnkm"] Mar 19 12:32:16.096956 master-0 kubenswrapper[31830]: I0319 12:32:16.092913 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-767865f676-px9d7" event={"ID":"467e2f90-bbbf-4d88-9b56-9ed6a353b45f","Type":"ContainerStarted","Data":"374b11a1531b831ef88985ebf9e57f88b97f4fe5475dd778ba6c3c4d657934d0"} Mar 19 
12:32:16.105851 master-0 kubenswrapper[31830]: I0319 12:32:16.105724 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-gx66s" event={"ID":"558a5b2d-e0d2-4a17-ab12-f4e3da3c522a","Type":"ContainerStarted","Data":"0a1c08cbf6fb614470437fb52bb1b6298c75b5aebd265710568b0500cd6f0365"} Mar 19 12:32:16.107688 master-0 kubenswrapper[31830]: I0319 12:32:16.107284 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-9vdlb" event={"ID":"82330c7e-a21c-42e0-9f7c-ddc6e7269f0c","Type":"ContainerStarted","Data":"4a24839482f2143c10001b86767bd4424eed328721c1b3425c5add2ea9e24036"} Mar 19 12:32:16.110243 master-0 kubenswrapper[31830]: I0319 12:32:16.110205 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-c674c5965-85lzd" event={"ID":"1e3ac87a-41fb-4d68-8531-01685bc8f17c","Type":"ContainerStarted","Data":"e9e4f6428082618db52ce7b35395e3a1850d029836a0d8e6903bb3e300cf2dce"} Mar 19 12:32:16.120913 master-0 kubenswrapper[31830]: I0319 12:32:16.120823 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5784578c99-9ldx8" event={"ID":"9897197c-6347-48f3-bce4-f2e70d2241af","Type":"ContainerStarted","Data":"6e0703625967e025268540fad5a8d55a2328819932b23949582ffe49cecdd5bc"} Mar 19 12:32:16.123833 master-0 kubenswrapper[31830]: I0319 12:32:16.123756 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-ztnkm" event={"ID":"8dac5751-ffc3-4927-9cb4-362538cffc88","Type":"ContainerStarted","Data":"463aea6c356f6e4ec58ef3ab694c64bda13a6b71c78838f622fb43837ec6853f"} Mar 19 12:32:16.125250 master-0 kubenswrapper[31830]: I0319 12:32:16.125224 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-55f864c847-vrk79" event={"ID":"ac5b2ff6-6088-45fd-9b33-6d20a3ad9e59","Type":"ContainerStarted","Data":"034006f92a93c8064ea70d6dd95b4ff75bbe2d316576e3c3135324163fc929e9"} Mar 19 12:32:16.129968 master-0 kubenswrapper[31830]: I0319 12:32:16.129917 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-tc6m5" event={"ID":"a30da668-d209-4afb-a612-79302fb7942e","Type":"ContainerStarted","Data":"e9934d546d19d5e7deee2deff791fc63aa05812661451b8f334625df4196d3fe"} Mar 19 12:32:16.132606 master-0 kubenswrapper[31830]: I0319 12:32:16.132564 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-22wxs" event={"ID":"da8e07b7-2ac3-454b-a30a-51b242c86b6a","Type":"ContainerStarted","Data":"e3e42361d4b051c161ff4dbda1f663deeb71bcd537cd6f317701ed9d564d5509"} Mar 19 12:32:16.133646 master-0 kubenswrapper[31830]: I0319 12:32:16.133600 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-xslmv" event={"ID":"b5cf325f-5ed3-416a-b7cf-c95cc198afff","Type":"ContainerStarted","Data":"820129ba46082275c0ad4414ec035469ecff635e5623f2d7dc5ab0e5a5a11f48"} Mar 19 12:32:16.136300 master-0 kubenswrapper[31830]: E0319 12:32:16.136207 31830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:c500fa7080b94105e85eeced772d8872e4168904e74ba02116e15ab66f522444\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-ztnkm" podUID="8dac5751-ffc3-4927-9cb4-362538cffc88" Mar 19 12:32:16.136420 master-0 kubenswrapper[31830]: E0319 12:32:16.136334 31830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:c8743a6661d118b0e5ba3eb110643358a8a3237dc75984a8f9829880b55a1622\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5784578c99-9ldx8" podUID="9897197c-6347-48f3-bce4-f2e70d2241af" Mar 19 12:32:16.141150 master-0 kubenswrapper[31830]: I0319 12:32:16.141005 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-cbj6m" event={"ID":"0bf9354e-75bc-4f4d-b665-f23bf828bfa8","Type":"ContainerStarted","Data":"b6a24cb511618ea9dc05437bdfa675a0abd119625f583650d3e7517b18e6b26d"} Mar 19 12:32:16.145397 master-0 kubenswrapper[31830]: I0319 12:32:16.145207 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-lpsb9" event={"ID":"cc913cd6-6365-4019-a201-f4ed756e7238","Type":"ContainerStarted","Data":"e316d53f588cb83ec70738982f5ab51a5d96dfe0676c10a9e83115480f1f7035"} Mar 19 12:32:16.147385 master-0 kubenswrapper[31830]: I0319 12:32:16.147329 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bqfjg" event={"ID":"462afca1-50bf-43ba-bcdf-b7d71f9504d5","Type":"ContainerStarted","Data":"b91aafac79c3c3862f96b48cef18a16911cc6b73bc9d8c1d6052108da7e67b53"} Mar 19 12:32:16.150899 master-0 kubenswrapper[31830]: I0319 12:32:16.150749 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zt7k5" event={"ID":"24b5e2be-28d1-44bc-a999-d68572529f9a","Type":"ContainerStarted","Data":"c89be4b1bd7b856312845aaf630e7ff6e1d5d50839320f7584212c6375dd1ade"} Mar 19 12:32:16.153467 master-0 kubenswrapper[31830]: I0319 12:32:16.153418 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-884679f54-4jhnc" event={"ID":"ebd2199d-6888-4d1a-8e5d-b951062bdc18","Type":"ContainerStarted","Data":"c3b46a74968d9038f9d5e6cce5df55c3e8ca0e35b049c014857f56ffb05248c7"} Mar 19 12:32:16.478391 master-0 kubenswrapper[31830]: I0319 12:32:16.478345 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f0b9a13-7862-4829-a97d-56034487da2e-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-6kkfv\" (UID: \"1f0b9a13-7862-4829-a97d-56034487da2e\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-6kkfv" Mar 19 12:32:16.479095 master-0 kubenswrapper[31830]: E0319 12:32:16.478549 31830 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 19 12:32:16.479095 master-0 kubenswrapper[31830]: E0319 12:32:16.478607 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f0b9a13-7862-4829-a97d-56034487da2e-cert podName:1f0b9a13-7862-4829-a97d-56034487da2e nodeName:}" failed. 
No retries permitted until 2026-03-19 12:32:20.478588007 +0000 UTC m=+1079.027548711 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1f0b9a13-7862-4829-a97d-56034487da2e-cert") pod "infra-operator-controller-manager-7dd6bb94c9-6kkfv" (UID: "1f0b9a13-7862-4829-a97d-56034487da2e") : secret "infra-operator-webhook-server-cert" not found Mar 19 12:32:16.890550 master-0 kubenswrapper[31830]: I0319 12:32:16.890424 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49025043-9018-47ec-8930-e6580af6aeb2-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899m7flr\" (UID: \"49025043-9018-47ec-8930-e6580af6aeb2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899m7flr" Mar 19 12:32:16.890947 master-0 kubenswrapper[31830]: E0319 12:32:16.890917 31830 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 19 12:32:16.891511 master-0 kubenswrapper[31830]: E0319 12:32:16.891354 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49025043-9018-47ec-8930-e6580af6aeb2-cert podName:49025043-9018-47ec-8930-e6580af6aeb2 nodeName:}" failed. No retries permitted until 2026-03-19 12:32:20.891015938 +0000 UTC m=+1079.439976702 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/49025043-9018-47ec-8930-e6580af6aeb2-cert") pod "openstack-baremetal-operator-controller-manager-74c4796899m7flr" (UID: "49025043-9018-47ec-8930-e6580af6aeb2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 19 12:32:17.165023 master-0 kubenswrapper[31830]: E0319 12:32:17.164723 31830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:c500fa7080b94105e85eeced772d8872e4168904e74ba02116e15ab66f522444\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-ztnkm" podUID="8dac5751-ffc3-4927-9cb4-362538cffc88" Mar 19 12:32:17.167588 master-0 kubenswrapper[31830]: E0319 12:32:17.167509 31830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:c8743a6661d118b0e5ba3eb110643358a8a3237dc75984a8f9829880b55a1622\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5784578c99-9ldx8" podUID="9897197c-6347-48f3-bce4-f2e70d2241af" Mar 19 12:32:17.199905 master-0 kubenswrapper[31830]: I0319 12:32:17.199517 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-8j8qk\" (UID: \"45a81c5f-fb70-4b84-8c91-bc55830c36cd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-8j8qk" Mar 19 12:32:17.199905 master-0 kubenswrapper[31830]: E0319 12:32:17.199880 31830 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 19 12:32:17.200200 master-0 kubenswrapper[31830]: E0319 12:32:17.200130 31830 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-metrics-certs podName:45a81c5f-fb70-4b84-8c91-bc55830c36cd nodeName:}" failed. No retries permitted until 2026-03-19 12:32:21.199946429 +0000 UTC m=+1079.748907173 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-metrics-certs") pod "openstack-operator-controller-manager-86bd8996f6-8j8qk" (UID: "45a81c5f-fb70-4b84-8c91-bc55830c36cd") : secret "metrics-server-cert" not found Mar 19 12:32:17.201119 master-0 kubenswrapper[31830]: I0319 12:32:17.200464 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-8j8qk\" (UID: \"45a81c5f-fb70-4b84-8c91-bc55830c36cd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-8j8qk" Mar 19 12:32:17.201119 master-0 kubenswrapper[31830]: E0319 12:32:17.200746 31830 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 19 12:32:17.201119 master-0 kubenswrapper[31830]: E0319 12:32:17.200818 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-webhook-certs podName:45a81c5f-fb70-4b84-8c91-bc55830c36cd nodeName:}" failed. No retries permitted until 2026-03-19 12:32:21.200782154 +0000 UTC m=+1079.749742908 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-webhook-certs") pod "openstack-operator-controller-manager-86bd8996f6-8j8qk" (UID: "45a81c5f-fb70-4b84-8c91-bc55830c36cd") : secret "webhook-server-cert" not found Mar 19 12:32:20.567251 master-0 kubenswrapper[31830]: I0319 12:32:20.567195 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f0b9a13-7862-4829-a97d-56034487da2e-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-6kkfv\" (UID: \"1f0b9a13-7862-4829-a97d-56034487da2e\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-6kkfv" Mar 19 12:32:20.568017 master-0 kubenswrapper[31830]: E0319 12:32:20.567376 31830 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 19 12:32:20.568017 master-0 kubenswrapper[31830]: E0319 12:32:20.567477 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f0b9a13-7862-4829-a97d-56034487da2e-cert podName:1f0b9a13-7862-4829-a97d-56034487da2e nodeName:}" failed. No retries permitted until 2026-03-19 12:32:28.567454864 +0000 UTC m=+1087.116415648 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1f0b9a13-7862-4829-a97d-56034487da2e-cert") pod "infra-operator-controller-manager-7dd6bb94c9-6kkfv" (UID: "1f0b9a13-7862-4829-a97d-56034487da2e") : secret "infra-operator-webhook-server-cert" not found Mar 19 12:32:20.976203 master-0 kubenswrapper[31830]: I0319 12:32:20.976145 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49025043-9018-47ec-8930-e6580af6aeb2-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899m7flr\" (UID: \"49025043-9018-47ec-8930-e6580af6aeb2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899m7flr" Mar 19 12:32:20.976510 master-0 kubenswrapper[31830]: E0319 12:32:20.976330 31830 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 19 12:32:20.976510 master-0 kubenswrapper[31830]: E0319 12:32:20.976398 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49025043-9018-47ec-8930-e6580af6aeb2-cert podName:49025043-9018-47ec-8930-e6580af6aeb2 nodeName:}" failed. No retries permitted until 2026-03-19 12:32:28.976381046 +0000 UTC m=+1087.525341750 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/49025043-9018-47ec-8930-e6580af6aeb2-cert") pod "openstack-baremetal-operator-controller-manager-74c4796899m7flr" (UID: "49025043-9018-47ec-8930-e6580af6aeb2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 19 12:32:21.282889 master-0 kubenswrapper[31830]: I0319 12:32:21.282664 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-8j8qk\" (UID: \"45a81c5f-fb70-4b84-8c91-bc55830c36cd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-8j8qk" Mar 19 12:32:21.283101 master-0 kubenswrapper[31830]: E0319 12:32:21.282931 31830 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 19 12:32:21.283101 master-0 kubenswrapper[31830]: I0319 12:32:21.283008 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-8j8qk\" (UID: \"45a81c5f-fb70-4b84-8c91-bc55830c36cd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-8j8qk" Mar 19 12:32:21.283101 master-0 kubenswrapper[31830]: E0319 12:32:21.283037 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-webhook-certs podName:45a81c5f-fb70-4b84-8c91-bc55830c36cd nodeName:}" failed. No retries permitted until 2026-03-19 12:32:29.283015225 +0000 UTC m=+1087.831976019 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-webhook-certs") pod "openstack-operator-controller-manager-86bd8996f6-8j8qk" (UID: "45a81c5f-fb70-4b84-8c91-bc55830c36cd") : secret "webhook-server-cert" not found Mar 19 12:32:21.283268 master-0 kubenswrapper[31830]: E0319 12:32:21.283215 31830 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 19 12:32:21.283317 master-0 kubenswrapper[31830]: E0319 12:32:21.283305 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-metrics-certs podName:45a81c5f-fb70-4b84-8c91-bc55830c36cd nodeName:}" failed. No retries permitted until 2026-03-19 12:32:29.283279783 +0000 UTC m=+1087.832240557 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-metrics-certs") pod "openstack-operator-controller-manager-86bd8996f6-8j8qk" (UID: "45a81c5f-fb70-4b84-8c91-bc55830c36cd") : secret "metrics-server-cert" not found Mar 19 12:32:28.668225 master-0 kubenswrapper[31830]: I0319 12:32:28.668169 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f0b9a13-7862-4829-a97d-56034487da2e-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-6kkfv\" (UID: \"1f0b9a13-7862-4829-a97d-56034487da2e\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-6kkfv" Mar 19 12:32:28.668896 master-0 kubenswrapper[31830]: E0319 12:32:28.668333 31830 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 19 12:32:28.668896 master-0 kubenswrapper[31830]: E0319 12:32:28.668402 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f0b9a13-7862-4829-a97d-56034487da2e-cert podName:1f0b9a13-7862-4829-a97d-56034487da2e nodeName:}" failed. No retries permitted until 2026-03-19 12:32:44.668384526 +0000 UTC m=+1103.217345230 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1f0b9a13-7862-4829-a97d-56034487da2e-cert") pod "infra-operator-controller-manager-7dd6bb94c9-6kkfv" (UID: "1f0b9a13-7862-4829-a97d-56034487da2e") : secret "infra-operator-webhook-server-cert" not found Mar 19 12:32:29.076166 master-0 kubenswrapper[31830]: I0319 12:32:29.075999 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49025043-9018-47ec-8930-e6580af6aeb2-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899m7flr\" (UID: \"49025043-9018-47ec-8930-e6580af6aeb2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899m7flr" Mar 19 12:32:29.076423 master-0 kubenswrapper[31830]: E0319 12:32:29.076217 31830 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 19 12:32:29.076423 master-0 kubenswrapper[31830]: E0319 12:32:29.076293 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49025043-9018-47ec-8930-e6580af6aeb2-cert podName:49025043-9018-47ec-8930-e6580af6aeb2 nodeName:}" failed. 
No retries permitted until 2026-03-19 12:32:45.076275426 +0000 UTC m=+1103.625236130 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/49025043-9018-47ec-8930-e6580af6aeb2-cert") pod "openstack-baremetal-operator-controller-manager-74c4796899m7flr" (UID: "49025043-9018-47ec-8930-e6580af6aeb2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 19 12:32:29.381775 master-0 kubenswrapper[31830]: I0319 12:32:29.381603 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-8j8qk\" (UID: \"45a81c5f-fb70-4b84-8c91-bc55830c36cd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-8j8qk" Mar 19 12:32:29.381775 master-0 kubenswrapper[31830]: E0319 12:32:29.381716 31830 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 19 12:32:29.381775 master-0 kubenswrapper[31830]: I0319 12:32:29.381761 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-8j8qk\" (UID: \"45a81c5f-fb70-4b84-8c91-bc55830c36cd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-8j8qk" Mar 19 12:32:29.382098 master-0 kubenswrapper[31830]: E0319 12:32:29.381785 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-metrics-certs podName:45a81c5f-fb70-4b84-8c91-bc55830c36cd nodeName:}" failed. No retries permitted until 2026-03-19 12:32:45.38176419 +0000 UTC m=+1103.930724894 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-metrics-certs") pod "openstack-operator-controller-manager-86bd8996f6-8j8qk" (UID: "45a81c5f-fb70-4b84-8c91-bc55830c36cd") : secret "metrics-server-cert" not found Mar 19 12:32:29.382098 master-0 kubenswrapper[31830]: E0319 12:32:29.381924 31830 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 19 12:32:29.382098 master-0 kubenswrapper[31830]: E0319 12:32:29.381974 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-webhook-certs podName:45a81c5f-fb70-4b84-8c91-bc55830c36cd nodeName:}" failed. No retries permitted until 2026-03-19 12:32:45.381961596 +0000 UTC m=+1103.930922300 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-webhook-certs") pod "openstack-operator-controller-manager-86bd8996f6-8j8qk" (UID: "45a81c5f-fb70-4b84-8c91-bc55830c36cd") : secret "webhook-server-cert" not found Mar 19 12:32:39.474826 master-0 kubenswrapper[31830]: I0319 12:32:39.473112 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-d2qll" event={"ID":"333d933c-7a84-455c-80c8-d5795ba1058d","Type":"ContainerStarted","Data":"c40d334b7becbb300e3e21db99324521e356247bc6382e97d13b4eec325cbaf1"} Mar 19 12:32:39.474826 master-0 kubenswrapper[31830]: I0319 12:32:39.473699 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-d2qll" Mar 19 12:32:39.487824 master-0 kubenswrapper[31830]: I0319 12:32:39.485988 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-xslmv" event={"ID":"b5cf325f-5ed3-416a-b7cf-c95cc198afff","Type":"ContainerStarted","Data":"da57ce6f4e5c5497e490c9bce0afa3995e051e26b109b190a9aed88e406febb3"} Mar 19 12:32:39.487824 master-0 kubenswrapper[31830]: I0319 12:32:39.486815 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-xslmv" Mar 19 12:32:39.506841 master-0 kubenswrapper[31830]: I0319 12:32:39.502992 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-x6nhk" event={"ID":"528a7681-3153-4efc-9a5b-538929555c6d","Type":"ContainerStarted","Data":"0ce6c812492aee8eec44e9dc77f45270bfcfd50dc019b0332ef10d1061e96ec6"} Mar 19 12:32:39.506841 master-0 kubenswrapper[31830]: I0319 12:32:39.503743 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-x6nhk" Mar 19 12:32:39.523814 master-0 kubenswrapper[31830]: I0319 12:32:39.523316 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-cbj6m" event={"ID":"0bf9354e-75bc-4f4d-b665-f23bf828bfa8","Type":"ContainerStarted","Data":"efb96bcef924d285b06df58a774d3bb7aaa4ba8ca9d3a36f1fb22baa579e5127"} Mar 19 12:32:39.523814 master-0 kubenswrapper[31830]: I0319 12:32:39.523475 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-cbj6m" Mar 19 12:32:39.534877 master-0 kubenswrapper[31830]: I0319 12:32:39.533110 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-4zgd2" event={"ID":"9962d57a-2869-4044-a24e-65338d28f6c3","Type":"ContainerStarted","Data":"13fb3994fd220f5f171e748a16c28226987dfaf87a6e55ccde588c1166082a17"} Mar 19 12:32:39.534877 master-0 kubenswrapper[31830]: I0319 12:32:39.534105 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-4zgd2" Mar 19 12:32:39.551139 master-0 kubenswrapper[31830]: I0319 12:32:39.551071 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5784578c99-9ldx8" 
event={"ID":"9897197c-6347-48f3-bce4-f2e70d2241af","Type":"ContainerStarted","Data":"05a078eaf4c5a22f03302793a273d3c6241234d71df2adf405e8add2f709daad"} Mar 19 12:32:39.551964 master-0 kubenswrapper[31830]: I0319 12:32:39.551919 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5784578c99-9ldx8" Mar 19 12:32:39.572129 master-0 kubenswrapper[31830]: I0319 12:32:39.572054 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bqfjg" event={"ID":"462afca1-50bf-43ba-bcdf-b7d71f9504d5","Type":"ContainerStarted","Data":"b5f6c6d81936ac955f2d958923fc57a19dfa228f069392fea22ebe692ec01a52"} Mar 19 12:32:39.573034 master-0 kubenswrapper[31830]: I0319 12:32:39.572998 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bqfjg" Mar 19 12:32:39.584050 master-0 kubenswrapper[31830]: I0319 12:32:39.584001 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-22wxs" event={"ID":"da8e07b7-2ac3-454b-a30a-51b242c86b6a","Type":"ContainerStarted","Data":"a24a4afb9e816c263c1e4a9d86ec7c7abde7f7c62e79e88e8758737c7b4bcaf1"} Mar 19 12:32:39.585747 master-0 kubenswrapper[31830]: I0319 12:32:39.585659 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-22wxs" Mar 19 12:32:39.606872 master-0 kubenswrapper[31830]: I0319 12:32:39.604327 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zt7k5" event={"ID":"24b5e2be-28d1-44bc-a999-d68572529f9a","Type":"ContainerStarted","Data":"68df6667f09a90bbc72c9d61c889932db29ca088b9d716a7b48794ae7a7167d1"} Mar 19 12:32:39.622942 master-0 kubenswrapper[31830]: I0319 12:32:39.621989 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-884679f54-4jhnc" event={"ID":"ebd2199d-6888-4d1a-8e5d-b951062bdc18","Type":"ContainerStarted","Data":"db747402b6d7bc27688499c69d28702dcc70b3e59886be4142799c547f25b49d"} Mar 19 12:32:39.622942 master-0 kubenswrapper[31830]: I0319 12:32:39.622872 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-884679f54-4jhnc" Mar 19 12:32:39.632152 master-0 kubenswrapper[31830]: I0319 12:32:39.631233 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-ztnkm" event={"ID":"8dac5751-ffc3-4927-9cb4-362538cffc88","Type":"ContainerStarted","Data":"0c6ff5bdf12282803e18b597ad64002a82b54b4d6680a58344b3d47625f8842d"} Mar 19 12:32:39.632152 master-0 kubenswrapper[31830]: I0319 12:32:39.632077 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-ztnkm" Mar 19 12:32:39.634306 master-0 kubenswrapper[31830]: I0319 12:32:39.634250 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-c674c5965-85lzd" event={"ID":"1e3ac87a-41fb-4d68-8531-01685bc8f17c","Type":"ContainerStarted","Data":"5ab41022c0c38d565f6f9adff70d91a81a0c886fddf8c15fddcd5958f6c7d69c"} Mar 19 12:32:39.635114 master-0 kubenswrapper[31830]: I0319 12:32:39.635071 31830 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-c674c5965-85lzd" Mar 19 12:32:39.650850 master-0 kubenswrapper[31830]: I0319 12:32:39.650787 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-cbcqj" event={"ID":"ec2e9575-5f21-44a5-a34c-f076f726a1d2","Type":"ContainerStarted","Data":"9ddc9fccee7a2bd0442c627428a034c5bd7309e81caa3c7559f77fbfcc64686e"} Mar 19 12:32:39.651141 master-0 kubenswrapper[31830]: I0319 12:32:39.651096 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-cbcqj" Mar 19 12:32:39.659347 master-0 kubenswrapper[31830]: I0319 12:32:39.659287 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-55f864c847-vrk79" event={"ID":"ac5b2ff6-6088-45fd-9b33-6d20a3ad9e59","Type":"ContainerStarted","Data":"ffaaee3de0702331c9fe16fddadbe0d34cd0485533f5268a33eccf0795679057"} Mar 19 12:32:39.660068 master-0 kubenswrapper[31830]: I0319 12:32:39.660037 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-55f864c847-vrk79" Mar 19 12:32:39.661184 master-0 kubenswrapper[31830]: I0319 12:32:39.661149 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-lpsb9" event={"ID":"cc913cd6-6365-4019-a201-f4ed756e7238","Type":"ContainerStarted","Data":"8ac681e3e05d3d0a39c8f3a1c5e5a2ce2d50d55312a1580470394aa63e8372ac"} Mar 19 12:32:39.661592 master-0 kubenswrapper[31830]: I0319 12:32:39.661564 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-lpsb9" Mar 19 12:32:39.662577 master-0 kubenswrapper[31830]: I0319 12:32:39.662543 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-gx66s" event={"ID":"558a5b2d-e0d2-4a17-ab12-f4e3da3c522a","Type":"ContainerStarted","Data":"c30ce5ec2e8ade591e3e3d780d74c750e2af3f69587f48600b9d0fb5c8424f9d"} Mar 19 12:32:39.662987 master-0 kubenswrapper[31830]: I0319 12:32:39.662959 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-gx66s" Mar 19 12:32:39.663956 master-0 kubenswrapper[31830]: I0319 12:32:39.663922 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-9vdlb" event={"ID":"82330c7e-a21c-42e0-9f7c-ddc6e7269f0c","Type":"ContainerStarted","Data":"1d1b9fc550b7cc6902ea9278c48367b44da38f3b7a09e2cc0564aa65fc77fad8"} Mar 19 12:32:39.664332 master-0 kubenswrapper[31830]: I0319 12:32:39.664304 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-9vdlb" Mar 19 12:32:39.665233 master-0 kubenswrapper[31830]: I0319 12:32:39.665191 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-wq7gk" event={"ID":"2ca9358c-cf3c-4965-a617-08dcd5e916c4","Type":"ContainerStarted","Data":"3d8a073e29cfb590888c42d8610c5026abb8f57dbdc0158c18a088cb0ed79911"} Mar 19 12:32:39.665616 master-0 kubenswrapper[31830]: I0319 12:32:39.665583 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-wq7gk" Mar 19 12:32:39.675819 master-0 kubenswrapper[31830]: I0319 12:32:39.674061 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-767865f676-px9d7" event={"ID":"467e2f90-bbbf-4d88-9b56-9ed6a353b45f","Type":"ContainerStarted","Data":"4dae0017581522517c506a6d900dd95af0587be44487041b64e434678a8a4f61"} Mar 19 12:32:39.675819 master-0 kubenswrapper[31830]: I0319 12:32:39.674939 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-767865f676-px9d7" Mar 19 12:32:39.694866 master-0 kubenswrapper[31830]: I0319 12:32:39.693565 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-tc6m5" Mar 19 12:32:39.694866 master-0 kubenswrapper[31830]: I0319 12:32:39.693613 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-tc6m5" event={"ID":"a30da668-d209-4afb-a612-79302fb7942e","Type":"ContainerStarted","Data":"0c5af1c298adc7465c9b83ef0f1c5ca06fb5efe11ce489bb7b0f63af9c9547bb"} Mar 19 12:32:41.361130 master-0 kubenswrapper[31830]: I0319 12:32:41.361003 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-d2qll" podStartSLOduration=9.503836172 podStartE2EDuration="29.360986537s" podCreationTimestamp="2026-03-19 12:32:12 +0000 UTC" firstStartedPulling="2026-03-19 12:32:14.189878938 +0000 UTC m=+1072.738839642" lastFinishedPulling="2026-03-19 12:32:34.047029303 +0000 UTC m=+1092.595990007" observedRunningTime="2026-03-19 12:32:41.358649365 +0000 UTC m=+1099.907610069" watchObservedRunningTime="2026-03-19 12:32:41.360986537 +0000 UTC m=+1099.909947241" Mar 19 12:32:41.402975 master-0 kubenswrapper[31830]: I0319 12:32:41.402901 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-884679f54-4jhnc" podStartSLOduration=6.155557655 podStartE2EDuration="29.402883907s" podCreationTimestamp="2026-03-19 12:32:12 +0000 UTC" firstStartedPulling="2026-03-19 12:32:15.034018767 +0000 UTC m=+1073.582979471" lastFinishedPulling="2026-03-19 12:32:38.281345019 +0000 UTC m=+1096.830305723" observedRunningTime="2026-03-19 12:32:41.395001772 +0000 UTC m=+1099.943962476" watchObservedRunningTime="2026-03-19 12:32:41.402883907 +0000 UTC m=+1099.951844611" Mar 19 12:32:41.497823 master-0 kubenswrapper[31830]: I0319 12:32:41.495825 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-cbcqj" podStartSLOduration=8.991119352 podStartE2EDuration="29.495786448s" podCreationTimestamp="2026-03-19 12:32:12 +0000 UTC" firstStartedPulling="2026-03-19 12:32:13.542374297 +0000 UTC m=+1072.091335001" lastFinishedPulling="2026-03-19 12:32:34.047041393 +0000 UTC m=+1092.596002097" observedRunningTime="2026-03-19 12:32:41.480151824 +0000 UTC m=+1100.029112528" watchObservedRunningTime="2026-03-19 12:32:41.495786448 +0000 UTC m=+1100.044747152" Mar 19 12:32:41.497823 master-0 kubenswrapper[31830]: I0319 12:32:41.495977 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-ztnkm" podStartSLOduration=6.965207364 
podStartE2EDuration="29.495974354s" podCreationTimestamp="2026-03-19 12:32:12 +0000 UTC" firstStartedPulling="2026-03-19 12:32:15.797116043 +0000 UTC m=+1074.346076747" lastFinishedPulling="2026-03-19 12:32:38.327883043 +0000 UTC m=+1096.876843737" observedRunningTime="2026-03-19 12:32:41.452096683 +0000 UTC m=+1100.001057387" watchObservedRunningTime="2026-03-19 12:32:41.495974354 +0000 UTC m=+1100.044935058" Mar 19 12:32:41.539091 master-0 kubenswrapper[31830]: I0319 12:32:41.538904 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-wq7gk" podStartSLOduration=5.457886078 podStartE2EDuration="29.538886665s" podCreationTimestamp="2026-03-19 12:32:12 +0000 UTC" firstStartedPulling="2026-03-19 12:32:14.200364343 +0000 UTC m=+1072.749325047" lastFinishedPulling="2026-03-19 12:32:38.28136493 +0000 UTC m=+1096.830325634" observedRunningTime="2026-03-19 12:32:41.533766226 +0000 UTC m=+1100.082726930" watchObservedRunningTime="2026-03-19 12:32:41.538886665 +0000 UTC m=+1100.087847369" Mar 19 12:32:41.592142 master-0 kubenswrapper[31830]: I0319 12:32:41.588657 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-55f864c847-vrk79" podStartSLOduration=6.488423438 podStartE2EDuration="29.588637298s" podCreationTimestamp="2026-03-19 12:32:12 +0000 UTC" firstStartedPulling="2026-03-19 12:32:15.181337556 +0000 UTC m=+1073.730298260" lastFinishedPulling="2026-03-19 12:32:38.281551416 +0000 UTC m=+1096.830512120" observedRunningTime="2026-03-19 12:32:41.586070028 +0000 UTC m=+1100.135030732" watchObservedRunningTime="2026-03-19 12:32:41.588637298 +0000 UTC m=+1100.137598002" Mar 19 12:32:41.644878 master-0 kubenswrapper[31830]: I0319 12:32:41.644374 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-22wxs" podStartSLOduration=7.099418776 podStartE2EDuration="29.644356976s" podCreationTimestamp="2026-03-19 12:32:12 +0000 UTC" firstStartedPulling="2026-03-19 12:32:15.786285857 +0000 UTC m=+1074.335246561" lastFinishedPulling="2026-03-19 12:32:38.331224057 +0000 UTC m=+1096.880184761" observedRunningTime="2026-03-19 12:32:41.63771653 +0000 UTC m=+1100.186677234" watchObservedRunningTime="2026-03-19 12:32:41.644356976 +0000 UTC m=+1100.193317680" Mar 19 12:32:41.673824 master-0 kubenswrapper[31830]: I0319 12:32:41.673410 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-gx66s" podStartSLOduration=7.1781922 podStartE2EDuration="29.673381576s" podCreationTimestamp="2026-03-19 12:32:12 +0000 UTC" firstStartedPulling="2026-03-19 12:32:15.786284527 +0000 UTC m=+1074.335245241" lastFinishedPulling="2026-03-19 12:32:38.281473913 +0000 UTC m=+1096.830434617" observedRunningTime="2026-03-19 12:32:41.672094136 +0000 UTC m=+1100.221054840" watchObservedRunningTime="2026-03-19 12:32:41.673381576 +0000 UTC m=+1100.222342280" Mar 19 12:32:41.760927 master-0 kubenswrapper[31830]: I0319 12:32:41.739723 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-4zgd2" podStartSLOduration=5.333821041 podStartE2EDuration="29.739699803s" podCreationTimestamp="2026-03-19 12:32:12 +0000 UTC" firstStartedPulling="2026-03-19 12:32:13.875655333 +0000 UTC m=+1072.424616037" 
lastFinishedPulling="2026-03-19 12:32:38.281534085 +0000 UTC m=+1096.830494799" observedRunningTime="2026-03-19 12:32:41.728493745 +0000 UTC m=+1100.277454469" watchObservedRunningTime="2026-03-19 12:32:41.739699803 +0000 UTC m=+1100.288660587" Mar 19 12:32:41.784821 master-0 kubenswrapper[31830]: I0319 12:32:41.783956 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-lpsb9" podStartSLOduration=7.525707977 podStartE2EDuration="29.783925764s" podCreationTimestamp="2026-03-19 12:32:12 +0000 UTC" firstStartedPulling="2026-03-19 12:32:15.209678415 +0000 UTC m=+1073.758639119" lastFinishedPulling="2026-03-19 12:32:37.467896202 +0000 UTC m=+1096.016856906" observedRunningTime="2026-03-19 12:32:41.769855767 +0000 UTC m=+1100.318816471" watchObservedRunningTime="2026-03-19 12:32:41.783925764 +0000 UTC m=+1100.332886468" Mar 19 12:32:41.826689 master-0 kubenswrapper[31830]: I0319 12:32:41.826582 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-767865f676-px9d7" podStartSLOduration=6.77257455 podStartE2EDuration="29.826562056s" podCreationTimestamp="2026-03-19 12:32:12 +0000 UTC" firstStartedPulling="2026-03-19 12:32:15.227417565 +0000 UTC m=+1073.776378269" lastFinishedPulling="2026-03-19 12:32:38.281405071 +0000 UTC m=+1096.830365775" observedRunningTime="2026-03-19 12:32:41.802125589 +0000 UTC m=+1100.351086293" watchObservedRunningTime="2026-03-19 12:32:41.826562056 +0000 UTC m=+1100.375522760" Mar 19 12:32:41.861828 master-0 kubenswrapper[31830]: I0319 12:32:41.858523 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5784578c99-9ldx8" podStartSLOduration=7.390772012 podStartE2EDuration="29.858504887s" podCreationTimestamp="2026-03-19 12:32:12 +0000 UTC" firstStartedPulling="2026-03-19 12:32:15.814361838 +0000 UTC m=+1074.363322542" lastFinishedPulling="2026-03-19 12:32:38.282094713 +0000 UTC m=+1096.831055417" observedRunningTime="2026-03-19 12:32:41.858186607 +0000 UTC m=+1100.407147311" watchObservedRunningTime="2026-03-19 12:32:41.858504887 +0000 UTC m=+1100.407465591" Mar 19 12:32:41.891200 master-0 kubenswrapper[31830]: I0319 12:32:41.889174 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-c674c5965-85lzd" podStartSLOduration=7.357679505 podStartE2EDuration="29.889155838s" podCreationTimestamp="2026-03-19 12:32:12 +0000 UTC" firstStartedPulling="2026-03-19 12:32:15.790043533 +0000 UTC m=+1074.339004237" lastFinishedPulling="2026-03-19 12:32:38.321519866 +0000 UTC m=+1096.870480570" observedRunningTime="2026-03-19 12:32:41.883060569 +0000 UTC m=+1100.432021273" watchObservedRunningTime="2026-03-19 12:32:41.889155838 +0000 UTC m=+1100.438116542" Mar 19 12:32:41.955003 master-0 kubenswrapper[31830]: I0319 12:32:41.954492 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-cbj6m" podStartSLOduration=8.707178626 podStartE2EDuration="29.954474483s" podCreationTimestamp="2026-03-19 12:32:12 +0000 UTC" firstStartedPulling="2026-03-19 12:32:15.079701714 +0000 UTC m=+1073.628662418" lastFinishedPulling="2026-03-19 12:32:36.326997571 +0000 UTC m=+1094.875958275" observedRunningTime="2026-03-19 12:32:41.952207853 +0000 UTC m=+1100.501168557" 
watchObservedRunningTime="2026-03-19 12:32:41.954474483 +0000 UTC m=+1100.503435187" Mar 19 12:32:41.963822 master-0 kubenswrapper[31830]: I0319 12:32:41.960164 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-9vdlb" podStartSLOduration=6.853899102 podStartE2EDuration="29.960152999s" podCreationTimestamp="2026-03-19 12:32:12 +0000 UTC" firstStartedPulling="2026-03-19 12:32:15.183920366 +0000 UTC m=+1073.732881070" lastFinishedPulling="2026-03-19 12:32:38.290174263 +0000 UTC m=+1096.839134967" observedRunningTime="2026-03-19 12:32:41.918710254 +0000 UTC m=+1100.467670958" watchObservedRunningTime="2026-03-19 12:32:41.960152999 +0000 UTC m=+1100.509113703" Mar 19 12:32:41.987549 master-0 kubenswrapper[31830]: I0319 12:32:41.986840 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-tc6m5" podStartSLOduration=6.887874756 podStartE2EDuration="29.986823117s" podCreationTimestamp="2026-03-19 12:32:12 +0000 UTC" firstStartedPulling="2026-03-19 12:32:15.181407838 +0000 UTC m=+1073.730368542" lastFinishedPulling="2026-03-19 12:32:38.280356199 +0000 UTC m=+1096.829316903" observedRunningTime="2026-03-19 12:32:41.982173362 +0000 UTC m=+1100.531134066" watchObservedRunningTime="2026-03-19 12:32:41.986823117 +0000 UTC m=+1100.535783821" Mar 19 12:32:42.021581 master-0 kubenswrapper[31830]: I0319 12:32:42.021505 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-x6nhk" podStartSLOduration=10.176650108 podStartE2EDuration="30.021482011s" podCreationTimestamp="2026-03-19 12:32:12 +0000 UTC" firstStartedPulling="2026-03-19 12:32:14.20220827 +0000 UTC m=+1072.751168974" lastFinishedPulling="2026-03-19 12:32:34.047040173 +0000 UTC m=+1092.596000877" observedRunningTime="2026-03-19 12:32:42.011536743 +0000 UTC m=+1100.560497457" watchObservedRunningTime="2026-03-19 12:32:42.021482011 +0000 UTC m=+1100.570442725" Mar 19 12:32:42.093608 master-0 kubenswrapper[31830]: I0319 12:32:42.093521 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-xslmv" podStartSLOduration=7.033793421 podStartE2EDuration="30.093504935s" podCreationTimestamp="2026-03-19 12:32:12 +0000 UTC" firstStartedPulling="2026-03-19 12:32:15.222683088 +0000 UTC m=+1073.771643792" lastFinishedPulling="2026-03-19 12:32:38.282394602 +0000 UTC m=+1096.831355306" observedRunningTime="2026-03-19 12:32:42.049453118 +0000 UTC m=+1100.598413822" watchObservedRunningTime="2026-03-19 12:32:42.093504935 +0000 UTC m=+1100.642465639" Mar 19 12:32:42.105232 master-0 kubenswrapper[31830]: I0319 12:32:42.095783 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zt7k5" podStartSLOduration=6.490050591 podStartE2EDuration="29.095775416s" podCreationTimestamp="2026-03-19 12:32:13 +0000 UTC" firstStartedPulling="2026-03-19 12:32:15.768757473 +0000 UTC m=+1074.317718177" lastFinishedPulling="2026-03-19 12:32:38.374482298 +0000 UTC m=+1096.923443002" observedRunningTime="2026-03-19 12:32:42.083319189 +0000 UTC m=+1100.632279893" watchObservedRunningTime="2026-03-19 12:32:42.095775416 +0000 UTC m=+1100.644736120" Mar 19 12:32:42.123990 master-0 kubenswrapper[31830]: I0319 12:32:42.123897 31830 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bqfjg" podStartSLOduration=7.5476067449999995 podStartE2EDuration="30.123879757s" podCreationTimestamp="2026-03-19 12:32:12 +0000 UTC" firstStartedPulling="2026-03-19 12:32:15.711788716 +0000 UTC m=+1074.260749420" lastFinishedPulling="2026-03-19 12:32:38.288061718 +0000 UTC m=+1096.837022432" observedRunningTime="2026-03-19 12:32:42.11658949 +0000 UTC m=+1100.665550184" watchObservedRunningTime="2026-03-19 12:32:42.123879757 +0000 UTC m=+1100.672840461" Mar 19 12:32:43.105287 master-0 kubenswrapper[31830]: I0319 12:32:43.105171 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-55f864c847-vrk79" Mar 19 12:32:43.271163 master-0 kubenswrapper[31830]: I0319 12:32:43.271119 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-tc6m5" Mar 19 12:32:43.373181 master-0 kubenswrapper[31830]: I0319 12:32:43.371274 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-9vdlb" Mar 19 12:32:43.429351 master-0 kubenswrapper[31830]: I0319 12:32:43.429294 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-884679f54-4jhnc" Mar 19 12:32:43.819068 master-0 kubenswrapper[31830]: I0319 12:32:43.819010 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-gx66s" Mar 19 12:32:44.739386 master-0 kubenswrapper[31830]: I0319 12:32:44.739309 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f0b9a13-7862-4829-a97d-56034487da2e-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-6kkfv\" (UID: \"1f0b9a13-7862-4829-a97d-56034487da2e\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-6kkfv" Mar 19 12:32:44.742420 master-0 kubenswrapper[31830]: I0319 12:32:44.742366 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f0b9a13-7862-4829-a97d-56034487da2e-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-6kkfv\" (UID: \"1f0b9a13-7862-4829-a97d-56034487da2e\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-6kkfv" Mar 19 12:32:45.026598 master-0 kubenswrapper[31830]: I0319 12:32:45.026453 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-6kkfv" Mar 19 12:32:45.147752 master-0 kubenswrapper[31830]: I0319 12:32:45.147684 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49025043-9018-47ec-8930-e6580af6aeb2-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899m7flr\" (UID: \"49025043-9018-47ec-8930-e6580af6aeb2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899m7flr" Mar 19 12:32:45.168478 master-0 kubenswrapper[31830]: I0319 12:32:45.157900 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49025043-9018-47ec-8930-e6580af6aeb2-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899m7flr\" (UID: \"49025043-9018-47ec-8930-e6580af6aeb2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899m7flr" Mar 19 12:32:45.221572 master-0 kubenswrapper[31830]: I0319 12:32:45.221485 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899m7flr" Mar 19 12:32:45.457353 master-0 kubenswrapper[31830]: I0319 12:32:45.454307 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7dd6bb94c9-6kkfv"] Mar 19 12:32:45.457353 master-0 kubenswrapper[31830]: I0319 12:32:45.455999 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-8j8qk\" (UID: \"45a81c5f-fb70-4b84-8c91-bc55830c36cd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-8j8qk" Mar 19 12:32:45.457353 master-0 kubenswrapper[31830]: I0319 12:32:45.456347 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-8j8qk\" (UID: \"45a81c5f-fb70-4b84-8c91-bc55830c36cd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-8j8qk" Mar 19 12:32:45.459118 master-0 kubenswrapper[31830]: I0319 12:32:45.459070 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-8j8qk\" (UID: \"45a81c5f-fb70-4b84-8c91-bc55830c36cd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-8j8qk" Mar 19 12:32:45.461150 master-0 kubenswrapper[31830]: I0319 12:32:45.461101 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45a81c5f-fb70-4b84-8c91-bc55830c36cd-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-8j8qk\" (UID: \"45a81c5f-fb70-4b84-8c91-bc55830c36cd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-8j8qk" Mar 19 12:32:45.635264 master-0 kubenswrapper[31830]: I0319 12:32:45.635139 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-8j8qk" Mar 19 12:32:45.721421 master-0 kubenswrapper[31830]: I0319 12:32:45.721346 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899m7flr"] Mar 19 12:32:45.732210 master-0 kubenswrapper[31830]: W0319 12:32:45.732151 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49025043_9018_47ec_8930_e6580af6aeb2.slice/crio-18a718e94607f42e5c27ee2e5acca1072bd9a3884b883006307b9f96441dca7c WatchSource:0}: Error finding container 18a718e94607f42e5c27ee2e5acca1072bd9a3884b883006307b9f96441dca7c: Status 404 returned error can't find the container with id 18a718e94607f42e5c27ee2e5acca1072bd9a3884b883006307b9f96441dca7c Mar 19 12:32:46.094370 master-0 kubenswrapper[31830]: I0319 12:32:46.093933 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-6kkfv" event={"ID":"1f0b9a13-7862-4829-a97d-56034487da2e","Type":"ContainerStarted","Data":"eec6737638442c5004556fd848cf40030a3bba336aae85054e66a0f51e91f81a"} Mar 19 12:32:46.096396 master-0 kubenswrapper[31830]: I0319 12:32:46.096356 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899m7flr" event={"ID":"49025043-9018-47ec-8930-e6580af6aeb2","Type":"ContainerStarted","Data":"18a718e94607f42e5c27ee2e5acca1072bd9a3884b883006307b9f96441dca7c"} Mar 19 12:32:46.140335 master-0 kubenswrapper[31830]: I0319 12:32:46.140261 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86bd8996f6-8j8qk"] Mar 19 12:32:46.146142 master-0 kubenswrapper[31830]: W0319 12:32:46.146104 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45a81c5f_fb70_4b84_8c91_bc55830c36cd.slice/crio-568a67c53127553a44fa4433245d122d251375f3872a6a5d557d6c791086ae63 WatchSource:0}: Error finding container 568a67c53127553a44fa4433245d122d251375f3872a6a5d557d6c791086ae63: Status 404 returned error can't find the container with id 568a67c53127553a44fa4433245d122d251375f3872a6a5d557d6c791086ae63 Mar 19 12:32:47.107996 master-0 kubenswrapper[31830]: I0319 12:32:47.107923 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-8j8qk" event={"ID":"45a81c5f-fb70-4b84-8c91-bc55830c36cd","Type":"ContainerStarted","Data":"07b82872b70ec9626e888171e087a8a69ac2e56f1480090f2ceb873a81828697"} Mar 19 12:32:47.107996 master-0 kubenswrapper[31830]: I0319 12:32:47.107979 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-8j8qk" event={"ID":"45a81c5f-fb70-4b84-8c91-bc55830c36cd","Type":"ContainerStarted","Data":"568a67c53127553a44fa4433245d122d251375f3872a6a5d557d6c791086ae63"} Mar 19 12:32:47.108536 master-0 kubenswrapper[31830]: I0319 12:32:47.108072 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-8j8qk" Mar 19 12:32:47.157836 master-0 kubenswrapper[31830]: I0319 12:32:47.153616 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-8j8qk" podStartSLOduration=35.153586471 podStartE2EDuration="35.153586471s" podCreationTimestamp="2026-03-19 12:32:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:32:47.13515533 +0000 UTC m=+1105.684116034" watchObservedRunningTime="2026-03-19 12:32:47.153586471 +0000 UTC m=+1105.702547185" Mar 19 12:32:50.145353 master-0 kubenswrapper[31830]: I0319 12:32:50.145282 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-6kkfv" event={"ID":"1f0b9a13-7862-4829-a97d-56034487da2e","Type":"ContainerStarted","Data":"46935570d53c488877124547e2c476be91e52ae09fc20b5ed4c7f04723dc0c8e"} Mar 19 12:32:50.146589 master-0 kubenswrapper[31830]: I0319 12:32:50.146545 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-6kkfv" Mar 19 12:32:50.150553 master-0 kubenswrapper[31830]: I0319 12:32:50.149542 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899m7flr" event={"ID":"49025043-9018-47ec-8930-e6580af6aeb2","Type":"ContainerStarted","Data":"31db89cb104db9202f6e7546102feee327bb2ee9fff865c23744c0430d81f6e1"} Mar 19 12:32:50.150553 master-0 kubenswrapper[31830]: I0319 12:32:50.149722 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899m7flr" Mar 19 12:32:50.171833 master-0 kubenswrapper[31830]: I0319 12:32:50.171715 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-6kkfv" podStartSLOduration=34.353961992 podStartE2EDuration="38.171698521s" podCreationTimestamp="2026-03-19 12:32:12 +0000 UTC" firstStartedPulling="2026-03-19 12:32:45.459213934 +0000 UTC m=+1104.008174638" lastFinishedPulling="2026-03-19 12:32:49.276950463 +0000 UTC m=+1107.825911167" observedRunningTime="2026-03-19 12:32:50.165720075 +0000 UTC m=+1108.714680779" watchObservedRunningTime="2026-03-19 12:32:50.171698521 +0000 UTC m=+1108.720659225" Mar 19 12:32:50.208139 master-0 kubenswrapper[31830]: I0319 12:32:50.208041 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899m7flr" podStartSLOduration=34.648850047 podStartE2EDuration="38.208017767s" podCreationTimestamp="2026-03-19 12:32:12 +0000 UTC" firstStartedPulling="2026-03-19 12:32:45.734922185 +0000 UTC m=+1104.283882889" lastFinishedPulling="2026-03-19 12:32:49.294089905 +0000 UTC m=+1107.843050609" observedRunningTime="2026-03-19 12:32:50.203187638 +0000 UTC m=+1108.752148342" watchObservedRunningTime="2026-03-19 12:32:50.208017767 +0000 UTC m=+1108.756978471" Mar 19 12:32:52.559662 master-0 kubenswrapper[31830]: I0319 12:32:52.559599 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-x6nhk" Mar 19 12:32:52.559662 master-0 kubenswrapper[31830]: I0319 12:32:52.559654 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-cbcqj" Mar 19 12:32:52.652255 master-0 kubenswrapper[31830]: I0319 
12:32:52.651768 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-4zgd2" Mar 19 12:32:52.736726 master-0 kubenswrapper[31830]: I0319 12:32:52.736666 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-cbj6m" Mar 19 12:32:52.797853 master-0 kubenswrapper[31830]: I0319 12:32:52.797146 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-d2qll" Mar 19 12:32:52.832892 master-0 kubenswrapper[31830]: I0319 12:32:52.830116 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-lpsb9" Mar 19 12:32:52.992054 master-0 kubenswrapper[31830]: I0319 12:32:52.991986 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-wq7gk" Mar 19 12:32:53.301637 master-0 kubenswrapper[31830]: I0319 12:32:53.301592 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-xslmv" Mar 19 12:32:53.346681 master-0 kubenswrapper[31830]: I0319 12:32:53.340268 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-767865f676-px9d7" Mar 19 12:32:53.410951 master-0 kubenswrapper[31830]: I0319 12:32:53.410908 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-22wxs" Mar 19 12:32:53.566146 master-0 kubenswrapper[31830]: I0319 12:32:53.566023 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5784578c99-9ldx8" Mar 19 12:32:53.603820 master-0 kubenswrapper[31830]: I0319 12:32:53.600873 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-c674c5965-85lzd" Mar 19 12:32:53.738656 master-0 kubenswrapper[31830]: I0319 12:32:53.738591 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-ztnkm" Mar 19 12:32:53.819349 master-0 kubenswrapper[31830]: I0319 12:32:53.819230 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bqfjg" Mar 19 12:32:55.038657 master-0 kubenswrapper[31830]: I0319 12:32:55.038595 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-6kkfv" Mar 19 12:32:55.228597 master-0 kubenswrapper[31830]: I0319 12:32:55.228538 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899m7flr" Mar 19 12:32:55.646753 master-0 kubenswrapper[31830]: I0319 12:32:55.646695 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-8j8qk" Mar 19 12:33:34.650586 master-0 kubenswrapper[31830]: I0319 12:33:34.650533 31830 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-685c76cf85-vlql9"] Mar 19 12:33:34.657041 master-0 kubenswrapper[31830]: I0319 12:33:34.656929 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685c76cf85-vlql9" Mar 19 12:33:34.666010 master-0 kubenswrapper[31830]: I0319 12:33:34.663279 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Mar 19 12:33:34.666010 master-0 kubenswrapper[31830]: I0319 12:33:34.663540 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Mar 19 12:33:34.666010 master-0 kubenswrapper[31830]: I0319 12:33:34.665660 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Mar 19 12:33:34.696426 master-0 kubenswrapper[31830]: I0319 12:33:34.695678 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-vlql9"] Mar 19 12:33:34.714897 master-0 kubenswrapper[31830]: I0319 12:33:34.712996 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-8wx24"] Mar 19 12:33:34.715128 master-0 kubenswrapper[31830]: I0319 12:33:34.715068 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8476fd89bc-8wx24" Mar 19 12:33:34.741277 master-0 kubenswrapper[31830]: I0319 12:33:34.738816 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Mar 19 12:33:34.741277 master-0 kubenswrapper[31830]: I0319 12:33:34.738989 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-8wx24"] Mar 19 12:33:34.800820 master-0 kubenswrapper[31830]: I0319 12:33:34.791480 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c712d63-eb7d-40d5-9f5f-05124cac728f-config\") pod \"dnsmasq-dns-685c76cf85-vlql9\" (UID: \"6c712d63-eb7d-40d5-9f5f-05124cac728f\") " pod="openstack/dnsmasq-dns-685c76cf85-vlql9" Mar 19 12:33:34.800820 master-0 kubenswrapper[31830]: I0319 12:33:34.791537 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4xcs\" (UniqueName: \"kubernetes.io/projected/f74e08c8-33e6-4926-9d30-ffdd77005bcf-kube-api-access-s4xcs\") pod \"dnsmasq-dns-8476fd89bc-8wx24\" (UID: \"f74e08c8-33e6-4926-9d30-ffdd77005bcf\") " pod="openstack/dnsmasq-dns-8476fd89bc-8wx24" Mar 19 12:33:34.800820 master-0 kubenswrapper[31830]: I0319 12:33:34.791589 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f74e08c8-33e6-4926-9d30-ffdd77005bcf-config\") pod \"dnsmasq-dns-8476fd89bc-8wx24\" (UID: \"f74e08c8-33e6-4926-9d30-ffdd77005bcf\") " pod="openstack/dnsmasq-dns-8476fd89bc-8wx24" Mar 19 12:33:34.800820 master-0 kubenswrapper[31830]: I0319 12:33:34.793588 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f74e08c8-33e6-4926-9d30-ffdd77005bcf-dns-svc\") pod \"dnsmasq-dns-8476fd89bc-8wx24\" (UID: \"f74e08c8-33e6-4926-9d30-ffdd77005bcf\") " pod="openstack/dnsmasq-dns-8476fd89bc-8wx24" Mar 19 12:33:34.800820 master-0 kubenswrapper[31830]: I0319 12:33:34.793654 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khxws\" (UniqueName: 
\"kubernetes.io/projected/6c712d63-eb7d-40d5-9f5f-05124cac728f-kube-api-access-khxws\") pod \"dnsmasq-dns-685c76cf85-vlql9\" (UID: \"6c712d63-eb7d-40d5-9f5f-05124cac728f\") " pod="openstack/dnsmasq-dns-685c76cf85-vlql9" Mar 19 12:33:34.896816 master-0 kubenswrapper[31830]: I0319 12:33:34.895157 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khxws\" (UniqueName: \"kubernetes.io/projected/6c712d63-eb7d-40d5-9f5f-05124cac728f-kube-api-access-khxws\") pod \"dnsmasq-dns-685c76cf85-vlql9\" (UID: \"6c712d63-eb7d-40d5-9f5f-05124cac728f\") " pod="openstack/dnsmasq-dns-685c76cf85-vlql9" Mar 19 12:33:34.896816 master-0 kubenswrapper[31830]: I0319 12:33:34.895300 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c712d63-eb7d-40d5-9f5f-05124cac728f-config\") pod \"dnsmasq-dns-685c76cf85-vlql9\" (UID: \"6c712d63-eb7d-40d5-9f5f-05124cac728f\") " pod="openstack/dnsmasq-dns-685c76cf85-vlql9" Mar 19 12:33:34.896816 master-0 kubenswrapper[31830]: I0319 12:33:34.895350 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4xcs\" (UniqueName: \"kubernetes.io/projected/f74e08c8-33e6-4926-9d30-ffdd77005bcf-kube-api-access-s4xcs\") pod \"dnsmasq-dns-8476fd89bc-8wx24\" (UID: \"f74e08c8-33e6-4926-9d30-ffdd77005bcf\") " pod="openstack/dnsmasq-dns-8476fd89bc-8wx24" Mar 19 12:33:34.896816 master-0 kubenswrapper[31830]: I0319 12:33:34.895413 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f74e08c8-33e6-4926-9d30-ffdd77005bcf-config\") pod \"dnsmasq-dns-8476fd89bc-8wx24\" (UID: \"f74e08c8-33e6-4926-9d30-ffdd77005bcf\") " pod="openstack/dnsmasq-dns-8476fd89bc-8wx24" Mar 19 12:33:34.896816 master-0 kubenswrapper[31830]: I0319 12:33:34.895440 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f74e08c8-33e6-4926-9d30-ffdd77005bcf-dns-svc\") pod \"dnsmasq-dns-8476fd89bc-8wx24\" (UID: \"f74e08c8-33e6-4926-9d30-ffdd77005bcf\") " pod="openstack/dnsmasq-dns-8476fd89bc-8wx24" Mar 19 12:33:34.896816 master-0 kubenswrapper[31830]: I0319 12:33:34.896317 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f74e08c8-33e6-4926-9d30-ffdd77005bcf-dns-svc\") pod \"dnsmasq-dns-8476fd89bc-8wx24\" (UID: \"f74e08c8-33e6-4926-9d30-ffdd77005bcf\") " pod="openstack/dnsmasq-dns-8476fd89bc-8wx24" Mar 19 12:33:34.896816 master-0 kubenswrapper[31830]: I0319 12:33:34.896446 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c712d63-eb7d-40d5-9f5f-05124cac728f-config\") pod \"dnsmasq-dns-685c76cf85-vlql9\" (UID: \"6c712d63-eb7d-40d5-9f5f-05124cac728f\") " pod="openstack/dnsmasq-dns-685c76cf85-vlql9" Mar 19 12:33:34.897262 master-0 kubenswrapper[31830]: I0319 12:33:34.897065 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f74e08c8-33e6-4926-9d30-ffdd77005bcf-config\") pod \"dnsmasq-dns-8476fd89bc-8wx24\" (UID: \"f74e08c8-33e6-4926-9d30-ffdd77005bcf\") " pod="openstack/dnsmasq-dns-8476fd89bc-8wx24" Mar 19 12:33:34.919825 master-0 kubenswrapper[31830]: I0319 12:33:34.918866 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4xcs\" (UniqueName: 
\"kubernetes.io/projected/f74e08c8-33e6-4926-9d30-ffdd77005bcf-kube-api-access-s4xcs\") pod \"dnsmasq-dns-8476fd89bc-8wx24\" (UID: \"f74e08c8-33e6-4926-9d30-ffdd77005bcf\") " pod="openstack/dnsmasq-dns-8476fd89bc-8wx24" Mar 19 12:33:34.932828 master-0 kubenswrapper[31830]: I0319 12:33:34.931280 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khxws\" (UniqueName: \"kubernetes.io/projected/6c712d63-eb7d-40d5-9f5f-05124cac728f-kube-api-access-khxws\") pod \"dnsmasq-dns-685c76cf85-vlql9\" (UID: \"6c712d63-eb7d-40d5-9f5f-05124cac728f\") " pod="openstack/dnsmasq-dns-685c76cf85-vlql9" Mar 19 12:33:34.999508 master-0 kubenswrapper[31830]: I0319 12:33:34.999440 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685c76cf85-vlql9" Mar 19 12:33:35.104887 master-0 kubenswrapper[31830]: I0319 12:33:35.102011 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8476fd89bc-8wx24" Mar 19 12:33:35.453738 master-0 kubenswrapper[31830]: I0319 12:33:35.453518 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-vlql9"] Mar 19 12:33:35.456523 master-0 kubenswrapper[31830]: W0319 12:33:35.456470 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c712d63_eb7d_40d5_9f5f_05124cac728f.slice/crio-658c4068878569a13c5d02b6dc0d8e8d94abfd9eb7f0f81d8e8be0fd54cdb17a WatchSource:0}: Error finding container 658c4068878569a13c5d02b6dc0d8e8d94abfd9eb7f0f81d8e8be0fd54cdb17a: Status 404 returned error can't find the container with id 658c4068878569a13c5d02b6dc0d8e8d94abfd9eb7f0f81d8e8be0fd54cdb17a Mar 19 12:33:35.458547 master-0 kubenswrapper[31830]: I0319 12:33:35.458459 31830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 19 12:33:35.588582 master-0 kubenswrapper[31830]: I0319 12:33:35.588544 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-8wx24"] Mar 19 12:33:35.593201 master-0 kubenswrapper[31830]: W0319 12:33:35.593150 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf74e08c8_33e6_4926_9d30_ffdd77005bcf.slice/crio-d69f270a5314a71bfab67735ab530e2051558c33da3d9e7c45b31e95bf2cf38b WatchSource:0}: Error finding container d69f270a5314a71bfab67735ab530e2051558c33da3d9e7c45b31e95bf2cf38b: Status 404 returned error can't find the container with id d69f270a5314a71bfab67735ab530e2051558c33da3d9e7c45b31e95bf2cf38b Mar 19 12:33:35.595188 master-0 kubenswrapper[31830]: I0319 12:33:35.595152 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685c76cf85-vlql9" event={"ID":"6c712d63-eb7d-40d5-9f5f-05124cac728f","Type":"ContainerStarted","Data":"658c4068878569a13c5d02b6dc0d8e8d94abfd9eb7f0f81d8e8be0fd54cdb17a"} Mar 19 12:33:36.634692 master-0 kubenswrapper[31830]: I0319 12:33:36.634618 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476fd89bc-8wx24" event={"ID":"f74e08c8-33e6-4926-9d30-ffdd77005bcf","Type":"ContainerStarted","Data":"d69f270a5314a71bfab67735ab530e2051558c33da3d9e7c45b31e95bf2cf38b"} Mar 19 12:33:37.539529 master-0 kubenswrapper[31830]: I0319 12:33:37.536977 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-vlql9"] Mar 19 12:33:37.586323 master-0 
kubenswrapper[31830]: I0319 12:33:37.584212 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76849d6659-2hlm9"] Mar 19 12:33:37.586524 master-0 kubenswrapper[31830]: I0319 12:33:37.586339 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76849d6659-2hlm9" Mar 19 12:33:37.612643 master-0 kubenswrapper[31830]: I0319 12:33:37.612297 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76849d6659-2hlm9"] Mar 19 12:33:37.654125 master-0 kubenswrapper[31830]: I0319 12:33:37.652267 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b384031f-cffb-4dff-b0ff-df09432a1451-config\") pod \"dnsmasq-dns-76849d6659-2hlm9\" (UID: \"b384031f-cffb-4dff-b0ff-df09432a1451\") " pod="openstack/dnsmasq-dns-76849d6659-2hlm9" Mar 19 12:33:37.654125 master-0 kubenswrapper[31830]: I0319 12:33:37.652315 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx4v9\" (UniqueName: \"kubernetes.io/projected/b384031f-cffb-4dff-b0ff-df09432a1451-kube-api-access-tx4v9\") pod \"dnsmasq-dns-76849d6659-2hlm9\" (UID: \"b384031f-cffb-4dff-b0ff-df09432a1451\") " pod="openstack/dnsmasq-dns-76849d6659-2hlm9" Mar 19 12:33:37.654125 master-0 kubenswrapper[31830]: I0319 12:33:37.652343 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b384031f-cffb-4dff-b0ff-df09432a1451-dns-svc\") pod \"dnsmasq-dns-76849d6659-2hlm9\" (UID: \"b384031f-cffb-4dff-b0ff-df09432a1451\") " pod="openstack/dnsmasq-dns-76849d6659-2hlm9" Mar 19 12:33:37.760162 master-0 kubenswrapper[31830]: I0319 12:33:37.755013 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b384031f-cffb-4dff-b0ff-df09432a1451-config\") pod \"dnsmasq-dns-76849d6659-2hlm9\" (UID: \"b384031f-cffb-4dff-b0ff-df09432a1451\") " pod="openstack/dnsmasq-dns-76849d6659-2hlm9" Mar 19 12:33:37.761155 master-0 kubenswrapper[31830]: I0319 12:33:37.761097 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx4v9\" (UniqueName: \"kubernetes.io/projected/b384031f-cffb-4dff-b0ff-df09432a1451-kube-api-access-tx4v9\") pod \"dnsmasq-dns-76849d6659-2hlm9\" (UID: \"b384031f-cffb-4dff-b0ff-df09432a1451\") " pod="openstack/dnsmasq-dns-76849d6659-2hlm9" Mar 19 12:33:37.762023 master-0 kubenswrapper[31830]: I0319 12:33:37.757067 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b384031f-cffb-4dff-b0ff-df09432a1451-config\") pod \"dnsmasq-dns-76849d6659-2hlm9\" (UID: \"b384031f-cffb-4dff-b0ff-df09432a1451\") " pod="openstack/dnsmasq-dns-76849d6659-2hlm9" Mar 19 12:33:37.762023 master-0 kubenswrapper[31830]: I0319 12:33:37.761278 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b384031f-cffb-4dff-b0ff-df09432a1451-dns-svc\") pod \"dnsmasq-dns-76849d6659-2hlm9\" (UID: \"b384031f-cffb-4dff-b0ff-df09432a1451\") " pod="openstack/dnsmasq-dns-76849d6659-2hlm9" Mar 19 12:33:37.766479 master-0 kubenswrapper[31830]: I0319 12:33:37.763224 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/b384031f-cffb-4dff-b0ff-df09432a1451-dns-svc\") pod \"dnsmasq-dns-76849d6659-2hlm9\" (UID: \"b384031f-cffb-4dff-b0ff-df09432a1451\") " pod="openstack/dnsmasq-dns-76849d6659-2hlm9" Mar 19 12:33:37.800772 master-0 kubenswrapper[31830]: I0319 12:33:37.800662 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx4v9\" (UniqueName: \"kubernetes.io/projected/b384031f-cffb-4dff-b0ff-df09432a1451-kube-api-access-tx4v9\") pod \"dnsmasq-dns-76849d6659-2hlm9\" (UID: \"b384031f-cffb-4dff-b0ff-df09432a1451\") " pod="openstack/dnsmasq-dns-76849d6659-2hlm9" Mar 19 12:33:37.902917 master-0 kubenswrapper[31830]: I0319 12:33:37.902859 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-8wx24"] Mar 19 12:33:37.942287 master-0 kubenswrapper[31830]: I0319 12:33:37.940219 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-sxht2"] Mar 19 12:33:37.942287 master-0 kubenswrapper[31830]: I0319 12:33:37.941679 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ff8fd9d5c-sxht2" Mar 19 12:33:37.954976 master-0 kubenswrapper[31830]: I0319 12:33:37.954938 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76849d6659-2hlm9" Mar 19 12:33:37.968261 master-0 kubenswrapper[31830]: I0319 12:33:37.968191 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-sxht2"] Mar 19 12:33:38.077960 master-0 kubenswrapper[31830]: I0319 12:33:38.077843 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bc881ed-8448-4279-97e5-cb834cab7a64-config\") pod \"dnsmasq-dns-6ff8fd9d5c-sxht2\" (UID: \"5bc881ed-8448-4279-97e5-cb834cab7a64\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-sxht2" Mar 19 12:33:38.077960 master-0 kubenswrapper[31830]: I0319 12:33:38.077896 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bc881ed-8448-4279-97e5-cb834cab7a64-dns-svc\") pod \"dnsmasq-dns-6ff8fd9d5c-sxht2\" (UID: \"5bc881ed-8448-4279-97e5-cb834cab7a64\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-sxht2" Mar 19 12:33:38.077960 master-0 kubenswrapper[31830]: I0319 12:33:38.077948 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9fsm\" (UniqueName: \"kubernetes.io/projected/5bc881ed-8448-4279-97e5-cb834cab7a64-kube-api-access-q9fsm\") pod \"dnsmasq-dns-6ff8fd9d5c-sxht2\" (UID: \"5bc881ed-8448-4279-97e5-cb834cab7a64\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-sxht2" Mar 19 12:33:38.189898 master-0 kubenswrapper[31830]: I0319 12:33:38.189820 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bc881ed-8448-4279-97e5-cb834cab7a64-config\") pod \"dnsmasq-dns-6ff8fd9d5c-sxht2\" (UID: \"5bc881ed-8448-4279-97e5-cb834cab7a64\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-sxht2" Mar 19 12:33:38.189898 master-0 kubenswrapper[31830]: I0319 12:33:38.189901 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bc881ed-8448-4279-97e5-cb834cab7a64-dns-svc\") pod \"dnsmasq-dns-6ff8fd9d5c-sxht2\" (UID: \"5bc881ed-8448-4279-97e5-cb834cab7a64\") " 
pod="openstack/dnsmasq-dns-6ff8fd9d5c-sxht2" Mar 19 12:33:38.190162 master-0 kubenswrapper[31830]: I0319 12:33:38.189963 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9fsm\" (UniqueName: \"kubernetes.io/projected/5bc881ed-8448-4279-97e5-cb834cab7a64-kube-api-access-q9fsm\") pod \"dnsmasq-dns-6ff8fd9d5c-sxht2\" (UID: \"5bc881ed-8448-4279-97e5-cb834cab7a64\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-sxht2" Mar 19 12:33:38.191651 master-0 kubenswrapper[31830]: I0319 12:33:38.191603 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bc881ed-8448-4279-97e5-cb834cab7a64-config\") pod \"dnsmasq-dns-6ff8fd9d5c-sxht2\" (UID: \"5bc881ed-8448-4279-97e5-cb834cab7a64\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-sxht2" Mar 19 12:33:38.193109 master-0 kubenswrapper[31830]: I0319 12:33:38.193082 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bc881ed-8448-4279-97e5-cb834cab7a64-dns-svc\") pod \"dnsmasq-dns-6ff8fd9d5c-sxht2\" (UID: \"5bc881ed-8448-4279-97e5-cb834cab7a64\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-sxht2" Mar 19 12:33:38.995228 master-0 kubenswrapper[31830]: I0319 12:33:38.995168 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9fsm\" (UniqueName: \"kubernetes.io/projected/5bc881ed-8448-4279-97e5-cb834cab7a64-kube-api-access-q9fsm\") pod \"dnsmasq-dns-6ff8fd9d5c-sxht2\" (UID: \"5bc881ed-8448-4279-97e5-cb834cab7a64\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-sxht2" Mar 19 12:33:39.045420 master-0 kubenswrapper[31830]: W0319 12:33:39.045359 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb384031f_cffb_4dff_b0ff_df09432a1451.slice/crio-bb04f3f8401a33003845e19cd5c475d67561af033f56546bf4ba4bf2e9847af3 WatchSource:0}: Error finding container bb04f3f8401a33003845e19cd5c475d67561af033f56546bf4ba4bf2e9847af3: Status 404 returned error can't find the container with id bb04f3f8401a33003845e19cd5c475d67561af033f56546bf4ba4bf2e9847af3 Mar 19 12:33:39.073696 master-0 kubenswrapper[31830]: I0319 12:33:39.053387 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76849d6659-2hlm9"] Mar 19 12:33:39.163229 master-0 kubenswrapper[31830]: I0319 12:33:39.163152 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ff8fd9d5c-sxht2" Mar 19 12:33:39.702918 master-0 kubenswrapper[31830]: I0319 12:33:39.702519 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76849d6659-2hlm9" event={"ID":"b384031f-cffb-4dff-b0ff-df09432a1451","Type":"ContainerStarted","Data":"bb04f3f8401a33003845e19cd5c475d67561af033f56546bf4ba4bf2e9847af3"} Mar 19 12:33:39.751226 master-0 kubenswrapper[31830]: I0319 12:33:39.750924 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-sxht2"] Mar 19 12:33:40.718905 master-0 kubenswrapper[31830]: I0319 12:33:40.718852 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-sxht2" event={"ID":"5bc881ed-8448-4279-97e5-cb834cab7a64","Type":"ContainerStarted","Data":"55745afab891f80cc8d864bc450badad8772651bce7dd15d68d5017001ef8de7"} Mar 19 12:33:47.132435 master-0 kubenswrapper[31830]: I0319 12:33:47.132368 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 19 12:33:47.135327 master-0 kubenswrapper[31830]: I0319 12:33:47.135287 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 19 12:33:47.138212 master-0 kubenswrapper[31830]: I0319 12:33:47.138174 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 19 12:33:47.138334 master-0 kubenswrapper[31830]: I0319 12:33:47.138308 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 19 12:33:47.138389 master-0 kubenswrapper[31830]: I0319 12:33:47.138350 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 19 12:33:47.138429 master-0 kubenswrapper[31830]: I0319 12:33:47.138408 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 19 12:33:47.139748 master-0 kubenswrapper[31830]: I0319 12:33:47.139722 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 19 12:33:47.140414 master-0 kubenswrapper[31830]: I0319 12:33:47.140386 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 19 12:33:47.692947 master-0 kubenswrapper[31830]: I0319 12:33:47.692848 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 19 12:33:47.703975 master-0 kubenswrapper[31830]: I0319 12:33:47.703143 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Mar 19 12:33:47.704724 master-0 kubenswrapper[31830]: I0319 12:33:47.704324 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Mar 19 12:33:47.706178 master-0 kubenswrapper[31830]: I0319 12:33:47.704985 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e496a21c-f671-402f-a15c-911b063428c5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e496a21c-f671-402f-a15c-911b063428c5\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 12:33:47.706178 master-0 kubenswrapper[31830]: I0319 12:33:47.705045 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e496a21c-f671-402f-a15c-911b063428c5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e496a21c-f671-402f-a15c-911b063428c5\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 12:33:47.706178 master-0 kubenswrapper[31830]: I0319 12:33:47.705077 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e496a21c-f671-402f-a15c-911b063428c5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e496a21c-f671-402f-a15c-911b063428c5\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 12:33:47.706178 master-0 kubenswrapper[31830]: I0319 12:33:47.705096 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e496a21c-f671-402f-a15c-911b063428c5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e496a21c-f671-402f-a15c-911b063428c5\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 12:33:47.706178 master-0 kubenswrapper[31830]: I0319 12:33:47.705115 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e496a21c-f671-402f-a15c-911b063428c5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e496a21c-f671-402f-a15c-911b063428c5\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 12:33:47.706178 master-0 kubenswrapper[31830]: I0319 12:33:47.705146 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e496a21c-f671-402f-a15c-911b063428c5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e496a21c-f671-402f-a15c-911b063428c5\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 12:33:47.706178 master-0 kubenswrapper[31830]: I0319 12:33:47.705163 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e496a21c-f671-402f-a15c-911b063428c5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e496a21c-f671-402f-a15c-911b063428c5\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 12:33:47.706178 master-0 kubenswrapper[31830]: I0319 12:33:47.705207 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bae8aa1d-b599-4adf-a571-bff2ab669174\" (UniqueName: \"kubernetes.io/csi/topolvm.io^97c74bc3-65ef-49b4-8cbd-106e3edf09b7\") pod \"rabbitmq-cell1-server-0\" (UID: \"e496a21c-f671-402f-a15c-911b063428c5\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 12:33:47.706178 master-0 kubenswrapper[31830]: I0319 12:33:47.705231 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" 
(UniqueName: \"kubernetes.io/configmap/e496a21c-f671-402f-a15c-911b063428c5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e496a21c-f671-402f-a15c-911b063428c5\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 12:33:47.706178 master-0 kubenswrapper[31830]: I0319 12:33:47.705250 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc556\" (UniqueName: \"kubernetes.io/projected/e496a21c-f671-402f-a15c-911b063428c5-kube-api-access-lc556\") pod \"rabbitmq-cell1-server-0\" (UID: \"e496a21c-f671-402f-a15c-911b063428c5\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 12:33:47.706178 master-0 kubenswrapper[31830]: I0319 12:33:47.705267 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e496a21c-f671-402f-a15c-911b063428c5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e496a21c-f671-402f-a15c-911b063428c5\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 12:33:47.712885 master-0 kubenswrapper[31830]: I0319 12:33:47.712096 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Mar 19 12:33:47.713400 master-0 kubenswrapper[31830]: I0319 12:33:47.713377 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Mar 19 12:33:47.716937 master-0 kubenswrapper[31830]: I0319 12:33:47.716900 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Mar 19 12:33:47.740972 master-0 kubenswrapper[31830]: I0319 12:33:47.739790 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 19 12:33:47.806819 master-0 kubenswrapper[31830]: I0319 12:33:47.806741 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e496a21c-f671-402f-a15c-911b063428c5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e496a21c-f671-402f-a15c-911b063428c5\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 12:33:47.807044 master-0 kubenswrapper[31830]: I0319 12:33:47.806805 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3cbc6ce-25bb-4672-bcf9-813c973d8bcf-config-data\") pod \"memcached-0\" (UID: \"f3cbc6ce-25bb-4672-bcf9-813c973d8bcf\") " pod="openstack/memcached-0" Mar 19 12:33:47.807044 master-0 kubenswrapper[31830]: I0319 12:33:47.806888 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e496a21c-f671-402f-a15c-911b063428c5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e496a21c-f671-402f-a15c-911b063428c5\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 12:33:47.807044 master-0 kubenswrapper[31830]: I0319 12:33:47.806908 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e496a21c-f671-402f-a15c-911b063428c5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e496a21c-f671-402f-a15c-911b063428c5\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 12:33:47.807044 master-0 kubenswrapper[31830]: I0319 12:33:47.806946 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/f3cbc6ce-25bb-4672-bcf9-813c973d8bcf-kolla-config\") pod \"memcached-0\" (UID: \"f3cbc6ce-25bb-4672-bcf9-813c973d8bcf\") " pod="openstack/memcached-0" Mar 19 12:33:47.807044 master-0 kubenswrapper[31830]: I0319 12:33:47.806972 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3cbc6ce-25bb-4672-bcf9-813c973d8bcf-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f3cbc6ce-25bb-4672-bcf9-813c973d8bcf\") " pod="openstack/memcached-0" Mar 19 12:33:47.807044 master-0 kubenswrapper[31830]: I0319 12:33:47.807002 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3cbc6ce-25bb-4672-bcf9-813c973d8bcf-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f3cbc6ce-25bb-4672-bcf9-813c973d8bcf\") " pod="openstack/memcached-0" Mar 19 12:33:47.807044 master-0 kubenswrapper[31830]: I0319 12:33:47.807027 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bae8aa1d-b599-4adf-a571-bff2ab669174\" (UniqueName: \"kubernetes.io/csi/topolvm.io^97c74bc3-65ef-49b4-8cbd-106e3edf09b7\") pod \"rabbitmq-cell1-server-0\" (UID: \"e496a21c-f671-402f-a15c-911b063428c5\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 12:33:47.807264 master-0 kubenswrapper[31830]: I0319 12:33:47.807057 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e496a21c-f671-402f-a15c-911b063428c5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e496a21c-f671-402f-a15c-911b063428c5\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 12:33:47.807264 master-0 kubenswrapper[31830]: I0319 12:33:47.807082 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc556\" (UniqueName: \"kubernetes.io/projected/e496a21c-f671-402f-a15c-911b063428c5-kube-api-access-lc556\") pod \"rabbitmq-cell1-server-0\" (UID: \"e496a21c-f671-402f-a15c-911b063428c5\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 12:33:47.807264 master-0 kubenswrapper[31830]: I0319 12:33:47.807104 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e496a21c-f671-402f-a15c-911b063428c5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e496a21c-f671-402f-a15c-911b063428c5\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 12:33:47.807264 master-0 kubenswrapper[31830]: I0319 12:33:47.807175 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e496a21c-f671-402f-a15c-911b063428c5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e496a21c-f671-402f-a15c-911b063428c5\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 12:33:47.807264 master-0 kubenswrapper[31830]: I0319 12:33:47.807196 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsklp\" (UniqueName: \"kubernetes.io/projected/f3cbc6ce-25bb-4672-bcf9-813c973d8bcf-kube-api-access-jsklp\") pod \"memcached-0\" (UID: \"f3cbc6ce-25bb-4672-bcf9-813c973d8bcf\") " pod="openstack/memcached-0" Mar 19 12:33:47.807264 master-0 kubenswrapper[31830]: I0319 12:33:47.807231 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" 
(UniqueName: \"kubernetes.io/empty-dir/e496a21c-f671-402f-a15c-911b063428c5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e496a21c-f671-402f-a15c-911b063428c5\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 12:33:47.807449 master-0 kubenswrapper[31830]: I0319 12:33:47.807268 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e496a21c-f671-402f-a15c-911b063428c5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e496a21c-f671-402f-a15c-911b063428c5\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 12:33:47.807449 master-0 kubenswrapper[31830]: I0319 12:33:47.807290 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e496a21c-f671-402f-a15c-911b063428c5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e496a21c-f671-402f-a15c-911b063428c5\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 12:33:47.808741 master-0 kubenswrapper[31830]: I0319 12:33:47.808413 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e496a21c-f671-402f-a15c-911b063428c5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e496a21c-f671-402f-a15c-911b063428c5\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 12:33:47.808741 master-0 kubenswrapper[31830]: I0319 12:33:47.808533 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e496a21c-f671-402f-a15c-911b063428c5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e496a21c-f671-402f-a15c-911b063428c5\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 12:33:47.808741 master-0 kubenswrapper[31830]: I0319 12:33:47.808574 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e496a21c-f671-402f-a15c-911b063428c5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e496a21c-f671-402f-a15c-911b063428c5\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 12:33:47.811671 master-0 kubenswrapper[31830]: I0319 12:33:47.810703 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e496a21c-f671-402f-a15c-911b063428c5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e496a21c-f671-402f-a15c-911b063428c5\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 12:33:47.813696 master-0 kubenswrapper[31830]: I0319 12:33:47.813639 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e496a21c-f671-402f-a15c-911b063428c5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e496a21c-f671-402f-a15c-911b063428c5\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 12:33:47.815815 master-0 kubenswrapper[31830]: I0319 12:33:47.814185 31830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 19 12:33:47.815815 master-0 kubenswrapper[31830]: I0319 12:33:47.814254 31830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bae8aa1d-b599-4adf-a571-bff2ab669174\" (UniqueName: \"kubernetes.io/csi/topolvm.io^97c74bc3-65ef-49b4-8cbd-106e3edf09b7\") pod \"rabbitmq-cell1-server-0\" (UID: \"e496a21c-f671-402f-a15c-911b063428c5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/64045f84410297b44aed90a9a214e4f1d69bb2b2262c1a1036d1cc7745c77a7e/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Mar 19 12:33:47.815815 master-0 kubenswrapper[31830]: I0319 12:33:47.815225 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e496a21c-f671-402f-a15c-911b063428c5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e496a21c-f671-402f-a15c-911b063428c5\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 19 12:33:47.815815 master-0 kubenswrapper[31830]: I0319 12:33:47.815350 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e496a21c-f671-402f-a15c-911b063428c5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e496a21c-f671-402f-a15c-911b063428c5\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 19 12:33:47.816377 master-0 kubenswrapper[31830]: I0319 12:33:47.816328 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e496a21c-f671-402f-a15c-911b063428c5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e496a21c-f671-402f-a15c-911b063428c5\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 19 12:33:47.820312 master-0 kubenswrapper[31830]: I0319 12:33:47.820261 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e496a21c-f671-402f-a15c-911b063428c5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e496a21c-f671-402f-a15c-911b063428c5\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 19 12:33:47.831601 master-0 kubenswrapper[31830]: I0319 12:33:47.831547 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc556\" (UniqueName: \"kubernetes.io/projected/e496a21c-f671-402f-a15c-911b063428c5-kube-api-access-lc556\") pod \"rabbitmq-cell1-server-0\" (UID: \"e496a21c-f671-402f-a15c-911b063428c5\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 19 12:33:47.913033 master-0 kubenswrapper[31830]: I0319 12:33:47.911739 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3cbc6ce-25bb-4672-bcf9-813c973d8bcf-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f3cbc6ce-25bb-4672-bcf9-813c973d8bcf\") " pod="openstack/memcached-0"
Mar 19 12:33:47.913033 master-0 kubenswrapper[31830]: I0319 12:33:47.911923 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3cbc6ce-25bb-4672-bcf9-813c973d8bcf-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f3cbc6ce-25bb-4672-bcf9-813c973d8bcf\") " pod="openstack/memcached-0"
Mar 19 12:33:47.913033 master-0 kubenswrapper[31830]: I0319 12:33:47.912061 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsklp\" (UniqueName: \"kubernetes.io/projected/f3cbc6ce-25bb-4672-bcf9-813c973d8bcf-kube-api-access-jsklp\") pod \"memcached-0\" (UID: \"f3cbc6ce-25bb-4672-bcf9-813c973d8bcf\") " pod="openstack/memcached-0"
Mar 19 12:33:47.913033 master-0 kubenswrapper[31830]: I0319 12:33:47.912162 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3cbc6ce-25bb-4672-bcf9-813c973d8bcf-config-data\") pod \"memcached-0\" (UID: \"f3cbc6ce-25bb-4672-bcf9-813c973d8bcf\") " pod="openstack/memcached-0"
Mar 19 12:33:47.913033 master-0 kubenswrapper[31830]: I0319 12:33:47.912233 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f3cbc6ce-25bb-4672-bcf9-813c973d8bcf-kolla-config\") pod \"memcached-0\" (UID: \"f3cbc6ce-25bb-4672-bcf9-813c973d8bcf\") " pod="openstack/memcached-0"
Mar 19 12:33:47.913424 master-0 kubenswrapper[31830]: I0319 12:33:47.913392 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f3cbc6ce-25bb-4672-bcf9-813c973d8bcf-kolla-config\") pod \"memcached-0\" (UID: \"f3cbc6ce-25bb-4672-bcf9-813c973d8bcf\") " pod="openstack/memcached-0"
Mar 19 12:33:47.914150 master-0 kubenswrapper[31830]: I0319 12:33:47.914113 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3cbc6ce-25bb-4672-bcf9-813c973d8bcf-config-data\") pod \"memcached-0\" (UID: \"f3cbc6ce-25bb-4672-bcf9-813c973d8bcf\") " pod="openstack/memcached-0"
Mar 19 12:33:47.919855 master-0 kubenswrapper[31830]: I0319 12:33:47.919798 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3cbc6ce-25bb-4672-bcf9-813c973d8bcf-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f3cbc6ce-25bb-4672-bcf9-813c973d8bcf\") " pod="openstack/memcached-0"
Mar 19 12:33:47.921939 master-0 kubenswrapper[31830]: I0319 12:33:47.920535 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3cbc6ce-25bb-4672-bcf9-813c973d8bcf-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f3cbc6ce-25bb-4672-bcf9-813c973d8bcf\") " pod="openstack/memcached-0"
Mar 19 12:33:47.978703 master-0 kubenswrapper[31830]: I0319 12:33:47.976199 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsklp\" (UniqueName: \"kubernetes.io/projected/f3cbc6ce-25bb-4672-bcf9-813c973d8bcf-kube-api-access-jsklp\") pod \"memcached-0\" (UID: \"f3cbc6ce-25bb-4672-bcf9-813c973d8bcf\") " pod="openstack/memcached-0"
Mar 19 12:33:48.059835 master-0 kubenswrapper[31830]: I0319 12:33:48.059634 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Mar 19 12:33:49.495441 master-0 kubenswrapper[31830]: I0319 12:33:49.495385 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bae8aa1d-b599-4adf-a571-bff2ab669174\" (UniqueName: \"kubernetes.io/csi/topolvm.io^97c74bc3-65ef-49b4-8cbd-106e3edf09b7\") pod \"rabbitmq-cell1-server-0\" (UID: \"e496a21c-f671-402f-a15c-911b063428c5\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 19 12:33:49.558943 master-0 kubenswrapper[31830]: I0319 12:33:49.558898 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Mar 19 12:33:50.160066 master-0 kubenswrapper[31830]: I0319 12:33:50.159988 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 19 12:33:50.162466 master-0 kubenswrapper[31830]: I0319 12:33:50.162398 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Mar 19 12:33:50.166857 master-0 kubenswrapper[31830]: I0319 12:33:50.166775 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Mar 19 12:33:50.167890 master-0 kubenswrapper[31830]: I0319 12:33:50.167836 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Mar 19 12:33:50.168016 master-0 kubenswrapper[31830]: I0319 12:33:50.167996 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Mar 19 12:33:50.168198 master-0 kubenswrapper[31830]: I0319 12:33:50.168140 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Mar 19 12:33:50.168335 master-0 kubenswrapper[31830]: I0319 12:33:50.168309 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Mar 19 12:33:50.168455 master-0 kubenswrapper[31830]: I0319 12:33:50.168436 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Mar 19 12:33:50.175885 master-0 kubenswrapper[31830]: I0319 12:33:50.175787 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 19 12:33:50.266311 master-0 kubenswrapper[31830]: I0319 12:33:50.266245 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/aee036d1-9a03-42ac-9beb-ef7ecc09c98d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"aee036d1-9a03-42ac-9beb-ef7ecc09c98d\") " pod="openstack/rabbitmq-server-0"
Mar 19 12:33:50.266311 master-0 kubenswrapper[31830]: I0319 12:33:50.266311 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1781bd31-cf4f-4488-8c71-cb00178fbcf3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^186be095-9493-4d00-b006-a18c41f62ab5\") pod \"rabbitmq-server-0\" (UID: \"aee036d1-9a03-42ac-9beb-ef7ecc09c98d\") " pod="openstack/rabbitmq-server-0"
Mar 19 12:33:50.266601 master-0 kubenswrapper[31830]: I0319 12:33:50.266335 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aee036d1-9a03-42ac-9beb-ef7ecc09c98d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"aee036d1-9a03-42ac-9beb-ef7ecc09c98d\") " pod="openstack/rabbitmq-server-0"
Mar 19 12:33:50.266601 master-0 kubenswrapper[31830]: I0319 12:33:50.266354 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aee036d1-9a03-42ac-9beb-ef7ecc09c98d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"aee036d1-9a03-42ac-9beb-ef7ecc09c98d\") " pod="openstack/rabbitmq-server-0"
Mar 19 12:33:50.266601 master-0 kubenswrapper[31830]: I0319 12:33:50.266376 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aee036d1-9a03-42ac-9beb-ef7ecc09c98d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"aee036d1-9a03-42ac-9beb-ef7ecc09c98d\") " pod="openstack/rabbitmq-server-0"
Mar 19 12:33:50.266601 master-0 kubenswrapper[31830]: I0319 12:33:50.266396 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aee036d1-9a03-42ac-9beb-ef7ecc09c98d-config-data\") pod \"rabbitmq-server-0\" (UID: \"aee036d1-9a03-42ac-9beb-ef7ecc09c98d\") " pod="openstack/rabbitmq-server-0"
Mar 19 12:33:50.266601 master-0 kubenswrapper[31830]: I0319 12:33:50.266433 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aee036d1-9a03-42ac-9beb-ef7ecc09c98d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"aee036d1-9a03-42ac-9beb-ef7ecc09c98d\") " pod="openstack/rabbitmq-server-0"
Mar 19 12:33:50.266826 master-0 kubenswrapper[31830]: I0319 12:33:50.266603 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aee036d1-9a03-42ac-9beb-ef7ecc09c98d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"aee036d1-9a03-42ac-9beb-ef7ecc09c98d\") " pod="openstack/rabbitmq-server-0"
Mar 19 12:33:50.266826 master-0 kubenswrapper[31830]: I0319 12:33:50.266687 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aee036d1-9a03-42ac-9beb-ef7ecc09c98d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"aee036d1-9a03-42ac-9beb-ef7ecc09c98d\") " pod="openstack/rabbitmq-server-0"
Mar 19 12:33:50.266826 master-0 kubenswrapper[31830]: I0319 12:33:50.266717 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/aee036d1-9a03-42ac-9beb-ef7ecc09c98d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"aee036d1-9a03-42ac-9beb-ef7ecc09c98d\") " pod="openstack/rabbitmq-server-0"
Mar 19 12:33:50.266826 master-0 kubenswrapper[31830]: I0319 12:33:50.266789 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24sjf\" (UniqueName: \"kubernetes.io/projected/aee036d1-9a03-42ac-9beb-ef7ecc09c98d-kube-api-access-24sjf\") pod \"rabbitmq-server-0\" (UID: \"aee036d1-9a03-42ac-9beb-ef7ecc09c98d\") " pod="openstack/rabbitmq-server-0"
Mar 19 12:33:50.369518 master-0 kubenswrapper[31830]: I0319 12:33:50.369452 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/aee036d1-9a03-42ac-9beb-ef7ecc09c98d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"aee036d1-9a03-42ac-9beb-ef7ecc09c98d\") " pod="openstack/rabbitmq-server-0"
Mar 19 12:33:50.369711 master-0 kubenswrapper[31830]: I0319 12:33:50.369528 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1781bd31-cf4f-4488-8c71-cb00178fbcf3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^186be095-9493-4d00-b006-a18c41f62ab5\") pod \"rabbitmq-server-0\" (UID: \"aee036d1-9a03-42ac-9beb-ef7ecc09c98d\") " pod="openstack/rabbitmq-server-0"
Mar 19 12:33:50.369711 master-0 kubenswrapper[31830]: I0319 12:33:50.369554 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aee036d1-9a03-42ac-9beb-ef7ecc09c98d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"aee036d1-9a03-42ac-9beb-ef7ecc09c98d\") " pod="openstack/rabbitmq-server-0"
Mar 19 12:33:50.369711 master-0 kubenswrapper[31830]: I0319 12:33:50.369581 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aee036d1-9a03-42ac-9beb-ef7ecc09c98d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"aee036d1-9a03-42ac-9beb-ef7ecc09c98d\") " pod="openstack/rabbitmq-server-0"
Mar 19 12:33:50.369711 master-0 kubenswrapper[31830]: I0319 12:33:50.369611 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aee036d1-9a03-42ac-9beb-ef7ecc09c98d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"aee036d1-9a03-42ac-9beb-ef7ecc09c98d\") " pod="openstack/rabbitmq-server-0"
Mar 19 12:33:50.369711 master-0 kubenswrapper[31830]: I0319 12:33:50.369647 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aee036d1-9a03-42ac-9beb-ef7ecc09c98d-config-data\") pod \"rabbitmq-server-0\" (UID: \"aee036d1-9a03-42ac-9beb-ef7ecc09c98d\") " pod="openstack/rabbitmq-server-0"
Mar 19 12:33:50.369711 master-0 kubenswrapper[31830]: I0319 12:33:50.369693 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aee036d1-9a03-42ac-9beb-ef7ecc09c98d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"aee036d1-9a03-42ac-9beb-ef7ecc09c98d\") " pod="openstack/rabbitmq-server-0"
Mar 19 12:33:50.369962 master-0 kubenswrapper[31830]: I0319 12:33:50.369728 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aee036d1-9a03-42ac-9beb-ef7ecc09c98d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"aee036d1-9a03-42ac-9beb-ef7ecc09c98d\") " pod="openstack/rabbitmq-server-0"
Mar 19 12:33:50.369962 master-0 kubenswrapper[31830]: I0319 12:33:50.369754 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aee036d1-9a03-42ac-9beb-ef7ecc09c98d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"aee036d1-9a03-42ac-9beb-ef7ecc09c98d\") " pod="openstack/rabbitmq-server-0"
Mar 19 12:33:50.369962 master-0 kubenswrapper[31830]: I0319 12:33:50.369774 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/aee036d1-9a03-42ac-9beb-ef7ecc09c98d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"aee036d1-9a03-42ac-9beb-ef7ecc09c98d\") " pod="openstack/rabbitmq-server-0"
Mar 19 12:33:50.369962 master-0 kubenswrapper[31830]: I0319 12:33:50.369901 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24sjf\" (UniqueName: \"kubernetes.io/projected/aee036d1-9a03-42ac-9beb-ef7ecc09c98d-kube-api-access-24sjf\") pod \"rabbitmq-server-0\" (UID: \"aee036d1-9a03-42ac-9beb-ef7ecc09c98d\") " pod="openstack/rabbitmq-server-0"
Mar 19 12:33:50.371288 master-0 kubenswrapper[31830]: I0319 12:33:50.371258 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aee036d1-9a03-42ac-9beb-ef7ecc09c98d-config-data\") pod \"rabbitmq-server-0\" (UID: \"aee036d1-9a03-42ac-9beb-ef7ecc09c98d\") " pod="openstack/rabbitmq-server-0"
Mar 19 12:33:50.371440 master-0 kubenswrapper[31830]: I0319 12:33:50.371379 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aee036d1-9a03-42ac-9beb-ef7ecc09c98d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"aee036d1-9a03-42ac-9beb-ef7ecc09c98d\") " pod="openstack/rabbitmq-server-0"
Mar 19 12:33:50.371942 master-0 kubenswrapper[31830]: I0319 12:33:50.371891 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aee036d1-9a03-42ac-9beb-ef7ecc09c98d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"aee036d1-9a03-42ac-9beb-ef7ecc09c98d\") " pod="openstack/rabbitmq-server-0"
Mar 19 12:33:50.372058 master-0 kubenswrapper[31830]: I0319 12:33:50.372027 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/aee036d1-9a03-42ac-9beb-ef7ecc09c98d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"aee036d1-9a03-42ac-9beb-ef7ecc09c98d\") " pod="openstack/rabbitmq-server-0"
Mar 19 12:33:50.372905 master-0 kubenswrapper[31830]: I0319 12:33:50.372854 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aee036d1-9a03-42ac-9beb-ef7ecc09c98d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"aee036d1-9a03-42ac-9beb-ef7ecc09c98d\") " pod="openstack/rabbitmq-server-0"
Mar 19 12:33:50.374276 master-0 kubenswrapper[31830]: I0319 12:33:50.374240 31830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 19 12:33:50.374339 master-0 kubenswrapper[31830]: I0319 12:33:50.374269 31830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1781bd31-cf4f-4488-8c71-cb00178fbcf3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^186be095-9493-4d00-b006-a18c41f62ab5\") pod \"rabbitmq-server-0\" (UID: \"aee036d1-9a03-42ac-9beb-ef7ecc09c98d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/146da145244cf93e1352ca3a2be4f2b0559718b2a69701385fa8b80c8c8ec903/globalmount\"" pod="openstack/rabbitmq-server-0"
Mar 19 12:33:50.375329 master-0 kubenswrapper[31830]: I0319 12:33:50.375288 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/aee036d1-9a03-42ac-9beb-ef7ecc09c98d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"aee036d1-9a03-42ac-9beb-ef7ecc09c98d\") " pod="openstack/rabbitmq-server-0"
Mar 19 12:33:50.376322 master-0 kubenswrapper[31830]: I0319 12:33:50.376288 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aee036d1-9a03-42ac-9beb-ef7ecc09c98d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"aee036d1-9a03-42ac-9beb-ef7ecc09c98d\") " pod="openstack/rabbitmq-server-0"
Mar 19 12:33:50.378400 master-0 kubenswrapper[31830]: I0319 12:33:50.378375 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aee036d1-9a03-42ac-9beb-ef7ecc09c98d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"aee036d1-9a03-42ac-9beb-ef7ecc09c98d\") " pod="openstack/rabbitmq-server-0"
Mar 19 12:33:50.386737 master-0 kubenswrapper[31830]: I0319 12:33:50.386698 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aee036d1-9a03-42ac-9beb-ef7ecc09c98d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"aee036d1-9a03-42ac-9beb-ef7ecc09c98d\") " pod="openstack/rabbitmq-server-0"
Mar 19 12:33:50.404084 master-0 kubenswrapper[31830]: I0319 12:33:50.404037 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24sjf\" (UniqueName: \"kubernetes.io/projected/aee036d1-9a03-42ac-9beb-ef7ecc09c98d-kube-api-access-24sjf\") pod \"rabbitmq-server-0\" (UID: \"aee036d1-9a03-42ac-9beb-ef7ecc09c98d\") " pod="openstack/rabbitmq-server-0"
Mar 19 12:33:50.572745 master-0 kubenswrapper[31830]: I0319 12:33:50.572651 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Mar 19 12:33:50.574891 master-0 kubenswrapper[31830]: I0319 12:33:50.574833 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Mar 19 12:33:50.586851 master-0 kubenswrapper[31830]: I0319 12:33:50.586758 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Mar 19 12:33:50.591039 master-0 kubenswrapper[31830]: I0319 12:33:50.590945 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Mar 19 12:33:50.591251 master-0 kubenswrapper[31830]: I0319 12:33:50.591191 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Mar 19 12:33:50.606777 master-0 kubenswrapper[31830]: I0319 12:33:50.606702 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Mar 19 12:33:50.681187 master-0 kubenswrapper[31830]: I0319 12:33:50.681142 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae148a74-f9ec-4ee8-be58-c14c466f4b9f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ae148a74-f9ec-4ee8-be58-c14c466f4b9f\") " pod="openstack/openstack-galera-0"
Mar 19 12:33:50.681911 master-0 kubenswrapper[31830]: I0319 12:33:50.681597 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ae148a74-f9ec-4ee8-be58-c14c466f4b9f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ae148a74-f9ec-4ee8-be58-c14c466f4b9f\") " pod="openstack/openstack-galera-0"
Mar 19 12:33:50.681911 master-0 kubenswrapper[31830]: I0319 12:33:50.681682 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae148a74-f9ec-4ee8-be58-c14c466f4b9f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ae148a74-f9ec-4ee8-be58-c14c466f4b9f\") " pod="openstack/openstack-galera-0"
Mar 19 12:33:50.681911 master-0 kubenswrapper[31830]: I0319 12:33:50.681818 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3c3c418b-8e35-4ca0-85a5-d74cd7036430\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f9132448-54a0-45dc-87d6-a65a8d3b93aa\") pod \"openstack-galera-0\" (UID: \"ae148a74-f9ec-4ee8-be58-c14c466f4b9f\") " pod="openstack/openstack-galera-0"
Mar 19 12:33:50.681911 master-0 kubenswrapper[31830]: I0319 12:33:50.681856 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ae148a74-f9ec-4ee8-be58-c14c466f4b9f-kolla-config\") pod \"openstack-galera-0\" (UID: \"ae148a74-f9ec-4ee8-be58-c14c466f4b9f\") " pod="openstack/openstack-galera-0"
Mar 19 12:33:50.682163 master-0 kubenswrapper[31830]: I0319 12:33:50.681917 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frfq7\" (UniqueName: \"kubernetes.io/projected/ae148a74-f9ec-4ee8-be58-c14c466f4b9f-kube-api-access-frfq7\") pod \"openstack-galera-0\" (UID: \"ae148a74-f9ec-4ee8-be58-c14c466f4b9f\") " pod="openstack/openstack-galera-0"
Mar 19 12:33:50.682927 master-0 kubenswrapper[31830]: I0319 12:33:50.681953 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae148a74-f9ec-4ee8-be58-c14c466f4b9f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ae148a74-f9ec-4ee8-be58-c14c466f4b9f\") " pod="openstack/openstack-galera-0"
Mar 19 12:33:50.683132 master-0 kubenswrapper[31830]: I0319 12:33:50.683088 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ae148a74-f9ec-4ee8-be58-c14c466f4b9f-config-data-default\") pod \"openstack-galera-0\" (UID: \"ae148a74-f9ec-4ee8-be58-c14c466f4b9f\") " pod="openstack/openstack-galera-0"
Mar 19 12:33:50.789261 master-0 kubenswrapper[31830]: I0319 12:33:50.789155 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3c3c418b-8e35-4ca0-85a5-d74cd7036430\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f9132448-54a0-45dc-87d6-a65a8d3b93aa\") pod \"openstack-galera-0\" (UID: \"ae148a74-f9ec-4ee8-be58-c14c466f4b9f\") " pod="openstack/openstack-galera-0"
Mar 19 12:33:50.790555 master-0 kubenswrapper[31830]: I0319 12:33:50.790532 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ae148a74-f9ec-4ee8-be58-c14c466f4b9f-kolla-config\") pod \"openstack-galera-0\" (UID: \"ae148a74-f9ec-4ee8-be58-c14c466f4b9f\") " pod="openstack/openstack-galera-0"
Mar 19 12:33:50.790716 master-0 kubenswrapper[31830]: I0319 12:33:50.790700 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frfq7\" (UniqueName: \"kubernetes.io/projected/ae148a74-f9ec-4ee8-be58-c14c466f4b9f-kube-api-access-frfq7\") pod \"openstack-galera-0\" (UID: \"ae148a74-f9ec-4ee8-be58-c14c466f4b9f\") " pod="openstack/openstack-galera-0"
Mar 19 12:33:50.790914 master-0 kubenswrapper[31830]: I0319 12:33:50.790895 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae148a74-f9ec-4ee8-be58-c14c466f4b9f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ae148a74-f9ec-4ee8-be58-c14c466f4b9f\") " pod="openstack/openstack-galera-0"
Mar 19 12:33:50.791063 master-0 kubenswrapper[31830]: I0319 12:33:50.791047 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ae148a74-f9ec-4ee8-be58-c14c466f4b9f-config-data-default\") pod \"openstack-galera-0\" (UID: \"ae148a74-f9ec-4ee8-be58-c14c466f4b9f\") " pod="openstack/openstack-galera-0"
Mar 19 12:33:50.791332 master-0 kubenswrapper[31830]: I0319 12:33:50.791299 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae148a74-f9ec-4ee8-be58-c14c466f4b9f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ae148a74-f9ec-4ee8-be58-c14c466f4b9f\") " pod="openstack/openstack-galera-0"
Mar 19 12:33:50.791645 master-0 kubenswrapper[31830]: I0319 12:33:50.791624 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ae148a74-f9ec-4ee8-be58-c14c466f4b9f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ae148a74-f9ec-4ee8-be58-c14c466f4b9f\") " pod="openstack/openstack-galera-0"
Mar 19 12:33:50.791781 master-0 kubenswrapper[31830]: I0319 12:33:50.791762 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae148a74-f9ec-4ee8-be58-c14c466f4b9f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ae148a74-f9ec-4ee8-be58-c14c466f4b9f\") " pod="openstack/openstack-galera-0"
Mar 19 12:33:50.794737 master-0 kubenswrapper[31830]: I0319 12:33:50.794689 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae148a74-f9ec-4ee8-be58-c14c466f4b9f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ae148a74-f9ec-4ee8-be58-c14c466f4b9f\") " pod="openstack/openstack-galera-0"
Mar 19 12:33:50.794990 master-0 kubenswrapper[31830]: I0319 12:33:50.794968 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ae148a74-f9ec-4ee8-be58-c14c466f4b9f-config-data-default\") pod \"openstack-galera-0\" (UID: \"ae148a74-f9ec-4ee8-be58-c14c466f4b9f\") " pod="openstack/openstack-galera-0"
Mar 19 12:33:50.796423 master-0 kubenswrapper[31830]: I0319 12:33:50.796166 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ae148a74-f9ec-4ee8-be58-c14c466f4b9f-kolla-config\") pod \"openstack-galera-0\" (UID: \"ae148a74-f9ec-4ee8-be58-c14c466f4b9f\") " pod="openstack/openstack-galera-0"
Mar 19 12:33:50.798294 master-0 kubenswrapper[31830]: I0319 12:33:50.798269 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ae148a74-f9ec-4ee8-be58-c14c466f4b9f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ae148a74-f9ec-4ee8-be58-c14c466f4b9f\") " pod="openstack/openstack-galera-0"
Mar 19 12:33:50.798940 master-0 kubenswrapper[31830]: I0319 12:33:50.798843 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae148a74-f9ec-4ee8-be58-c14c466f4b9f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ae148a74-f9ec-4ee8-be58-c14c466f4b9f\") " pod="openstack/openstack-galera-0"
Mar 19 12:33:50.811894 master-0 kubenswrapper[31830]: I0319 12:33:50.807741 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae148a74-f9ec-4ee8-be58-c14c466f4b9f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ae148a74-f9ec-4ee8-be58-c14c466f4b9f\") " pod="openstack/openstack-galera-0"
Mar 19 12:33:50.811894 master-0 kubenswrapper[31830]: I0319 12:33:50.808854 31830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 19 12:33:50.811894 master-0 kubenswrapper[31830]: I0319 12:33:50.808893 31830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3c3c418b-8e35-4ca0-85a5-d74cd7036430\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f9132448-54a0-45dc-87d6-a65a8d3b93aa\") pod \"openstack-galera-0\" (UID: \"ae148a74-f9ec-4ee8-be58-c14c466f4b9f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/1ab74d2d7d77d025ae5815864293a2a199cc24c3cca12f21921f190044eae428/globalmount\"" pod="openstack/openstack-galera-0"
Mar 19 12:33:50.826081 master-0 kubenswrapper[31830]: I0319 12:33:50.825969 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frfq7\" (UniqueName: \"kubernetes.io/projected/ae148a74-f9ec-4ee8-be58-c14c466f4b9f-kube-api-access-frfq7\") pod \"openstack-galera-0\" (UID: \"ae148a74-f9ec-4ee8-be58-c14c466f4b9f\") " pod="openstack/openstack-galera-0"
Mar 19 12:33:50.833001 master-0 kubenswrapper[31830]: I0319 12:33:50.832949 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-kmq6z"]
Mar 19 12:33:50.834696 master-0 kubenswrapper[31830]: I0319 12:33:50.834669 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kmq6z"
Mar 19 12:33:50.836686 master-0 kubenswrapper[31830]: I0319 12:33:50.836628 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Mar 19 12:33:50.841224 master-0 kubenswrapper[31830]: I0319 12:33:50.841178 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Mar 19 12:33:50.850838 master-0 kubenswrapper[31830]: I0319 12:33:50.850681 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-xpwvp"]
Mar 19 12:33:50.852975 master-0 kubenswrapper[31830]: I0319 12:33:50.852869 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-xpwvp"
Mar 19 12:33:50.860944 master-0 kubenswrapper[31830]: I0319 12:33:50.860891 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kmq6z"]
Mar 19 12:33:50.892170 master-0 kubenswrapper[31830]: I0319 12:33:50.892115 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-xpwvp"]
Mar 19 12:33:50.894359 master-0 kubenswrapper[31830]: I0319 12:33:50.894307 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0d516497-0523-41c4-a5cc-75fe94977ac3-var-run\") pod \"ovn-controller-kmq6z\" (UID: \"0d516497-0523-41c4-a5cc-75fe94977ac3\") " pod="openstack/ovn-controller-kmq6z"
Mar 19 12:33:50.894463 master-0 kubenswrapper[31830]: I0319 12:33:50.894366 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3cc6301e-c3c2-4a62-af7b-122fbdcd5552-scripts\") pod \"ovn-controller-ovs-xpwvp\" (UID: \"3cc6301e-c3c2-4a62-af7b-122fbdcd5552\") " pod="openstack/ovn-controller-ovs-xpwvp"
Mar 19 12:33:50.894463 master-0 kubenswrapper[31830]: I0319 12:33:50.894400 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0d516497-0523-41c4-a5cc-75fe94977ac3-var-run-ovn\") pod \"ovn-controller-kmq6z\" (UID: \"0d516497-0523-41c4-a5cc-75fe94977ac3\") " pod="openstack/ovn-controller-kmq6z"
Mar 19 12:33:50.894463 master-0 kubenswrapper[31830]: I0319 12:33:50.894447 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/3cc6301e-c3c2-4a62-af7b-122fbdcd5552-var-lib\") pod \"ovn-controller-ovs-xpwvp\" (UID: \"3cc6301e-c3c2-4a62-af7b-122fbdcd5552\") " pod="openstack/ovn-controller-ovs-xpwvp"
Mar 19 12:33:50.894603 master-0 kubenswrapper[31830]: I0319 12:33:50.894488 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d516497-0523-41c4-a5cc-75fe94977ac3-combined-ca-bundle\") pod \"ovn-controller-kmq6z\" (UID: \"0d516497-0523-41c4-a5cc-75fe94977ac3\") " pod="openstack/ovn-controller-kmq6z"
Mar 19 12:33:50.894603 master-0 kubenswrapper[31830]: I0319 12:33:50.894523 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0d516497-0523-41c4-a5cc-75fe94977ac3-var-log-ovn\") pod \"ovn-controller-kmq6z\" (UID: \"0d516497-0523-41c4-a5cc-75fe94977ac3\") " pod="openstack/ovn-controller-kmq6z"
Mar 19 12:33:50.894603 master-0 kubenswrapper[31830]: I0319 12:33:50.894548 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d516497-0523-41c4-a5cc-75fe94977ac3-scripts\") pod \"ovn-controller-kmq6z\" (UID: \"0d516497-0523-41c4-a5cc-75fe94977ac3\") " pod="openstack/ovn-controller-kmq6z"
Mar 19 12:33:50.894603 master-0 kubenswrapper[31830]: I0319 12:33:50.894578 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3cc6301e-c3c2-4a62-af7b-122fbdcd5552-var-run\") pod \"ovn-controller-ovs-xpwvp\" (UID: \"3cc6301e-c3c2-4a62-af7b-122fbdcd5552\") " pod="openstack/ovn-controller-ovs-xpwvp"
Mar 19 12:33:50.894603 master-0 kubenswrapper[31830]: I0319 12:33:50.894606 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/3cc6301e-c3c2-4a62-af7b-122fbdcd5552-etc-ovs\") pod \"ovn-controller-ovs-xpwvp\" (UID: \"3cc6301e-c3c2-4a62-af7b-122fbdcd5552\") " pod="openstack/ovn-controller-ovs-xpwvp"
Mar 19 12:33:50.894871 master-0 kubenswrapper[31830]: I0319 12:33:50.894626 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3cc6301e-c3c2-4a62-af7b-122fbdcd5552-var-log\") pod \"ovn-controller-ovs-xpwvp\" (UID: \"3cc6301e-c3c2-4a62-af7b-122fbdcd5552\") " pod="openstack/ovn-controller-ovs-xpwvp"
Mar 19 12:33:50.894871 master-0 kubenswrapper[31830]: I0319 12:33:50.894665 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d516497-0523-41c4-a5cc-75fe94977ac3-ovn-controller-tls-certs\") pod \"ovn-controller-kmq6z\" (UID: \"0d516497-0523-41c4-a5cc-75fe94977ac3\") " pod="openstack/ovn-controller-kmq6z"
Mar 19 12:33:50.894871 master-0 kubenswrapper[31830]: I0319 12:33:50.894685 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjclg\" (UniqueName: \"kubernetes.io/projected/0d516497-0523-41c4-a5cc-75fe94977ac3-kube-api-access-wjclg\") pod \"ovn-controller-kmq6z\" (UID: \"0d516497-0523-41c4-a5cc-75fe94977ac3\") " pod="openstack/ovn-controller-kmq6z"
Mar 19 12:33:50.894871 master-0 kubenswrapper[31830]: I0319 12:33:50.894703 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tprk6\" (UniqueName: \"kubernetes.io/projected/3cc6301e-c3c2-4a62-af7b-122fbdcd5552-kube-api-access-tprk6\") pod \"ovn-controller-ovs-xpwvp\" (UID: \"3cc6301e-c3c2-4a62-af7b-122fbdcd5552\") " pod="openstack/ovn-controller-ovs-xpwvp"
Mar 19 12:33:51.004595 master-0 kubenswrapper[31830]: I0319 12:33:51.000330 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d516497-0523-41c4-a5cc-75fe94977ac3-combined-ca-bundle\") pod \"ovn-controller-kmq6z\" (UID: \"0d516497-0523-41c4-a5cc-75fe94977ac3\") " pod="openstack/ovn-controller-kmq6z"
Mar 19 12:33:51.004595 master-0 kubenswrapper[31830]: I0319 12:33:51.000462 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0d516497-0523-41c4-a5cc-75fe94977ac3-var-log-ovn\") pod \"ovn-controller-kmq6z\" (UID: \"0d516497-0523-41c4-a5cc-75fe94977ac3\") " pod="openstack/ovn-controller-kmq6z"
Mar 19 12:33:51.004595 master-0 kubenswrapper[31830]: I0319 12:33:51.000504 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d516497-0523-41c4-a5cc-75fe94977ac3-scripts\") pod \"ovn-controller-kmq6z\" (UID: \"0d516497-0523-41c4-a5cc-75fe94977ac3\") " pod="openstack/ovn-controller-kmq6z"
Mar 19 12:33:51.004595 master-0 kubenswrapper[31830]: I0319 12:33:51.000565 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3cc6301e-c3c2-4a62-af7b-122fbdcd5552-var-run\") pod \"ovn-controller-ovs-xpwvp\" (UID: \"3cc6301e-c3c2-4a62-af7b-122fbdcd5552\") " pod="openstack/ovn-controller-ovs-xpwvp"
Mar 19 12:33:51.004595 master-0 kubenswrapper[31830]: I0319 12:33:51.000679 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/3cc6301e-c3c2-4a62-af7b-122fbdcd5552-etc-ovs\") pod \"ovn-controller-ovs-xpwvp\" (UID: \"3cc6301e-c3c2-4a62-af7b-122fbdcd5552\") " pod="openstack/ovn-controller-ovs-xpwvp"
Mar 19 12:33:51.004595 master-0 kubenswrapper[31830]: I0319 12:33:51.000736 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3cc6301e-c3c2-4a62-af7b-122fbdcd5552-var-log\") pod \"ovn-controller-ovs-xpwvp\" (UID: \"3cc6301e-c3c2-4a62-af7b-122fbdcd5552\") " pod="openstack/ovn-controller-ovs-xpwvp"
Mar 19 12:33:51.004595 master-0 kubenswrapper[31830]: I0319 12:33:51.000827 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d516497-0523-41c4-a5cc-75fe94977ac3-ovn-controller-tls-certs\") pod \"ovn-controller-kmq6z\" (UID: \"0d516497-0523-41c4-a5cc-75fe94977ac3\") " pod="openstack/ovn-controller-kmq6z"
Mar 19 12:33:51.004595 master-0 kubenswrapper[31830]: I0319 12:33:51.000849 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjclg\" (UniqueName: \"kubernetes.io/projected/0d516497-0523-41c4-a5cc-75fe94977ac3-kube-api-access-wjclg\") pod \"ovn-controller-kmq6z\" (UID: \"0d516497-0523-41c4-a5cc-75fe94977ac3\") " pod="openstack/ovn-controller-kmq6z"
Mar 19 12:33:51.004595 master-0 kubenswrapper[31830]: I0319 12:33:51.000883 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tprk6\" (UniqueName: \"kubernetes.io/projected/3cc6301e-c3c2-4a62-af7b-122fbdcd5552-kube-api-access-tprk6\") pod \"ovn-controller-ovs-xpwvp\" (UID: \"3cc6301e-c3c2-4a62-af7b-122fbdcd5552\") " pod="openstack/ovn-controller-ovs-xpwvp"
Mar 19 12:33:51.004595 master-0 kubenswrapper[31830]: I0319 12:33:51.000937 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0d516497-0523-41c4-a5cc-75fe94977ac3-var-run\") pod \"ovn-controller-kmq6z\" (UID: \"0d516497-0523-41c4-a5cc-75fe94977ac3\") " pod="openstack/ovn-controller-kmq6z"
Mar 19 12:33:51.004595 master-0 kubenswrapper[31830]: I0319 12:33:51.000987 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3cc6301e-c3c2-4a62-af7b-122fbdcd5552-scripts\") pod \"ovn-controller-ovs-xpwvp\" (UID: \"3cc6301e-c3c2-4a62-af7b-122fbdcd5552\") " pod="openstack/ovn-controller-ovs-xpwvp"
Mar 19 12:33:51.004595 master-0 kubenswrapper[31830]: I0319 12:33:51.001017 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0d516497-0523-41c4-a5cc-75fe94977ac3-var-run-ovn\") pod \"ovn-controller-kmq6z\" (UID: \"0d516497-0523-41c4-a5cc-75fe94977ac3\") " pod="openstack/ovn-controller-kmq6z"
Mar 19 12:33:51.004595 master-0 kubenswrapper[31830]: I0319 12:33:51.001067 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName:
\"kubernetes.io/host-path/3cc6301e-c3c2-4a62-af7b-122fbdcd5552-var-lib\") pod \"ovn-controller-ovs-xpwvp\" (UID: \"3cc6301e-c3c2-4a62-af7b-122fbdcd5552\") " pod="openstack/ovn-controller-ovs-xpwvp" Mar 19 12:33:51.004595 master-0 kubenswrapper[31830]: I0319 12:33:51.003736 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/3cc6301e-c3c2-4a62-af7b-122fbdcd5552-var-lib\") pod \"ovn-controller-ovs-xpwvp\" (UID: \"3cc6301e-c3c2-4a62-af7b-122fbdcd5552\") " pod="openstack/ovn-controller-ovs-xpwvp" Mar 19 12:33:51.007393 master-0 kubenswrapper[31830]: I0319 12:33:51.006838 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0d516497-0523-41c4-a5cc-75fe94977ac3-var-run\") pod \"ovn-controller-kmq6z\" (UID: \"0d516497-0523-41c4-a5cc-75fe94977ac3\") " pod="openstack/ovn-controller-kmq6z" Mar 19 12:33:51.007393 master-0 kubenswrapper[31830]: I0319 12:33:51.007272 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3cc6301e-c3c2-4a62-af7b-122fbdcd5552-var-run\") pod \"ovn-controller-ovs-xpwvp\" (UID: \"3cc6301e-c3c2-4a62-af7b-122fbdcd5552\") " pod="openstack/ovn-controller-ovs-xpwvp" Mar 19 12:33:51.007677 master-0 kubenswrapper[31830]: I0319 12:33:51.007574 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0d516497-0523-41c4-a5cc-75fe94977ac3-var-log-ovn\") pod \"ovn-controller-kmq6z\" (UID: \"0d516497-0523-41c4-a5cc-75fe94977ac3\") " pod="openstack/ovn-controller-kmq6z" Mar 19 12:33:51.008443 master-0 kubenswrapper[31830]: I0319 12:33:51.008417 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d516497-0523-41c4-a5cc-75fe94977ac3-combined-ca-bundle\") pod \"ovn-controller-kmq6z\" (UID: \"0d516497-0523-41c4-a5cc-75fe94977ac3\") " pod="openstack/ovn-controller-kmq6z" Mar 19 12:33:51.008568 master-0 kubenswrapper[31830]: I0319 12:33:51.008543 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0d516497-0523-41c4-a5cc-75fe94977ac3-var-run-ovn\") pod \"ovn-controller-kmq6z\" (UID: \"0d516497-0523-41c4-a5cc-75fe94977ac3\") " pod="openstack/ovn-controller-kmq6z" Mar 19 12:33:51.009417 master-0 kubenswrapper[31830]: I0319 12:33:51.008672 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/3cc6301e-c3c2-4a62-af7b-122fbdcd5552-etc-ovs\") pod \"ovn-controller-ovs-xpwvp\" (UID: \"3cc6301e-c3c2-4a62-af7b-122fbdcd5552\") " pod="openstack/ovn-controller-ovs-xpwvp" Mar 19 12:33:51.009417 master-0 kubenswrapper[31830]: I0319 12:33:51.008752 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3cc6301e-c3c2-4a62-af7b-122fbdcd5552-var-log\") pod \"ovn-controller-ovs-xpwvp\" (UID: \"3cc6301e-c3c2-4a62-af7b-122fbdcd5552\") " pod="openstack/ovn-controller-ovs-xpwvp" Mar 19 12:33:51.010493 master-0 kubenswrapper[31830]: I0319 12:33:51.010444 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3cc6301e-c3c2-4a62-af7b-122fbdcd5552-scripts\") pod \"ovn-controller-ovs-xpwvp\" (UID: \"3cc6301e-c3c2-4a62-af7b-122fbdcd5552\") " 
pod="openstack/ovn-controller-ovs-xpwvp" Mar 19 12:33:51.011787 master-0 kubenswrapper[31830]: I0319 12:33:51.011742 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d516497-0523-41c4-a5cc-75fe94977ac3-scripts\") pod \"ovn-controller-kmq6z\" (UID: \"0d516497-0523-41c4-a5cc-75fe94977ac3\") " pod="openstack/ovn-controller-kmq6z" Mar 19 12:33:51.029449 master-0 kubenswrapper[31830]: I0319 12:33:51.029379 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d516497-0523-41c4-a5cc-75fe94977ac3-ovn-controller-tls-certs\") pod \"ovn-controller-kmq6z\" (UID: \"0d516497-0523-41c4-a5cc-75fe94977ac3\") " pod="openstack/ovn-controller-kmq6z" Mar 19 12:33:51.034104 master-0 kubenswrapper[31830]: I0319 12:33:51.034042 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tprk6\" (UniqueName: \"kubernetes.io/projected/3cc6301e-c3c2-4a62-af7b-122fbdcd5552-kube-api-access-tprk6\") pod \"ovn-controller-ovs-xpwvp\" (UID: \"3cc6301e-c3c2-4a62-af7b-122fbdcd5552\") " pod="openstack/ovn-controller-ovs-xpwvp" Mar 19 12:33:51.035980 master-0 kubenswrapper[31830]: I0319 12:33:51.035749 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjclg\" (UniqueName: \"kubernetes.io/projected/0d516497-0523-41c4-a5cc-75fe94977ac3-kube-api-access-wjclg\") pod \"ovn-controller-kmq6z\" (UID: \"0d516497-0523-41c4-a5cc-75fe94977ac3\") " pod="openstack/ovn-controller-kmq6z" Mar 19 12:33:51.209457 master-0 kubenswrapper[31830]: I0319 12:33:51.209376 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kmq6z" Mar 19 12:33:51.244956 master-0 kubenswrapper[31830]: I0319 12:33:51.244737 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-xpwvp" Mar 19 12:33:51.814473 master-0 kubenswrapper[31830]: I0319 12:33:51.814432 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1781bd31-cf4f-4488-8c71-cb00178fbcf3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^186be095-9493-4d00-b006-a18c41f62ab5\") pod \"rabbitmq-server-0\" (UID: \"aee036d1-9a03-42ac-9beb-ef7ecc09c98d\") " pod="openstack/rabbitmq-server-0" Mar 19 12:33:52.282050 master-0 kubenswrapper[31830]: I0319 12:33:52.281971 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 19 12:33:52.453107 master-0 kubenswrapper[31830]: E0319 12:33:52.453035 31830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading blob sha256:f34359107d7c91113ad0aea96cc8ffe07a5ac90a43812c97d7dd8c90d96b9243: Get \"https://cdn01.quay.io/quayio-production-s3/sha256/f3/f34359107d7c91113ad0aea96cc8ffe07a5ac90a43812c97d7dd8c90d96b9243?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTGR23ZTE6%2F20260319%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20260319T123339Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=0fa5c0f65288b1c90b4a8f6f7a3bd5223eebd99f19e0e103a8506e33073b0e09®ion=us-east-1&namespace=podified-antelope-centos9&username=openshift-release-dev+ocm_access_1b89217552bc42d1be3fb06a1aed001a&repo_name=openstack-neutron-server&akamai_signature=exp=1773924519~hmac=3c73292d9c95d148da14a51a3ffd99908388d76cdebbb96d51979995a90fd92b\": remote error: tls: internal error" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" Mar 19 12:33:52.453333 master-0 kubenswrapper[31830]: E0319 12:33:52.453225 31830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d7h64dhb8hb8h587h59ch664h5c7h56dh67ch657h657h5fbh5chd8h9hcfh645h594h59ch565h669h648h5d5h8ch597h58bhd5h6fh67dh589hd4q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tx4v9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000800000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-76849d6659-2hlm9_openstack(b384031f-cffb-4dff-b0ff-df09432a1451): ErrImagePull: reading blob 
sha256:f34359107d7c91113ad0aea96cc8ffe07a5ac90a43812c97d7dd8c90d96b9243: Get \"https://cdn01.quay.io/quayio-production-s3/sha256/f3/f34359107d7c91113ad0aea96cc8ffe07a5ac90a43812c97d7dd8c90d96b9243?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTGR23ZTE6%2F20260319%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20260319T123339Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=0fa5c0f65288b1c90b4a8f6f7a3bd5223eebd99f19e0e103a8506e33073b0e09&region=us-east-1&namespace=podified-antelope-centos9&username=openshift-release-dev+ocm_access_1b89217552bc42d1be3fb06a1aed001a&repo_name=openstack-neutron-server&akamai_signature=exp=1773924519~hmac=3c73292d9c95d148da14a51a3ffd99908388d76cdebbb96d51979995a90fd92b\": remote error: tls: internal error" logger="UnhandledError" Mar 19 12:33:52.454547 master-0 kubenswrapper[31830]: E0319 12:33:52.454475 31830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"reading blob sha256:f34359107d7c91113ad0aea96cc8ffe07a5ac90a43812c97d7dd8c90d96b9243: Get \\\"https://cdn01.quay.io/quayio-production-s3/sha256/f3/f34359107d7c91113ad0aea96cc8ffe07a5ac90a43812c97d7dd8c90d96b9243?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTGR23ZTE6%2F20260319%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20260319T123339Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=0fa5c0f65288b1c90b4a8f6f7a3bd5223eebd99f19e0e103a8506e33073b0e09&region=us-east-1&namespace=podified-antelope-centos9&username=openshift-release-dev+ocm_access_1b89217552bc42d1be3fb06a1aed001a&repo_name=openstack-neutron-server&akamai_signature=exp=1773924519~hmac=3c73292d9c95d148da14a51a3ffd99908388d76cdebbb96d51979995a90fd92b\\\": remote error: tls: internal error\"" pod="openstack/dnsmasq-dns-76849d6659-2hlm9" podUID="b384031f-cffb-4dff-b0ff-df09432a1451" Mar 19 12:33:52.564268 master-0 kubenswrapper[31830]: I0319 12:33:52.564118 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 19 12:33:52.569334 master-0 kubenswrapper[31830]: I0319 12:33:52.569293 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 19 12:33:52.571685 master-0 kubenswrapper[31830]: I0319 12:33:52.571656 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Mar 19 12:33:52.571856 master-0 kubenswrapper[31830]: I0319 12:33:52.571815 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Mar 19 12:33:52.571856 master-0 kubenswrapper[31830]: I0319 12:33:52.571833 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Mar 19 12:33:52.586596 master-0 kubenswrapper[31830]: I0319 12:33:52.586537 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 19 12:33:52.846847 master-0 kubenswrapper[31830]: I0319 12:33:52.846668 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a77970ce-b081-4823-9337-4c37e16d6e2a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^fed87314-9789-4028-b92e-aa8e1af629ec\") pod \"openstack-cell1-galera-0\" (UID: \"c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48\") " pod="openstack/openstack-cell1-galera-0" Mar 19 12:33:52.846847 master-0 kubenswrapper[31830]: I0319 12:33:52.846747 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48\") " pod="openstack/openstack-cell1-galera-0" Mar 19 12:33:52.846847 master-0 kubenswrapper[31830]: I0319 12:33:52.846825 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlb59\" (UniqueName: \"kubernetes.io/projected/c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48-kube-api-access-tlb59\") pod \"openstack-cell1-galera-0\" (UID: \"c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48\") " pod="openstack/openstack-cell1-galera-0" Mar 19 12:33:52.847510 master-0 kubenswrapper[31830]: I0319 12:33:52.846859 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48\") " pod="openstack/openstack-cell1-galera-0" Mar 19 12:33:52.847510 master-0 kubenswrapper[31830]: I0319 12:33:52.846878 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48\") " pod="openstack/openstack-cell1-galera-0" Mar 19 12:33:52.847510 master-0 kubenswrapper[31830]: I0319 12:33:52.846916 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48\") " pod="openstack/openstack-cell1-galera-0" Mar 19 12:33:52.847510 master-0 kubenswrapper[31830]: I0319 12:33:52.846939 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48\") " pod="openstack/openstack-cell1-galera-0" Mar 19 12:33:52.847510 master-0 kubenswrapper[31830]: I0319 12:33:52.846956 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48\") " pod="openstack/openstack-cell1-galera-0" Mar 19 12:33:52.948673 master-0 kubenswrapper[31830]: I0319 12:33:52.948613 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48\") " pod="openstack/openstack-cell1-galera-0" Mar 19 12:33:52.948944 master-0 kubenswrapper[31830]: I0319 12:33:52.948761 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlb59\" (UniqueName: \"kubernetes.io/projected/c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48-kube-api-access-tlb59\") pod \"openstack-cell1-galera-0\" (UID: \"c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48\") " pod="openstack/openstack-cell1-galera-0" Mar 19 12:33:52.948944 master-0 kubenswrapper[31830]: I0319 12:33:52.948807 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48\") " pod="openstack/openstack-cell1-galera-0" Mar 19 12:33:52.948944 master-0 kubenswrapper[31830]: I0319 12:33:52.948824 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48\") " pod="openstack/openstack-cell1-galera-0" Mar 19 12:33:52.948944 master-0 kubenswrapper[31830]: I0319 12:33:52.948857 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48\") " pod="openstack/openstack-cell1-galera-0" Mar 19 12:33:52.948944 master-0 kubenswrapper[31830]: I0319 12:33:52.948878 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48\") " pod="openstack/openstack-cell1-galera-0" Mar 19 12:33:52.948944 master-0 kubenswrapper[31830]: I0319 12:33:52.948895 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48\") " pod="openstack/openstack-cell1-galera-0" Mar 19 12:33:52.949701 master-0 kubenswrapper[31830]: I0319 12:33:52.949679 31830 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48\") " pod="openstack/openstack-cell1-galera-0" Mar 19 12:33:52.949847 master-0 kubenswrapper[31830]: I0319 12:33:52.949818 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48\") " pod="openstack/openstack-cell1-galera-0" Mar 19 12:33:52.951297 master-0 kubenswrapper[31830]: I0319 12:33:52.951267 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48\") " pod="openstack/openstack-cell1-galera-0" Mar 19 12:33:52.953743 master-0 kubenswrapper[31830]: I0319 12:33:52.953716 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48\") " pod="openstack/openstack-cell1-galera-0" Mar 19 12:33:52.955917 master-0 kubenswrapper[31830]: I0319 12:33:52.955879 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48\") " pod="openstack/openstack-cell1-galera-0" Mar 19 12:33:52.976753 master-0 kubenswrapper[31830]: I0319 12:33:52.973473 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48\") " pod="openstack/openstack-cell1-galera-0" Mar 19 12:33:52.993249 master-0 kubenswrapper[31830]: I0319 12:33:52.993051 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlb59\" (UniqueName: \"kubernetes.io/projected/c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48-kube-api-access-tlb59\") pod \"openstack-cell1-galera-0\" (UID: \"c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48\") " pod="openstack/openstack-cell1-galera-0" Mar 19 12:33:53.051129 master-0 kubenswrapper[31830]: I0319 12:33:53.051084 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a77970ce-b081-4823-9337-4c37e16d6e2a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^fed87314-9789-4028-b92e-aa8e1af629ec\") pod \"openstack-cell1-galera-0\" (UID: \"c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48\") " pod="openstack/openstack-cell1-galera-0" Mar 19 12:33:53.053072 master-0 kubenswrapper[31830]: I0319 12:33:53.053036 31830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 19 12:33:53.053143 master-0 kubenswrapper[31830]: I0319 12:33:53.053084 31830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a77970ce-b081-4823-9337-4c37e16d6e2a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^fed87314-9789-4028-b92e-aa8e1af629ec\") pod \"openstack-cell1-galera-0\" (UID: \"c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/de0e48e6ab052d8cfd943ac0a3b48962725c680866157b601ae126e3c711a251/globalmount\"" pod="openstack/openstack-cell1-galera-0" Mar 19 12:33:53.097970 master-0 kubenswrapper[31830]: I0319 12:33:53.097364 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3c3c418b-8e35-4ca0-85a5-d74cd7036430\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f9132448-54a0-45dc-87d6-a65a8d3b93aa\") pod \"openstack-galera-0\" (UID: \"ae148a74-f9ec-4ee8-be58-c14c466f4b9f\") " pod="openstack/openstack-galera-0" Mar 19 12:33:53.307333 master-0 kubenswrapper[31830]: I0319 12:33:53.307273 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 19 12:33:53.470371 master-0 kubenswrapper[31830]: E0319 12:33:53.470318 31830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51\\\"\"" pod="openstack/dnsmasq-dns-76849d6659-2hlm9" podUID="b384031f-cffb-4dff-b0ff-df09432a1451" Mar 19 12:33:54.127506 master-0 kubenswrapper[31830]: I0319 12:33:54.124752 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a77970ce-b081-4823-9337-4c37e16d6e2a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^fed87314-9789-4028-b92e-aa8e1af629ec\") pod \"openstack-cell1-galera-0\" (UID: \"c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48\") " pod="openstack/openstack-cell1-galera-0" Mar 19 12:33:54.406266 master-0 kubenswrapper[31830]: I0319 12:33:54.406119 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 19 12:33:55.377200 master-0 kubenswrapper[31830]: I0319 12:33:55.374069 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 19 12:33:55.377865 master-0 kubenswrapper[31830]: I0319 12:33:55.377747 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 19 12:33:55.387170 master-0 kubenswrapper[31830]: I0319 12:33:55.380723 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Mar 19 12:33:55.387170 master-0 kubenswrapper[31830]: I0319 12:33:55.380846 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Mar 19 12:33:55.387170 master-0 kubenswrapper[31830]: I0319 12:33:55.381002 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Mar 19 12:33:55.387170 master-0 kubenswrapper[31830]: I0319 12:33:55.381185 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Mar 19 12:33:55.417772 master-0 kubenswrapper[31830]: I0319 12:33:55.407404 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 19 12:33:55.519819 master-0 kubenswrapper[31830]: I0319 12:33:55.519742 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17d9c5b7-67e7-4189-9917-722938b3a343-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"17d9c5b7-67e7-4189-9917-722938b3a343\") " pod="openstack/ovsdbserver-sb-0" Mar 19 12:33:55.520067 master-0 kubenswrapper[31830]: I0319 12:33:55.519852 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17d9c5b7-67e7-4189-9917-722938b3a343-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"17d9c5b7-67e7-4189-9917-722938b3a343\") " pod="openstack/ovsdbserver-sb-0" Mar 19 12:33:55.520067 master-0 kubenswrapper[31830]: I0319 12:33:55.519922 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvwhb\" (UniqueName: \"kubernetes.io/projected/17d9c5b7-67e7-4189-9917-722938b3a343-kube-api-access-wvwhb\") pod \"ovsdbserver-sb-0\" (UID: \"17d9c5b7-67e7-4189-9917-722938b3a343\") " pod="openstack/ovsdbserver-sb-0" Mar 19 12:33:55.520067 master-0 kubenswrapper[31830]: I0319 12:33:55.519955 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17d9c5b7-67e7-4189-9917-722938b3a343-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"17d9c5b7-67e7-4189-9917-722938b3a343\") " pod="openstack/ovsdbserver-sb-0" Mar 19 12:33:55.520067 master-0 kubenswrapper[31830]: I0319 12:33:55.519976 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17d9c5b7-67e7-4189-9917-722938b3a343-config\") pod \"ovsdbserver-sb-0\" (UID: \"17d9c5b7-67e7-4189-9917-722938b3a343\") " pod="openstack/ovsdbserver-sb-0" Mar 19 12:33:55.520067 master-0 kubenswrapper[31830]: I0319 12:33:55.520012 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-03043fc0-cbff-40bd-9a9f-2c41402febe1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ce69184e-6beb-42cf-af02-8c47c5a4ce6e\") pod \"ovsdbserver-sb-0\" (UID: \"17d9c5b7-67e7-4189-9917-722938b3a343\") " pod="openstack/ovsdbserver-sb-0" Mar 19 12:33:55.520067 master-0 kubenswrapper[31830]: I0319 12:33:55.520027 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/17d9c5b7-67e7-4189-9917-722938b3a343-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"17d9c5b7-67e7-4189-9917-722938b3a343\") " pod="openstack/ovsdbserver-sb-0" Mar 19 12:33:55.520067 master-0 kubenswrapper[31830]: I0319 12:33:55.520051 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/17d9c5b7-67e7-4189-9917-722938b3a343-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"17d9c5b7-67e7-4189-9917-722938b3a343\") " pod="openstack/ovsdbserver-sb-0" Mar 19 12:33:55.621769 master-0 kubenswrapper[31830]: I0319 12:33:55.621688 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17d9c5b7-67e7-4189-9917-722938b3a343-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"17d9c5b7-67e7-4189-9917-722938b3a343\") " pod="openstack/ovsdbserver-sb-0" Mar 19 12:33:55.621769 master-0 kubenswrapper[31830]: I0319 12:33:55.621782 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvwhb\" (UniqueName: \"kubernetes.io/projected/17d9c5b7-67e7-4189-9917-722938b3a343-kube-api-access-wvwhb\") pod \"ovsdbserver-sb-0\" (UID: \"17d9c5b7-67e7-4189-9917-722938b3a343\") " pod="openstack/ovsdbserver-sb-0" Mar 19 12:33:55.622080 master-0 kubenswrapper[31830]: I0319 12:33:55.621835 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17d9c5b7-67e7-4189-9917-722938b3a343-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"17d9c5b7-67e7-4189-9917-722938b3a343\") " pod="openstack/ovsdbserver-sb-0" Mar 19 12:33:55.622080 master-0 kubenswrapper[31830]: I0319 12:33:55.621858 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17d9c5b7-67e7-4189-9917-722938b3a343-config\") pod \"ovsdbserver-sb-0\" (UID: \"17d9c5b7-67e7-4189-9917-722938b3a343\") " pod="openstack/ovsdbserver-sb-0" Mar 19 12:33:55.622080 master-0 kubenswrapper[31830]: I0319 12:33:55.621896 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-03043fc0-cbff-40bd-9a9f-2c41402febe1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ce69184e-6beb-42cf-af02-8c47c5a4ce6e\") pod \"ovsdbserver-sb-0\" (UID: \"17d9c5b7-67e7-4189-9917-722938b3a343\") " pod="openstack/ovsdbserver-sb-0" Mar 19 12:33:55.622080 master-0 kubenswrapper[31830]: I0319 12:33:55.621914 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/17d9c5b7-67e7-4189-9917-722938b3a343-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"17d9c5b7-67e7-4189-9917-722938b3a343\") " pod="openstack/ovsdbserver-sb-0" Mar 19 12:33:55.622280 master-0 kubenswrapper[31830]: I0319 12:33:55.622215 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/17d9c5b7-67e7-4189-9917-722938b3a343-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"17d9c5b7-67e7-4189-9917-722938b3a343\") " pod="openstack/ovsdbserver-sb-0" Mar 19 12:33:55.622426 master-0 kubenswrapper[31830]: I0319 12:33:55.622392 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/17d9c5b7-67e7-4189-9917-722938b3a343-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"17d9c5b7-67e7-4189-9917-722938b3a343\") " pod="openstack/ovsdbserver-sb-0" Mar 19 12:33:55.623001 master-0 kubenswrapper[31830]: I0319 12:33:55.622945 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17d9c5b7-67e7-4189-9917-722938b3a343-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"17d9c5b7-67e7-4189-9917-722938b3a343\") " pod="openstack/ovsdbserver-sb-0" Mar 19 12:33:55.623001 master-0 kubenswrapper[31830]: I0319 12:33:55.622953 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17d9c5b7-67e7-4189-9917-722938b3a343-config\") pod \"ovsdbserver-sb-0\" (UID: \"17d9c5b7-67e7-4189-9917-722938b3a343\") " pod="openstack/ovsdbserver-sb-0" Mar 19 12:33:55.624353 master-0 kubenswrapper[31830]: I0319 12:33:55.623441 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/17d9c5b7-67e7-4189-9917-722938b3a343-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"17d9c5b7-67e7-4189-9917-722938b3a343\") " pod="openstack/ovsdbserver-sb-0" Mar 19 12:33:55.626758 master-0 kubenswrapper[31830]: I0319 12:33:55.625472 31830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 19 12:33:55.626758 master-0 kubenswrapper[31830]: I0319 12:33:55.625527 31830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-03043fc0-cbff-40bd-9a9f-2c41402febe1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ce69184e-6beb-42cf-af02-8c47c5a4ce6e\") pod \"ovsdbserver-sb-0\" (UID: \"17d9c5b7-67e7-4189-9917-722938b3a343\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/1c35eb5b1d641113991861b1fe74c91b28c45e25cfd23c6b75ada4a9b422d396/globalmount\"" pod="openstack/ovsdbserver-sb-0" Mar 19 12:33:55.626975 master-0 kubenswrapper[31830]: I0319 12:33:55.626870 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/17d9c5b7-67e7-4189-9917-722938b3a343-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"17d9c5b7-67e7-4189-9917-722938b3a343\") " pod="openstack/ovsdbserver-sb-0" Mar 19 12:33:55.641993 master-0 kubenswrapper[31830]: I0319 12:33:55.638350 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17d9c5b7-67e7-4189-9917-722938b3a343-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"17d9c5b7-67e7-4189-9917-722938b3a343\") " pod="openstack/ovsdbserver-sb-0" Mar 19 12:33:55.641993 master-0 kubenswrapper[31830]: I0319 12:33:55.641242 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17d9c5b7-67e7-4189-9917-722938b3a343-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"17d9c5b7-67e7-4189-9917-722938b3a343\") " pod="openstack/ovsdbserver-sb-0" Mar 19 12:33:55.646159 master-0 kubenswrapper[31830]: I0319 12:33:55.643675 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvwhb\" (UniqueName: \"kubernetes.io/projected/17d9c5b7-67e7-4189-9917-722938b3a343-kube-api-access-wvwhb\") pod \"ovsdbserver-sb-0\" (UID: \"17d9c5b7-67e7-4189-9917-722938b3a343\") " 
pod="openstack/ovsdbserver-sb-0" Mar 19 12:33:55.750827 master-0 kubenswrapper[31830]: I0319 12:33:55.744423 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 19 12:33:55.763319 master-0 kubenswrapper[31830]: I0319 12:33:55.763249 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 19 12:33:55.775070 master-0 kubenswrapper[31830]: I0319 12:33:55.775015 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 19 12:33:55.775286 master-0 kubenswrapper[31830]: I0319 12:33:55.775227 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 19 12:33:55.775377 master-0 kubenswrapper[31830]: I0319 12:33:55.775356 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 19 12:33:55.803915 master-0 kubenswrapper[31830]: I0319 12:33:55.796962 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 19 12:33:55.935737 master-0 kubenswrapper[31830]: I0319 12:33:55.935399 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63ea9eeb-9288-44f6-82fb-70ccfb935857-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"63ea9eeb-9288-44f6-82fb-70ccfb935857\") " pod="openstack/ovsdbserver-nb-0" Mar 19 12:33:55.935853 master-0 kubenswrapper[31830]: I0319 12:33:55.935833 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76pkp\" (UniqueName: \"kubernetes.io/projected/63ea9eeb-9288-44f6-82fb-70ccfb935857-kube-api-access-76pkp\") pod \"ovsdbserver-nb-0\" (UID: \"63ea9eeb-9288-44f6-82fb-70ccfb935857\") " pod="openstack/ovsdbserver-nb-0" Mar 19 12:33:55.935909 master-0 kubenswrapper[31830]: I0319 12:33:55.935892 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63ea9eeb-9288-44f6-82fb-70ccfb935857-config\") pod \"ovsdbserver-nb-0\" (UID: \"63ea9eeb-9288-44f6-82fb-70ccfb935857\") " pod="openstack/ovsdbserver-nb-0" Mar 19 12:33:55.936741 master-0 kubenswrapper[31830]: I0319 12:33:55.935989 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/63ea9eeb-9288-44f6-82fb-70ccfb935857-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"63ea9eeb-9288-44f6-82fb-70ccfb935857\") " pod="openstack/ovsdbserver-nb-0" Mar 19 12:33:55.936741 master-0 kubenswrapper[31830]: I0319 12:33:55.936105 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ea9eeb-9288-44f6-82fb-70ccfb935857-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"63ea9eeb-9288-44f6-82fb-70ccfb935857\") " pod="openstack/ovsdbserver-nb-0" Mar 19 12:33:55.936741 master-0 kubenswrapper[31830]: I0319 12:33:55.936198 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/63ea9eeb-9288-44f6-82fb-70ccfb935857-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"63ea9eeb-9288-44f6-82fb-70ccfb935857\") " pod="openstack/ovsdbserver-nb-0" Mar 19 12:33:55.936741 master-0 
kubenswrapper[31830]: I0319 12:33:55.936327 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4bd29a7f-9107-404e-a1b8-20cc6cb830af\" (UniqueName: \"kubernetes.io/csi/topolvm.io^461da625-7cce-4867-91f1-c65272fbe894\") pod \"ovsdbserver-nb-0\" (UID: \"63ea9eeb-9288-44f6-82fb-70ccfb935857\") " pod="openstack/ovsdbserver-nb-0" Mar 19 12:33:55.936741 master-0 kubenswrapper[31830]: I0319 12:33:55.936363 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/63ea9eeb-9288-44f6-82fb-70ccfb935857-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"63ea9eeb-9288-44f6-82fb-70ccfb935857\") " pod="openstack/ovsdbserver-nb-0" Mar 19 12:33:56.038286 master-0 kubenswrapper[31830]: I0319 12:33:56.038237 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4bd29a7f-9107-404e-a1b8-20cc6cb830af\" (UniqueName: \"kubernetes.io/csi/topolvm.io^461da625-7cce-4867-91f1-c65272fbe894\") pod \"ovsdbserver-nb-0\" (UID: \"63ea9eeb-9288-44f6-82fb-70ccfb935857\") " pod="openstack/ovsdbserver-nb-0" Mar 19 12:33:56.038623 master-0 kubenswrapper[31830]: I0319 12:33:56.038560 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/63ea9eeb-9288-44f6-82fb-70ccfb935857-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"63ea9eeb-9288-44f6-82fb-70ccfb935857\") " pod="openstack/ovsdbserver-nb-0" Mar 19 12:33:56.038770 master-0 kubenswrapper[31830]: I0319 12:33:56.038749 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63ea9eeb-9288-44f6-82fb-70ccfb935857-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"63ea9eeb-9288-44f6-82fb-70ccfb935857\") " pod="openstack/ovsdbserver-nb-0" Mar 19 12:33:56.039645 master-0 kubenswrapper[31830]: I0319 12:33:56.039627 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76pkp\" (UniqueName: \"kubernetes.io/projected/63ea9eeb-9288-44f6-82fb-70ccfb935857-kube-api-access-76pkp\") pod \"ovsdbserver-nb-0\" (UID: \"63ea9eeb-9288-44f6-82fb-70ccfb935857\") " pod="openstack/ovsdbserver-nb-0" Mar 19 12:33:56.039920 master-0 kubenswrapper[31830]: I0319 12:33:56.039903 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63ea9eeb-9288-44f6-82fb-70ccfb935857-config\") pod \"ovsdbserver-nb-0\" (UID: \"63ea9eeb-9288-44f6-82fb-70ccfb935857\") " pod="openstack/ovsdbserver-nb-0" Mar 19 12:33:56.040072 master-0 kubenswrapper[31830]: I0319 12:33:56.040056 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/63ea9eeb-9288-44f6-82fb-70ccfb935857-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"63ea9eeb-9288-44f6-82fb-70ccfb935857\") " pod="openstack/ovsdbserver-nb-0" Mar 19 12:33:56.040170 master-0 kubenswrapper[31830]: I0319 12:33:56.040156 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ea9eeb-9288-44f6-82fb-70ccfb935857-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"63ea9eeb-9288-44f6-82fb-70ccfb935857\") " pod="openstack/ovsdbserver-nb-0" Mar 19 12:33:56.040258 master-0 kubenswrapper[31830]: I0319 12:33:56.040244 31830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/63ea9eeb-9288-44f6-82fb-70ccfb935857-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"63ea9eeb-9288-44f6-82fb-70ccfb935857\") " pod="openstack/ovsdbserver-nb-0" Mar 19 12:33:56.040376 master-0 kubenswrapper[31830]: I0319 12:33:56.040344 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63ea9eeb-9288-44f6-82fb-70ccfb935857-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"63ea9eeb-9288-44f6-82fb-70ccfb935857\") " pod="openstack/ovsdbserver-nb-0" Mar 19 12:33:56.040432 master-0 kubenswrapper[31830]: I0319 12:33:56.040186 31830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 19 12:33:56.040432 master-0 kubenswrapper[31830]: I0319 12:33:56.040418 31830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4bd29a7f-9107-404e-a1b8-20cc6cb830af\" (UniqueName: \"kubernetes.io/csi/topolvm.io^461da625-7cce-4867-91f1-c65272fbe894\") pod \"ovsdbserver-nb-0\" (UID: \"63ea9eeb-9288-44f6-82fb-70ccfb935857\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/36057c2a2c4b591dec6b80a5aa02dafc1d9c9e91e1160e03158dd3272069021f/globalmount\"" pod="openstack/ovsdbserver-nb-0" Mar 19 12:33:56.040600 master-0 kubenswrapper[31830]: I0319 12:33:56.040542 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/63ea9eeb-9288-44f6-82fb-70ccfb935857-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"63ea9eeb-9288-44f6-82fb-70ccfb935857\") " pod="openstack/ovsdbserver-nb-0" Mar 19 12:33:56.040734 master-0 kubenswrapper[31830]: I0319 12:33:56.040703 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63ea9eeb-9288-44f6-82fb-70ccfb935857-config\") pod \"ovsdbserver-nb-0\" (UID: \"63ea9eeb-9288-44f6-82fb-70ccfb935857\") " pod="openstack/ovsdbserver-nb-0" Mar 19 12:33:56.046086 master-0 kubenswrapper[31830]: I0319 12:33:56.044394 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/63ea9eeb-9288-44f6-82fb-70ccfb935857-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"63ea9eeb-9288-44f6-82fb-70ccfb935857\") " pod="openstack/ovsdbserver-nb-0" Mar 19 12:33:56.046086 master-0 kubenswrapper[31830]: I0319 12:33:56.044513 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/63ea9eeb-9288-44f6-82fb-70ccfb935857-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"63ea9eeb-9288-44f6-82fb-70ccfb935857\") " pod="openstack/ovsdbserver-nb-0" Mar 19 12:33:56.046980 master-0 kubenswrapper[31830]: I0319 12:33:56.046950 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ea9eeb-9288-44f6-82fb-70ccfb935857-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"63ea9eeb-9288-44f6-82fb-70ccfb935857\") " pod="openstack/ovsdbserver-nb-0" Mar 19 12:33:56.061568 master-0 kubenswrapper[31830]: I0319 12:33:56.061517 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76pkp\" (UniqueName: \"kubernetes.io/projected/63ea9eeb-9288-44f6-82fb-70ccfb935857-kube-api-access-76pkp\") pod 
\"ovsdbserver-nb-0\" (UID: \"63ea9eeb-9288-44f6-82fb-70ccfb935857\") " pod="openstack/ovsdbserver-nb-0" Mar 19 12:33:56.964843 master-0 kubenswrapper[31830]: I0319 12:33:56.964764 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-03043fc0-cbff-40bd-9a9f-2c41402febe1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ce69184e-6beb-42cf-af02-8c47c5a4ce6e\") pod \"ovsdbserver-sb-0\" (UID: \"17d9c5b7-67e7-4189-9917-722938b3a343\") " pod="openstack/ovsdbserver-sb-0" Mar 19 12:33:57.198580 master-0 kubenswrapper[31830]: I0319 12:33:57.198457 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 19 12:33:58.259376 master-0 kubenswrapper[31830]: I0319 12:33:58.259063 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4bd29a7f-9107-404e-a1b8-20cc6cb830af\" (UniqueName: \"kubernetes.io/csi/topolvm.io^461da625-7cce-4867-91f1-c65272fbe894\") pod \"ovsdbserver-nb-0\" (UID: \"63ea9eeb-9288-44f6-82fb-70ccfb935857\") " pod="openstack/ovsdbserver-nb-0" Mar 19 12:33:58.483545 master-0 kubenswrapper[31830]: I0319 12:33:58.483456 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 19 12:34:01.120452 master-0 kubenswrapper[31830]: I0319 12:34:01.118499 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 19 12:34:01.544350 master-0 kubenswrapper[31830]: I0319 12:34:01.544303 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f3cbc6ce-25bb-4672-bcf9-813c973d8bcf","Type":"ContainerStarted","Data":"701b695ba8fc1e6d806d6420465ed6a9bae6444d9f12764ac25327a858ce0f7a"} Mar 19 12:34:01.546715 master-0 kubenswrapper[31830]: I0319 12:34:01.546676 31830 generic.go:334] "Generic (PLEG): container finished" podID="5bc881ed-8448-4279-97e5-cb834cab7a64" containerID="2f87b8c68ab857eef8cf34308ba968c1014a0a051c2ab11f727223874338b6a9" exitCode=0 Mar 19 12:34:01.546832 master-0 kubenswrapper[31830]: I0319 12:34:01.546721 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-sxht2" event={"ID":"5bc881ed-8448-4279-97e5-cb834cab7a64","Type":"ContainerDied","Data":"2f87b8c68ab857eef8cf34308ba968c1014a0a051c2ab11f727223874338b6a9"} Mar 19 12:34:01.551120 master-0 kubenswrapper[31830]: I0319 12:34:01.550650 31830 generic.go:334] "Generic (PLEG): container finished" podID="f74e08c8-33e6-4926-9d30-ffdd77005bcf" containerID="d363254ed41d97b72d038aefc69798e8edb3ff23add6ea346fb0b53e1a23bc52" exitCode=0 Mar 19 12:34:01.551120 master-0 kubenswrapper[31830]: I0319 12:34:01.550758 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476fd89bc-8wx24" event={"ID":"f74e08c8-33e6-4926-9d30-ffdd77005bcf","Type":"ContainerDied","Data":"d363254ed41d97b72d038aefc69798e8edb3ff23add6ea346fb0b53e1a23bc52"} Mar 19 12:34:01.553530 master-0 kubenswrapper[31830]: I0319 12:34:01.553498 31830 generic.go:334] "Generic (PLEG): container finished" podID="6c712d63-eb7d-40d5-9f5f-05124cac728f" containerID="7f1f41f3800d43b1b8bd3af318c82cb38196e25a754615efb7cfde88dacfaaae" exitCode=0 Mar 19 12:34:01.553661 master-0 kubenswrapper[31830]: I0319 12:34:01.553558 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685c76cf85-vlql9" event={"ID":"6c712d63-eb7d-40d5-9f5f-05124cac728f","Type":"ContainerDied","Data":"7f1f41f3800d43b1b8bd3af318c82cb38196e25a754615efb7cfde88dacfaaae"} Mar 19 12:34:01.625879 master-0 
kubenswrapper[31830]: I0319 12:34:01.625823 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 19 12:34:01.757085 master-0 kubenswrapper[31830]: I0319 12:34:01.726780 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 19 12:34:01.961966 master-0 kubenswrapper[31830]: W0319 12:34:01.961900 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3cc6301e_c3c2_4a62_af7b_122fbdcd5552.slice/crio-52416c8f828287b5e5a67029f154764da18bd5dd0669c1a18675e14d9ee1f4ce WatchSource:0}: Error finding container 52416c8f828287b5e5a67029f154764da18bd5dd0669c1a18675e14d9ee1f4ce: Status 404 returned error can't find the container with id 52416c8f828287b5e5a67029f154764da18bd5dd0669c1a18675e14d9ee1f4ce Mar 19 12:34:01.973225 master-0 kubenswrapper[31830]: I0319 12:34:01.973171 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-xpwvp"] Mar 19 12:34:02.158422 master-0 kubenswrapper[31830]: I0319 12:34:02.158268 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kmq6z"] Mar 19 12:34:02.180973 master-0 kubenswrapper[31830]: I0319 12:34:02.180921 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 19 12:34:02.192285 master-0 kubenswrapper[31830]: I0319 12:34:02.192227 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 19 12:34:02.212146 master-0 kubenswrapper[31830]: W0319 12:34:02.212096 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae148a74_f9ec_4ee8_be58_c14c466f4b9f.slice/crio-41a6fdf048dcdfab9bedd80b5dab4f5f0f4cfceadd31c613377ac4c85e5cbd69 WatchSource:0}: Error finding container 41a6fdf048dcdfab9bedd80b5dab4f5f0f4cfceadd31c613377ac4c85e5cbd69: Status 404 returned error can't find the container with id 41a6fdf048dcdfab9bedd80b5dab4f5f0f4cfceadd31c613377ac4c85e5cbd69 Mar 19 12:34:02.542527 master-0 kubenswrapper[31830]: I0319 12:34:02.542471 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8476fd89bc-8wx24" Mar 19 12:34:02.550054 master-0 kubenswrapper[31830]: I0319 12:34:02.549994 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685c76cf85-vlql9" Mar 19 12:34:02.581840 master-0 kubenswrapper[31830]: I0319 12:34:02.573662 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8476fd89bc-8wx24" Mar 19 12:34:02.581840 master-0 kubenswrapper[31830]: I0319 12:34:02.573709 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476fd89bc-8wx24" event={"ID":"f74e08c8-33e6-4926-9d30-ffdd77005bcf","Type":"ContainerDied","Data":"d69f270a5314a71bfab67735ab530e2051558c33da3d9e7c45b31e95bf2cf38b"} Mar 19 12:34:02.581840 master-0 kubenswrapper[31830]: I0319 12:34:02.573774 31830 scope.go:117] "RemoveContainer" containerID="d363254ed41d97b72d038aefc69798e8edb3ff23add6ea346fb0b53e1a23bc52" Mar 19 12:34:02.581840 master-0 kubenswrapper[31830]: I0319 12:34:02.574489 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 19 12:34:02.581840 master-0 kubenswrapper[31830]: I0319 12:34:02.575200 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kmq6z" event={"ID":"0d516497-0523-41c4-a5cc-75fe94977ac3","Type":"ContainerStarted","Data":"ff1240096c56d3c7001412d336d86409e77241dc126bb4edec7cd22c08e4f507"} Mar 19 12:34:02.581840 master-0 kubenswrapper[31830]: I0319 12:34:02.581444 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xpwvp" event={"ID":"3cc6301e-c3c2-4a62-af7b-122fbdcd5552","Type":"ContainerStarted","Data":"52416c8f828287b5e5a67029f154764da18bd5dd0669c1a18675e14d9ee1f4ce"} Mar 19 12:34:02.589073 master-0 kubenswrapper[31830]: I0319 12:34:02.589032 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685c76cf85-vlql9" Mar 19 12:34:02.589204 master-0 kubenswrapper[31830]: I0319 12:34:02.589032 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685c76cf85-vlql9" event={"ID":"6c712d63-eb7d-40d5-9f5f-05124cac728f","Type":"ContainerDied","Data":"658c4068878569a13c5d02b6dc0d8e8d94abfd9eb7f0f81d8e8be0fd54cdb17a"} Mar 19 12:34:02.591563 master-0 kubenswrapper[31830]: I0319 12:34:02.591531 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ae148a74-f9ec-4ee8-be58-c14c466f4b9f","Type":"ContainerStarted","Data":"41a6fdf048dcdfab9bedd80b5dab4f5f0f4cfceadd31c613377ac4c85e5cbd69"} Mar 19 12:34:02.593646 master-0 kubenswrapper[31830]: I0319 12:34:02.593489 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48","Type":"ContainerStarted","Data":"84a5cc7d5984562f3e1319608a6cbf7da6508f3aecb4374312588d2581458bd3"} Mar 19 12:34:02.595832 master-0 kubenswrapper[31830]: I0319 12:34:02.595771 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e496a21c-f671-402f-a15c-911b063428c5","Type":"ContainerStarted","Data":"70c1015822c4d7ce97020546426df276be3896add037a939deb695194867b683"} Mar 19 12:34:02.597124 master-0 kubenswrapper[31830]: I0319 12:34:02.597083 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"aee036d1-9a03-42ac-9beb-ef7ecc09c98d","Type":"ContainerStarted","Data":"96642820ddb7dd13774daab37f49280cee2665b5fef455339bb5b57dcfd6c7be"} Mar 19 12:34:02.599419 master-0 kubenswrapper[31830]: I0319 12:34:02.599393 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-sxht2" event={"ID":"5bc881ed-8448-4279-97e5-cb834cab7a64","Type":"ContainerStarted","Data":"fd83cfd0a5030aa8964d4c66ff8815628c26f20ae2479ce72f5d922d8ec37a7e"} Mar 19 
12:34:02.600265 master-0 kubenswrapper[31830]: I0319 12:34:02.600237 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6ff8fd9d5c-sxht2" Mar 19 12:34:02.613283 master-0 kubenswrapper[31830]: I0319 12:34:02.613104 31830 scope.go:117] "RemoveContainer" containerID="7f1f41f3800d43b1b8bd3af318c82cb38196e25a754615efb7cfde88dacfaaae" Mar 19 12:34:02.645606 master-0 kubenswrapper[31830]: I0319 12:34:02.644013 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4xcs\" (UniqueName: \"kubernetes.io/projected/f74e08c8-33e6-4926-9d30-ffdd77005bcf-kube-api-access-s4xcs\") pod \"f74e08c8-33e6-4926-9d30-ffdd77005bcf\" (UID: \"f74e08c8-33e6-4926-9d30-ffdd77005bcf\") " Mar 19 12:34:02.645606 master-0 kubenswrapper[31830]: I0319 12:34:02.644085 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c712d63-eb7d-40d5-9f5f-05124cac728f-config\") pod \"6c712d63-eb7d-40d5-9f5f-05124cac728f\" (UID: \"6c712d63-eb7d-40d5-9f5f-05124cac728f\") " Mar 19 12:34:02.645606 master-0 kubenswrapper[31830]: I0319 12:34:02.644131 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f74e08c8-33e6-4926-9d30-ffdd77005bcf-dns-svc\") pod \"f74e08c8-33e6-4926-9d30-ffdd77005bcf\" (UID: \"f74e08c8-33e6-4926-9d30-ffdd77005bcf\") " Mar 19 12:34:02.645606 master-0 kubenswrapper[31830]: I0319 12:34:02.644231 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f74e08c8-33e6-4926-9d30-ffdd77005bcf-config\") pod \"f74e08c8-33e6-4926-9d30-ffdd77005bcf\" (UID: \"f74e08c8-33e6-4926-9d30-ffdd77005bcf\") " Mar 19 12:34:02.645606 master-0 kubenswrapper[31830]: I0319 12:34:02.644289 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khxws\" (UniqueName: \"kubernetes.io/projected/6c712d63-eb7d-40d5-9f5f-05124cac728f-kube-api-access-khxws\") pod \"6c712d63-eb7d-40d5-9f5f-05124cac728f\" (UID: \"6c712d63-eb7d-40d5-9f5f-05124cac728f\") " Mar 19 12:34:02.662464 master-0 kubenswrapper[31830]: I0319 12:34:02.662124 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f74e08c8-33e6-4926-9d30-ffdd77005bcf-kube-api-access-s4xcs" (OuterVolumeSpecName: "kube-api-access-s4xcs") pod "f74e08c8-33e6-4926-9d30-ffdd77005bcf" (UID: "f74e08c8-33e6-4926-9d30-ffdd77005bcf"). InnerVolumeSpecName "kube-api-access-s4xcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:34:02.665196 master-0 kubenswrapper[31830]: I0319 12:34:02.664971 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c712d63-eb7d-40d5-9f5f-05124cac728f-kube-api-access-khxws" (OuterVolumeSpecName: "kube-api-access-khxws") pod "6c712d63-eb7d-40d5-9f5f-05124cac728f" (UID: "6c712d63-eb7d-40d5-9f5f-05124cac728f"). InnerVolumeSpecName "kube-api-access-khxws". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:34:02.673457 master-0 kubenswrapper[31830]: I0319 12:34:02.673388 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f74e08c8-33e6-4926-9d30-ffdd77005bcf-config" (OuterVolumeSpecName: "config") pod "f74e08c8-33e6-4926-9d30-ffdd77005bcf" (UID: "f74e08c8-33e6-4926-9d30-ffdd77005bcf"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:34:02.685663 master-0 kubenswrapper[31830]: I0319 12:34:02.685604 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c712d63-eb7d-40d5-9f5f-05124cac728f-config" (OuterVolumeSpecName: "config") pod "6c712d63-eb7d-40d5-9f5f-05124cac728f" (UID: "6c712d63-eb7d-40d5-9f5f-05124cac728f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:34:02.716871 master-0 kubenswrapper[31830]: I0319 12:34:02.716711 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6ff8fd9d5c-sxht2" podStartSLOduration=4.492424527 podStartE2EDuration="25.716690828s" podCreationTimestamp="2026-03-19 12:33:37 +0000 UTC" firstStartedPulling="2026-03-19 12:33:39.760298129 +0000 UTC m=+1158.309258833" lastFinishedPulling="2026-03-19 12:34:00.98456443 +0000 UTC m=+1179.533525134" observedRunningTime="2026-03-19 12:34:02.694005844 +0000 UTC m=+1181.242966558" watchObservedRunningTime="2026-03-19 12:34:02.716690828 +0000 UTC m=+1181.265651532" Mar 19 12:34:02.736922 master-0 kubenswrapper[31830]: I0319 12:34:02.736108 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f74e08c8-33e6-4926-9d30-ffdd77005bcf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f74e08c8-33e6-4926-9d30-ffdd77005bcf" (UID: "f74e08c8-33e6-4926-9d30-ffdd77005bcf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:34:02.749308 master-0 kubenswrapper[31830]: I0319 12:34:02.748543 31830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c712d63-eb7d-40d5-9f5f-05124cac728f-config\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:02.749308 master-0 kubenswrapper[31830]: I0319 12:34:02.748616 31830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f74e08c8-33e6-4926-9d30-ffdd77005bcf-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:02.749308 master-0 kubenswrapper[31830]: I0319 12:34:02.748630 31830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f74e08c8-33e6-4926-9d30-ffdd77005bcf-config\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:02.749308 master-0 kubenswrapper[31830]: I0319 12:34:02.748642 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khxws\" (UniqueName: \"kubernetes.io/projected/6c712d63-eb7d-40d5-9f5f-05124cac728f-kube-api-access-khxws\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:02.749308 master-0 kubenswrapper[31830]: I0319 12:34:02.748657 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4xcs\" (UniqueName: \"kubernetes.io/projected/f74e08c8-33e6-4926-9d30-ffdd77005bcf-kube-api-access-s4xcs\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:03.034826 master-0 kubenswrapper[31830]: I0319 12:34:03.030896 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-8wx24"] Mar 19 12:34:03.136889 master-0 kubenswrapper[31830]: I0319 12:34:03.136845 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-8wx24"] Mar 19 12:34:03.244526 master-0 kubenswrapper[31830]: I0319 12:34:03.244471 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-vlql9"] Mar 19 12:34:03.287378 master-0 
kubenswrapper[31830]: I0319 12:34:03.287337 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-vlql9"] Mar 19 12:34:03.303715 master-0 kubenswrapper[31830]: I0319 12:34:03.303666 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 19 12:34:03.613762 master-0 kubenswrapper[31830]: I0319 12:34:03.612899 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"63ea9eeb-9288-44f6-82fb-70ccfb935857","Type":"ContainerStarted","Data":"8242a1433ed9d1948ae18deab2a6425f9838d4f838c73785fe36f0adf2303274"} Mar 19 12:34:03.701824 master-0 kubenswrapper[31830]: I0319 12:34:03.700759 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c712d63-eb7d-40d5-9f5f-05124cac728f" path="/var/lib/kubelet/pods/6c712d63-eb7d-40d5-9f5f-05124cac728f/volumes" Mar 19 12:34:03.701824 master-0 kubenswrapper[31830]: I0319 12:34:03.701407 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f74e08c8-33e6-4926-9d30-ffdd77005bcf" path="/var/lib/kubelet/pods/f74e08c8-33e6-4926-9d30-ffdd77005bcf/volumes" Mar 19 12:34:05.968743 master-0 kubenswrapper[31830]: W0319 12:34:05.968690 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17d9c5b7_67e7_4189_9917_722938b3a343.slice/crio-b3c19fe443f142cdf52b43385fedaba84523004310d6fd4eeb98132553413f05 WatchSource:0}: Error finding container b3c19fe443f142cdf52b43385fedaba84523004310d6fd4eeb98132553413f05: Status 404 returned error can't find the container with id b3c19fe443f142cdf52b43385fedaba84523004310d6fd4eeb98132553413f05 Mar 19 12:34:06.652720 master-0 kubenswrapper[31830]: I0319 12:34:06.652662 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"17d9c5b7-67e7-4189-9917-722938b3a343","Type":"ContainerStarted","Data":"b3c19fe443f142cdf52b43385fedaba84523004310d6fd4eeb98132553413f05"} Mar 19 12:34:09.165030 master-0 kubenswrapper[31830]: I0319 12:34:09.164985 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6ff8fd9d5c-sxht2" Mar 19 12:34:09.264825 master-0 kubenswrapper[31830]: I0319 12:34:09.264740 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76849d6659-2hlm9"] Mar 19 12:34:11.747908 master-0 kubenswrapper[31830]: I0319 12:34:11.744395 31830 generic.go:334] "Generic (PLEG): container finished" podID="b384031f-cffb-4dff-b0ff-df09432a1451" containerID="b9f4e9b65284eda9443fde79c62eb2cfe743805018fa334e868915b247ae1332" exitCode=0 Mar 19 12:34:11.747908 master-0 kubenswrapper[31830]: I0319 12:34:11.744506 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76849d6659-2hlm9" event={"ID":"b384031f-cffb-4dff-b0ff-df09432a1451","Type":"ContainerDied","Data":"b9f4e9b65284eda9443fde79c62eb2cfe743805018fa334e868915b247ae1332"} Mar 19 12:34:11.760622 master-0 kubenswrapper[31830]: I0319 12:34:11.760517 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f3cbc6ce-25bb-4672-bcf9-813c973d8bcf","Type":"ContainerStarted","Data":"70ab441f29cc01e3591ad0327817f056952c25fd10545a084841f3311d91629e"} Mar 19 12:34:11.760870 master-0 kubenswrapper[31830]: I0319 12:34:11.760764 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Mar 19 12:34:11.767302 master-0 kubenswrapper[31830]: I0319 
12:34:11.766662 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48","Type":"ContainerStarted","Data":"a0b0a84025387b446ad6809a2d17239adbb55cb0f42c2f836dc2d168d2028a7c"} Mar 19 12:34:11.771173 master-0 kubenswrapper[31830]: I0319 12:34:11.770612 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-kmq6z" Mar 19 12:34:11.780974 master-0 kubenswrapper[31830]: I0319 12:34:11.780900 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xpwvp" event={"ID":"3cc6301e-c3c2-4a62-af7b-122fbdcd5552","Type":"ContainerStarted","Data":"4aab3b8ec4a16d1f8f0a70e2779c09bc67c13864350815a5d77ecb825f1e2353"} Mar 19 12:34:11.917683 master-0 kubenswrapper[31830]: I0319 12:34:11.915983 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=15.642168735 podStartE2EDuration="24.915956511s" podCreationTimestamp="2026-03-19 12:33:47 +0000 UTC" firstStartedPulling="2026-03-19 12:34:01.182075855 +0000 UTC m=+1179.731036549" lastFinishedPulling="2026-03-19 12:34:10.455863611 +0000 UTC m=+1189.004824325" observedRunningTime="2026-03-19 12:34:11.904002451 +0000 UTC m=+1190.452963155" watchObservedRunningTime="2026-03-19 12:34:11.915956511 +0000 UTC m=+1190.464917215" Mar 19 12:34:12.047991 master-0 kubenswrapper[31830]: I0319 12:34:12.046749 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-kmq6z" podStartSLOduration=13.243245837 podStartE2EDuration="22.046723547s" podCreationTimestamp="2026-03-19 12:33:50 +0000 UTC" firstStartedPulling="2026-03-19 12:34:02.196301359 +0000 UTC m=+1180.745262063" lastFinishedPulling="2026-03-19 12:34:10.999779059 +0000 UTC m=+1189.548739773" observedRunningTime="2026-03-19 12:34:12.034775677 +0000 UTC m=+1190.583736391" watchObservedRunningTime="2026-03-19 12:34:12.046723547 +0000 UTC m=+1190.595684251" Mar 19 12:34:12.226863 master-0 kubenswrapper[31830]: I0319 12:34:12.226785 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76849d6659-2hlm9" Mar 19 12:34:12.324653 master-0 kubenswrapper[31830]: I0319 12:34:12.324092 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx4v9\" (UniqueName: \"kubernetes.io/projected/b384031f-cffb-4dff-b0ff-df09432a1451-kube-api-access-tx4v9\") pod \"b384031f-cffb-4dff-b0ff-df09432a1451\" (UID: \"b384031f-cffb-4dff-b0ff-df09432a1451\") " Mar 19 12:34:12.324653 master-0 kubenswrapper[31830]: I0319 12:34:12.324180 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b384031f-cffb-4dff-b0ff-df09432a1451-dns-svc\") pod \"b384031f-cffb-4dff-b0ff-df09432a1451\" (UID: \"b384031f-cffb-4dff-b0ff-df09432a1451\") " Mar 19 12:34:12.324653 master-0 kubenswrapper[31830]: I0319 12:34:12.324248 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b384031f-cffb-4dff-b0ff-df09432a1451-config\") pod \"b384031f-cffb-4dff-b0ff-df09432a1451\" (UID: \"b384031f-cffb-4dff-b0ff-df09432a1451\") " Mar 19 12:34:12.333216 master-0 kubenswrapper[31830]: I0319 12:34:12.333139 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b384031f-cffb-4dff-b0ff-df09432a1451-kube-api-access-tx4v9" (OuterVolumeSpecName: "kube-api-access-tx4v9") pod "b384031f-cffb-4dff-b0ff-df09432a1451" (UID: "b384031f-cffb-4dff-b0ff-df09432a1451"). InnerVolumeSpecName "kube-api-access-tx4v9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:34:12.364337 master-0 kubenswrapper[31830]: I0319 12:34:12.364199 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b384031f-cffb-4dff-b0ff-df09432a1451-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b384031f-cffb-4dff-b0ff-df09432a1451" (UID: "b384031f-cffb-4dff-b0ff-df09432a1451"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:34:12.426638 master-0 kubenswrapper[31830]: I0319 12:34:12.426587 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tx4v9\" (UniqueName: \"kubernetes.io/projected/b384031f-cffb-4dff-b0ff-df09432a1451-kube-api-access-tx4v9\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:12.426638 master-0 kubenswrapper[31830]: I0319 12:34:12.426631 31830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b384031f-cffb-4dff-b0ff-df09432a1451-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:12.549638 master-0 kubenswrapper[31830]: I0319 12:34:12.549582 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b384031f-cffb-4dff-b0ff-df09432a1451-config" (OuterVolumeSpecName: "config") pod "b384031f-cffb-4dff-b0ff-df09432a1451" (UID: "b384031f-cffb-4dff-b0ff-df09432a1451"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:34:12.629718 master-0 kubenswrapper[31830]: I0319 12:34:12.629656 31830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b384031f-cffb-4dff-b0ff-df09432a1451-config\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:12.814482 master-0 kubenswrapper[31830]: I0319 12:34:12.814426 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"63ea9eeb-9288-44f6-82fb-70ccfb935857","Type":"ContainerStarted","Data":"04fef8715fdf07976f7948a42de7d29cf8231afb12681361ba70a0dc0277de12"} Mar 19 12:34:12.817076 master-0 kubenswrapper[31830]: I0319 12:34:12.817049 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76849d6659-2hlm9" event={"ID":"b384031f-cffb-4dff-b0ff-df09432a1451","Type":"ContainerDied","Data":"bb04f3f8401a33003845e19cd5c475d67561af033f56546bf4ba4bf2e9847af3"} Mar 19 12:34:12.817158 master-0 kubenswrapper[31830]: I0319 12:34:12.817091 31830 scope.go:117] "RemoveContainer" containerID="b9f4e9b65284eda9443fde79c62eb2cfe743805018fa334e868915b247ae1332" Mar 19 12:34:12.817243 master-0 kubenswrapper[31830]: I0319 12:34:12.817218 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76849d6659-2hlm9" Mar 19 12:34:12.821340 master-0 kubenswrapper[31830]: I0319 12:34:12.821154 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"17d9c5b7-67e7-4189-9917-722938b3a343","Type":"ContainerStarted","Data":"ff3f899b2582043adceb624e4ea934f5fd4e8e43cadb13c4f9251d28cc872d7c"} Mar 19 12:34:12.823436 master-0 kubenswrapper[31830]: I0319 12:34:12.823393 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e496a21c-f671-402f-a15c-911b063428c5","Type":"ContainerStarted","Data":"9bf824edea56fb1c32c625a1fbd691683bcfa672f40ef0b422c5bb99fb1aa218"} Mar 19 12:34:12.825394 master-0 kubenswrapper[31830]: I0319 12:34:12.825362 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kmq6z" event={"ID":"0d516497-0523-41c4-a5cc-75fe94977ac3","Type":"ContainerStarted","Data":"36a5ae451a1ffeca1b9d3428fe43a98aa1a3dfa16b54ed358dea8a17340613e5"} Mar 19 12:34:12.828753 master-0 kubenswrapper[31830]: I0319 12:34:12.827890 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xpwvp" event={"ID":"3cc6301e-c3c2-4a62-af7b-122fbdcd5552","Type":"ContainerDied","Data":"4aab3b8ec4a16d1f8f0a70e2779c09bc67c13864350815a5d77ecb825f1e2353"} Mar 19 12:34:12.828753 master-0 kubenswrapper[31830]: I0319 12:34:12.827940 31830 generic.go:334] "Generic (PLEG): container finished" podID="3cc6301e-c3c2-4a62-af7b-122fbdcd5552" containerID="4aab3b8ec4a16d1f8f0a70e2779c09bc67c13864350815a5d77ecb825f1e2353" exitCode=0 Mar 19 12:34:12.831516 master-0 kubenswrapper[31830]: I0319 12:34:12.831476 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ae148a74-f9ec-4ee8-be58-c14c466f4b9f","Type":"ContainerStarted","Data":"05da40f63092a5e482c23ff14d720a9f24137aa82055cf836dd53348543472a1"} Mar 19 12:34:12.986914 master-0 kubenswrapper[31830]: I0319 12:34:12.985922 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76849d6659-2hlm9"] Mar 19 12:34:13.005838 master-0 kubenswrapper[31830]: I0319 12:34:13.005752 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-76849d6659-2hlm9"] Mar 19 12:34:13.697538 master-0 kubenswrapper[31830]: I0319 12:34:13.695772 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b384031f-cffb-4dff-b0ff-df09432a1451" path="/var/lib/kubelet/pods/b384031f-cffb-4dff-b0ff-df09432a1451/volumes" Mar 19 12:34:13.843582 master-0 kubenswrapper[31830]: I0319 12:34:13.843520 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"aee036d1-9a03-42ac-9beb-ef7ecc09c98d","Type":"ContainerStarted","Data":"6aa82fa73a97635de0a402e10eaa6df5a6d299e00d439bcce83aa933e91b0ce1"} Mar 19 12:34:13.848908 master-0 kubenswrapper[31830]: I0319 12:34:13.847837 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xpwvp" event={"ID":"3cc6301e-c3c2-4a62-af7b-122fbdcd5552","Type":"ContainerStarted","Data":"a74d54b455b76f92b6c022e9dcdcf86c36dff3b5503a45c14bf6dfe6d946b7d1"} Mar 19 12:34:13.848908 master-0 kubenswrapper[31830]: I0319 12:34:13.847884 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xpwvp" event={"ID":"3cc6301e-c3c2-4a62-af7b-122fbdcd5552","Type":"ContainerStarted","Data":"94aad0743fa330db7b7d28e49d6b077f40f0751dc754c0b939b919fd52fd2d09"} Mar 19 12:34:13.848908 master-0 kubenswrapper[31830]: I0319 12:34:13.848232 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-xpwvp" Mar 19 12:34:13.848908 master-0 kubenswrapper[31830]: I0319 12:34:13.848373 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-xpwvp" Mar 19 12:34:13.905821 master-0 kubenswrapper[31830]: I0319 12:34:13.905343 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-xpwvp" podStartSLOduration=15.185877555 podStartE2EDuration="23.905319767s" podCreationTimestamp="2026-03-19 12:33:50 +0000 UTC" firstStartedPulling="2026-03-19 12:34:01.969925089 +0000 UTC m=+1180.518885803" lastFinishedPulling="2026-03-19 12:34:10.689367301 +0000 UTC m=+1189.238328015" observedRunningTime="2026-03-19 12:34:13.896808543 +0000 UTC m=+1192.445769257" watchObservedRunningTime="2026-03-19 12:34:13.905319767 +0000 UTC m=+1192.454280471" Mar 19 12:34:15.871773 master-0 kubenswrapper[31830]: I0319 12:34:15.871568 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"63ea9eeb-9288-44f6-82fb-70ccfb935857","Type":"ContainerStarted","Data":"ba4324c22ade952aaff3c95b921813264381a82e84f36b3cd9033f5e1447a59a"} Mar 19 12:34:15.875589 master-0 kubenswrapper[31830]: I0319 12:34:15.875421 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"17d9c5b7-67e7-4189-9917-722938b3a343","Type":"ContainerStarted","Data":"219b9783633c32d96d948fcc0b5b35fc51e7ea17fde4eb1b53adc5003344678b"} Mar 19 12:34:15.898054 master-0 kubenswrapper[31830]: I0319 12:34:15.897962 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=9.918106675 podStartE2EDuration="22.897940354s" podCreationTimestamp="2026-03-19 12:33:53 +0000 UTC" firstStartedPulling="2026-03-19 12:34:02.584656113 +0000 UTC m=+1181.133616827" lastFinishedPulling="2026-03-19 12:34:15.564489802 +0000 UTC m=+1194.113450506" observedRunningTime="2026-03-19 12:34:15.892816565 +0000 UTC m=+1194.441777269" watchObservedRunningTime="2026-03-19 12:34:15.897940354 +0000 UTC m=+1194.446901058" 
Mar 19 12:34:15.929221 master-0 kubenswrapper[31830]: I0319 12:34:15.929146 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=15.518693548 podStartE2EDuration="24.929123881s" podCreationTimestamp="2026-03-19 12:33:51 +0000 UTC" firstStartedPulling="2026-03-19 12:34:06.172201653 +0000 UTC m=+1184.721162357" lastFinishedPulling="2026-03-19 12:34:15.582631976 +0000 UTC m=+1194.131592690" observedRunningTime="2026-03-19 12:34:15.91812777 +0000 UTC m=+1194.467088484" watchObservedRunningTime="2026-03-19 12:34:15.929123881 +0000 UTC m=+1194.478084585"
Mar 19 12:34:16.483999 master-0 kubenswrapper[31830]: I0319 12:34:16.483932 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Mar 19 12:34:16.525438 master-0 kubenswrapper[31830]: I0319 12:34:16.525379 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Mar 19 12:34:16.886035 master-0 kubenswrapper[31830]: I0319 12:34:16.885911 31830 generic.go:334] "Generic (PLEG): container finished" podID="c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48" containerID="a0b0a84025387b446ad6809a2d17239adbb55cb0f42c2f836dc2d168d2028a7c" exitCode=0
Mar 19 12:34:16.886035 master-0 kubenswrapper[31830]: I0319 12:34:16.886009 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48","Type":"ContainerDied","Data":"a0b0a84025387b446ad6809a2d17239adbb55cb0f42c2f836dc2d168d2028a7c"}
Mar 19 12:34:16.887721 master-0 kubenswrapper[31830]: I0319 12:34:16.887686 31830 generic.go:334] "Generic (PLEG): container finished" podID="ae148a74-f9ec-4ee8-be58-c14c466f4b9f" containerID="05da40f63092a5e482c23ff14d720a9f24137aa82055cf836dd53348543472a1" exitCode=0
Mar 19 12:34:16.887907 master-0 kubenswrapper[31830]: I0319 12:34:16.887785 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ae148a74-f9ec-4ee8-be58-c14c466f4b9f","Type":"ContainerDied","Data":"05da40f63092a5e482c23ff14d720a9f24137aa82055cf836dd53348543472a1"}
Mar 19 12:34:16.888507 master-0 kubenswrapper[31830]: I0319 12:34:16.888238 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Mar 19 12:34:16.946271 master-0 kubenswrapper[31830]: I0319 12:34:16.946211 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Mar 19 12:34:17.200036 master-0 kubenswrapper[31830]: I0319 12:34:17.199791 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Mar 19 12:34:17.344349 master-0 kubenswrapper[31830]: I0319 12:34:17.344306 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79d6ccc4b7-kj747"]
Mar 19 12:34:17.344998 master-0 kubenswrapper[31830]: E0319 12:34:17.344982 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c712d63-eb7d-40d5-9f5f-05124cac728f" containerName="init"
Mar 19 12:34:17.345106 master-0 kubenswrapper[31830]: I0319 12:34:17.345093 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c712d63-eb7d-40d5-9f5f-05124cac728f" containerName="init"
Mar 19 12:34:17.345192 master-0 kubenswrapper[31830]: E0319 12:34:17.345182 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b384031f-cffb-4dff-b0ff-df09432a1451" containerName="init"
Mar 19 12:34:17.345263 master-0 kubenswrapper[31830]: I0319 12:34:17.345254 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b384031f-cffb-4dff-b0ff-df09432a1451" containerName="init"
Mar 19 12:34:17.345358 master-0 kubenswrapper[31830]: E0319 12:34:17.345348 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f74e08c8-33e6-4926-9d30-ffdd77005bcf" containerName="init"
Mar 19 12:34:17.345416 master-0 kubenswrapper[31830]: I0319 12:34:17.345407 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f74e08c8-33e6-4926-9d30-ffdd77005bcf" containerName="init"
Mar 19 12:34:17.345664 master-0 kubenswrapper[31830]: I0319 12:34:17.345652 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c712d63-eb7d-40d5-9f5f-05124cac728f" containerName="init"
Mar 19 12:34:17.345732 master-0 kubenswrapper[31830]: I0319 12:34:17.345722 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f74e08c8-33e6-4926-9d30-ffdd77005bcf" containerName="init"
Mar 19 12:34:17.345830 master-0 kubenswrapper[31830]: I0319 12:34:17.345820 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b384031f-cffb-4dff-b0ff-df09432a1451" containerName="init"
Mar 19 12:34:17.346969 master-0 kubenswrapper[31830]: I0319 12:34:17.346951 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79d6ccc4b7-kj747"
Mar 19 12:34:17.353941 master-0 kubenswrapper[31830]: I0319 12:34:17.350851 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Mar 19 12:34:17.355386 master-0 kubenswrapper[31830]: I0319 12:34:17.355310 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cftnk\" (UniqueName: \"kubernetes.io/projected/9f5e003c-2a5f-4796-8c10-5f5492005f76-kube-api-access-cftnk\") pod \"dnsmasq-dns-79d6ccc4b7-kj747\" (UID: \"9f5e003c-2a5f-4796-8c10-5f5492005f76\") " pod="openstack/dnsmasq-dns-79d6ccc4b7-kj747"
Mar 19 12:34:17.355386 master-0 kubenswrapper[31830]: I0319 12:34:17.355379 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f5e003c-2a5f-4796-8c10-5f5492005f76-ovsdbserver-nb\") pod \"dnsmasq-dns-79d6ccc4b7-kj747\" (UID: \"9f5e003c-2a5f-4796-8c10-5f5492005f76\") " pod="openstack/dnsmasq-dns-79d6ccc4b7-kj747"
Mar 19 12:34:17.355657 master-0 kubenswrapper[31830]: I0319 12:34:17.355628 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f5e003c-2a5f-4796-8c10-5f5492005f76-config\") pod \"dnsmasq-dns-79d6ccc4b7-kj747\" (UID: \"9f5e003c-2a5f-4796-8c10-5f5492005f76\") " pod="openstack/dnsmasq-dns-79d6ccc4b7-kj747"
Mar 19 12:34:17.355722 master-0 kubenswrapper[31830]: I0319 12:34:17.355699 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f5e003c-2a5f-4796-8c10-5f5492005f76-dns-svc\") pod \"dnsmasq-dns-79d6ccc4b7-kj747\" (UID: \"9f5e003c-2a5f-4796-8c10-5f5492005f76\") " pod="openstack/dnsmasq-dns-79d6ccc4b7-kj747"
Mar 19 12:34:17.378115 master-0 kubenswrapper[31830]: I0319 12:34:17.373569 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79d6ccc4b7-kj747"]
Mar 19 12:34:17.409297 master-0 kubenswrapper[31830]: I0319 12:34:17.409249 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-5sd9s"]
Mar 19 12:34:17.420768 master-0 kubenswrapper[31830]: I0319 12:34:17.420636 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-5sd9s"
Mar 19 12:34:17.425195 master-0 kubenswrapper[31830]: I0319 12:34:17.423730 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Mar 19 12:34:17.439381 master-0 kubenswrapper[31830]: I0319 12:34:17.439325 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-5sd9s"]
Mar 19 12:34:17.457048 master-0 kubenswrapper[31830]: I0319 12:34:17.456943 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f5e003c-2a5f-4796-8c10-5f5492005f76-config\") pod \"dnsmasq-dns-79d6ccc4b7-kj747\" (UID: \"9f5e003c-2a5f-4796-8c10-5f5492005f76\") " pod="openstack/dnsmasq-dns-79d6ccc4b7-kj747"
Mar 19 12:34:17.457316 master-0 kubenswrapper[31830]: I0319 12:34:17.457297 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f5e003c-2a5f-4796-8c10-5f5492005f76-dns-svc\") pod \"dnsmasq-dns-79d6ccc4b7-kj747\" (UID: \"9f5e003c-2a5f-4796-8c10-5f5492005f76\") " pod="openstack/dnsmasq-dns-79d6ccc4b7-kj747"
Mar 19 12:34:17.457497 master-0 kubenswrapper[31830]: I0319 12:34:17.457483 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cftnk\" (UniqueName: \"kubernetes.io/projected/9f5e003c-2a5f-4796-8c10-5f5492005f76-kube-api-access-cftnk\") pod \"dnsmasq-dns-79d6ccc4b7-kj747\" (UID: \"9f5e003c-2a5f-4796-8c10-5f5492005f76\") " pod="openstack/dnsmasq-dns-79d6ccc4b7-kj747"
Mar 19 12:34:17.459173 master-0 kubenswrapper[31830]: I0319 12:34:17.458481 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f5e003c-2a5f-4796-8c10-5f5492005f76-dns-svc\") pod \"dnsmasq-dns-79d6ccc4b7-kj747\" (UID: \"9f5e003c-2a5f-4796-8c10-5f5492005f76\") " pod="openstack/dnsmasq-dns-79d6ccc4b7-kj747"
Mar 19 12:34:17.459173 master-0 kubenswrapper[31830]: I0319 12:34:17.458743 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f5e003c-2a5f-4796-8c10-5f5492005f76-ovsdbserver-nb\") pod \"dnsmasq-dns-79d6ccc4b7-kj747\" (UID: \"9f5e003c-2a5f-4796-8c10-5f5492005f76\") " pod="openstack/dnsmasq-dns-79d6ccc4b7-kj747"
Mar 19 12:34:17.459975 master-0 kubenswrapper[31830]: I0319 12:34:17.459899 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f5e003c-2a5f-4796-8c10-5f5492005f76-config\") pod \"dnsmasq-dns-79d6ccc4b7-kj747\" (UID: \"9f5e003c-2a5f-4796-8c10-5f5492005f76\") " pod="openstack/dnsmasq-dns-79d6ccc4b7-kj747"
Mar 19 12:34:17.460109 master-0 kubenswrapper[31830]: I0319 12:34:17.460080 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f5e003c-2a5f-4796-8c10-5f5492005f76-ovsdbserver-nb\") pod \"dnsmasq-dns-79d6ccc4b7-kj747\" (UID: \"9f5e003c-2a5f-4796-8c10-5f5492005f76\") " pod="openstack/dnsmasq-dns-79d6ccc4b7-kj747"
Mar 19 12:34:17.481913 master-0 kubenswrapper[31830]: I0319 12:34:17.481867 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cftnk\" (UniqueName: \"kubernetes.io/projected/9f5e003c-2a5f-4796-8c10-5f5492005f76-kube-api-access-cftnk\") pod \"dnsmasq-dns-79d6ccc4b7-kj747\" (UID: \"9f5e003c-2a5f-4796-8c10-5f5492005f76\") " pod="openstack/dnsmasq-dns-79d6ccc4b7-kj747"
Mar 19 12:34:17.569492 master-0 kubenswrapper[31830]: I0319 12:34:17.569424 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ddcv\" (UniqueName: \"kubernetes.io/projected/02abb8d5-6e39-493e-bc9c-7bcd2f99b423-kube-api-access-2ddcv\") pod \"ovn-controller-metrics-5sd9s\" (UID: \"02abb8d5-6e39-493e-bc9c-7bcd2f99b423\") " pod="openstack/ovn-controller-metrics-5sd9s"
Mar 19 12:34:17.569718 master-0 kubenswrapper[31830]: I0319 12:34:17.569517 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/02abb8d5-6e39-493e-bc9c-7bcd2f99b423-ovs-rundir\") pod \"ovn-controller-metrics-5sd9s\" (UID: \"02abb8d5-6e39-493e-bc9c-7bcd2f99b423\") " pod="openstack/ovn-controller-metrics-5sd9s"
Mar 19 12:34:17.569718 master-0 kubenswrapper[31830]: I0319 12:34:17.569551 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02abb8d5-6e39-493e-bc9c-7bcd2f99b423-config\") pod \"ovn-controller-metrics-5sd9s\" (UID: \"02abb8d5-6e39-493e-bc9c-7bcd2f99b423\") " pod="openstack/ovn-controller-metrics-5sd9s"
Mar 19 12:34:17.569718 master-0 kubenswrapper[31830]: I0319 12:34:17.569583 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/02abb8d5-6e39-493e-bc9c-7bcd2f99b423-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5sd9s\" (UID: \"02abb8d5-6e39-493e-bc9c-7bcd2f99b423\") " pod="openstack/ovn-controller-metrics-5sd9s"
Mar 19 12:34:17.569718 master-0 kubenswrapper[31830]: I0319 12:34:17.569610 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/02abb8d5-6e39-493e-bc9c-7bcd2f99b423-ovn-rundir\") pod \"ovn-controller-metrics-5sd9s\" (UID: \"02abb8d5-6e39-493e-bc9c-7bcd2f99b423\") " pod="openstack/ovn-controller-metrics-5sd9s"
Mar 19 12:34:17.569718 master-0 kubenswrapper[31830]: I0319 12:34:17.569660 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02abb8d5-6e39-493e-bc9c-7bcd2f99b423-combined-ca-bundle\") pod \"ovn-controller-metrics-5sd9s\" (UID: \"02abb8d5-6e39-493e-bc9c-7bcd2f99b423\") " pod="openstack/ovn-controller-metrics-5sd9s"
Mar 19 12:34:17.665842 master-0 kubenswrapper[31830]: I0319 12:34:17.664685 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79d6ccc4b7-kj747"]
Mar 19 12:34:17.666080 master-0 kubenswrapper[31830]: I0319 12:34:17.665977 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79d6ccc4b7-kj747"
Mar 19 12:34:17.673724 master-0 kubenswrapper[31830]: I0319 12:34:17.672072 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/02abb8d5-6e39-493e-bc9c-7bcd2f99b423-ovn-rundir\") pod \"ovn-controller-metrics-5sd9s\" (UID: \"02abb8d5-6e39-493e-bc9c-7bcd2f99b423\") " pod="openstack/ovn-controller-metrics-5sd9s"
Mar 19 12:34:17.673724 master-0 kubenswrapper[31830]: I0319 12:34:17.672247 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02abb8d5-6e39-493e-bc9c-7bcd2f99b423-combined-ca-bundle\") pod \"ovn-controller-metrics-5sd9s\" (UID: \"02abb8d5-6e39-493e-bc9c-7bcd2f99b423\") " pod="openstack/ovn-controller-metrics-5sd9s"
Mar 19 12:34:17.673724 master-0 kubenswrapper[31830]: I0319 12:34:17.672344 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ddcv\" (UniqueName: \"kubernetes.io/projected/02abb8d5-6e39-493e-bc9c-7bcd2f99b423-kube-api-access-2ddcv\") pod \"ovn-controller-metrics-5sd9s\" (UID: \"02abb8d5-6e39-493e-bc9c-7bcd2f99b423\") " pod="openstack/ovn-controller-metrics-5sd9s"
Mar 19 12:34:17.673724 master-0 kubenswrapper[31830]: I0319 12:34:17.672456 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/02abb8d5-6e39-493e-bc9c-7bcd2f99b423-ovs-rundir\") pod \"ovn-controller-metrics-5sd9s\" (UID: \"02abb8d5-6e39-493e-bc9c-7bcd2f99b423\") " pod="openstack/ovn-controller-metrics-5sd9s"
Mar 19 12:34:17.673724 master-0 kubenswrapper[31830]: I0319 12:34:17.672525 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02abb8d5-6e39-493e-bc9c-7bcd2f99b423-config\") pod \"ovn-controller-metrics-5sd9s\" (UID: \"02abb8d5-6e39-493e-bc9c-7bcd2f99b423\") " pod="openstack/ovn-controller-metrics-5sd9s"
Mar 19 12:34:17.673724 master-0 kubenswrapper[31830]: I0319 12:34:17.672579 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/02abb8d5-6e39-493e-bc9c-7bcd2f99b423-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5sd9s\" (UID: \"02abb8d5-6e39-493e-bc9c-7bcd2f99b423\") " pod="openstack/ovn-controller-metrics-5sd9s"
Mar 19 12:34:17.675291 master-0 kubenswrapper[31830]: I0319 12:34:17.675258 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/02abb8d5-6e39-493e-bc9c-7bcd2f99b423-ovn-rundir\") pod \"ovn-controller-metrics-5sd9s\" (UID: \"02abb8d5-6e39-493e-bc9c-7bcd2f99b423\") " pod="openstack/ovn-controller-metrics-5sd9s"
Mar 19 12:34:17.675558 master-0 kubenswrapper[31830]: I0319 12:34:17.675537 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/02abb8d5-6e39-493e-bc9c-7bcd2f99b423-ovs-rundir\") pod \"ovn-controller-metrics-5sd9s\" (UID: \"02abb8d5-6e39-493e-bc9c-7bcd2f99b423\") " pod="openstack/ovn-controller-metrics-5sd9s"
Mar 19 12:34:17.676484 master-0 kubenswrapper[31830]: I0319 12:34:17.676462 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02abb8d5-6e39-493e-bc9c-7bcd2f99b423-config\") pod \"ovn-controller-metrics-5sd9s\" (UID: \"02abb8d5-6e39-493e-bc9c-7bcd2f99b423\") " pod="openstack/ovn-controller-metrics-5sd9s"
Mar 19 12:34:17.684900 master-0 kubenswrapper[31830]: I0319 12:34:17.684814 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/02abb8d5-6e39-493e-bc9c-7bcd2f99b423-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5sd9s\" (UID: \"02abb8d5-6e39-493e-bc9c-7bcd2f99b423\") " pod="openstack/ovn-controller-metrics-5sd9s"
Mar 19 12:34:17.704076 master-0 kubenswrapper[31830]: I0319 12:34:17.704030 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02abb8d5-6e39-493e-bc9c-7bcd2f99b423-combined-ca-bundle\") pod \"ovn-controller-metrics-5sd9s\" (UID: \"02abb8d5-6e39-493e-bc9c-7bcd2f99b423\") " pod="openstack/ovn-controller-metrics-5sd9s"
Mar 19 12:34:17.708206 master-0 kubenswrapper[31830]: I0319 12:34:17.708098 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ddcv\" (UniqueName: \"kubernetes.io/projected/02abb8d5-6e39-493e-bc9c-7bcd2f99b423-kube-api-access-2ddcv\") pod \"ovn-controller-metrics-5sd9s\" (UID: \"02abb8d5-6e39-493e-bc9c-7bcd2f99b423\") " pod="openstack/ovn-controller-metrics-5sd9s"
Mar 19 12:34:17.737215 master-0 kubenswrapper[31830]: I0319 12:34:17.737154 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76f498f559-g9gf7"]
Mar 19 12:34:17.739730 master-0 kubenswrapper[31830]: I0319 12:34:17.739686 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76f498f559-g9gf7"
Mar 19 12:34:17.746958 master-0 kubenswrapper[31830]: I0319 12:34:17.744366 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Mar 19 12:34:17.752718 master-0 kubenswrapper[31830]: I0319 12:34:17.752499 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76f498f559-g9gf7"]
Mar 19 12:34:17.757157 master-0 kubenswrapper[31830]: I0319 12:34:17.757093 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-5sd9s"
Mar 19 12:34:17.789738 master-0 kubenswrapper[31830]: I0319 12:34:17.789668 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ddafc683-0ab0-4152-af3b-5fd025697432-ovsdbserver-nb\") pod \"dnsmasq-dns-76f498f559-g9gf7\" (UID: \"ddafc683-0ab0-4152-af3b-5fd025697432\") " pod="openstack/dnsmasq-dns-76f498f559-g9gf7"
Mar 19 12:34:17.789975 master-0 kubenswrapper[31830]: I0319 12:34:17.789837 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddafc683-0ab0-4152-af3b-5fd025697432-config\") pod \"dnsmasq-dns-76f498f559-g9gf7\" (UID: \"ddafc683-0ab0-4152-af3b-5fd025697432\") " pod="openstack/dnsmasq-dns-76f498f559-g9gf7"
Mar 19 12:34:17.789975 master-0 kubenswrapper[31830]: I0319 12:34:17.789897 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ddafc683-0ab0-4152-af3b-5fd025697432-dns-svc\") pod \"dnsmasq-dns-76f498f559-g9gf7\" (UID: \"ddafc683-0ab0-4152-af3b-5fd025697432\") " pod="openstack/dnsmasq-dns-76f498f559-g9gf7"
Mar 19 12:34:17.790077 master-0 kubenswrapper[31830]: I0319 12:34:17.789993 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ddafc683-0ab0-4152-af3b-5fd025697432-ovsdbserver-sb\") pod \"dnsmasq-dns-76f498f559-g9gf7\" (UID: \"ddafc683-0ab0-4152-af3b-5fd025697432\") " pod="openstack/dnsmasq-dns-76f498f559-g9gf7"
Mar 19 12:34:17.790077 master-0 kubenswrapper[31830]: I0319 12:34:17.790023 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldhpj\" (UniqueName: \"kubernetes.io/projected/ddafc683-0ab0-4152-af3b-5fd025697432-kube-api-access-ldhpj\") pod \"dnsmasq-dns-76f498f559-g9gf7\" (UID: \"ddafc683-0ab0-4152-af3b-5fd025697432\") " pod="openstack/dnsmasq-dns-76f498f559-g9gf7"
Mar 19 12:34:17.895992 master-0 kubenswrapper[31830]: I0319 12:34:17.895157 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ddafc683-0ab0-4152-af3b-5fd025697432-ovsdbserver-nb\") pod \"dnsmasq-dns-76f498f559-g9gf7\" (UID: \"ddafc683-0ab0-4152-af3b-5fd025697432\") " pod="openstack/dnsmasq-dns-76f498f559-g9gf7"
Mar 19 12:34:17.895992 master-0 kubenswrapper[31830]: I0319 12:34:17.895728 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ddafc683-0ab0-4152-af3b-5fd025697432-ovsdbserver-nb\") pod \"dnsmasq-dns-76f498f559-g9gf7\" (UID: \"ddafc683-0ab0-4152-af3b-5fd025697432\") " pod="openstack/dnsmasq-dns-76f498f559-g9gf7"
Mar 19 12:34:17.895992 master-0 kubenswrapper[31830]: I0319 12:34:17.895861 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddafc683-0ab0-4152-af3b-5fd025697432-config\") pod \"dnsmasq-dns-76f498f559-g9gf7\" (UID: \"ddafc683-0ab0-4152-af3b-5fd025697432\") " pod="openstack/dnsmasq-dns-76f498f559-g9gf7"
Mar 19 12:34:17.895992 master-0 kubenswrapper[31830]: I0319 12:34:17.895914 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ddafc683-0ab0-4152-af3b-5fd025697432-dns-svc\") pod \"dnsmasq-dns-76f498f559-g9gf7\" (UID: \"ddafc683-0ab0-4152-af3b-5fd025697432\") " pod="openstack/dnsmasq-dns-76f498f559-g9gf7"
Mar 19 12:34:17.895992 master-0 kubenswrapper[31830]: I0319 12:34:17.895986 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ddafc683-0ab0-4152-af3b-5fd025697432-ovsdbserver-sb\") pod \"dnsmasq-dns-76f498f559-g9gf7\" (UID: \"ddafc683-0ab0-4152-af3b-5fd025697432\") " pod="openstack/dnsmasq-dns-76f498f559-g9gf7"
Mar 19 12:34:17.898082 master-0 kubenswrapper[31830]: I0319 12:34:17.896008 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldhpj\" (UniqueName: \"kubernetes.io/projected/ddafc683-0ab0-4152-af3b-5fd025697432-kube-api-access-ldhpj\") pod \"dnsmasq-dns-76f498f559-g9gf7\" (UID: \"ddafc683-0ab0-4152-af3b-5fd025697432\") " pod="openstack/dnsmasq-dns-76f498f559-g9gf7"
Mar 19 12:34:17.898082 master-0 kubenswrapper[31830]: I0319 12:34:17.896955 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddafc683-0ab0-4152-af3b-5fd025697432-config\") pod \"dnsmasq-dns-76f498f559-g9gf7\" (UID: \"ddafc683-0ab0-4152-af3b-5fd025697432\") " pod="openstack/dnsmasq-dns-76f498f559-g9gf7"
Mar 19 12:34:17.898082 master-0 kubenswrapper[31830]: I0319 12:34:17.897650 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ddafc683-0ab0-4152-af3b-5fd025697432-dns-svc\") pod \"dnsmasq-dns-76f498f559-g9gf7\" (UID: \"ddafc683-0ab0-4152-af3b-5fd025697432\") " pod="openstack/dnsmasq-dns-76f498f559-g9gf7"
Mar 19 12:34:17.898390 master-0 kubenswrapper[31830]: I0319 12:34:17.898371 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ddafc683-0ab0-4152-af3b-5fd025697432-ovsdbserver-sb\") pod \"dnsmasq-dns-76f498f559-g9gf7\" (UID: \"ddafc683-0ab0-4152-af3b-5fd025697432\") " pod="openstack/dnsmasq-dns-76f498f559-g9gf7"
Mar 19 12:34:17.914412 master-0 kubenswrapper[31830]: I0319 12:34:17.914333 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ae148a74-f9ec-4ee8-be58-c14c466f4b9f","Type":"ContainerStarted","Data":"a79335a703001863ecff8b85ffba72b020372785270786ca5730925d891385dd"}
Mar 19 12:34:17.922219 master-0 kubenswrapper[31830]: I0319 12:34:17.921034 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldhpj\" (UniqueName: \"kubernetes.io/projected/ddafc683-0ab0-4152-af3b-5fd025697432-kube-api-access-ldhpj\") pod \"dnsmasq-dns-76f498f559-g9gf7\" (UID: \"ddafc683-0ab0-4152-af3b-5fd025697432\") " pod="openstack/dnsmasq-dns-76f498f559-g9gf7"
Mar 19 12:34:17.923284 master-0 kubenswrapper[31830]: I0319 12:34:17.923229 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48","Type":"ContainerStarted","Data":"8aef1d040231ea4af92d1fa38e97fbdbdf0e2730a2dbc4c7625e83c00bc22621"}
Mar 19 12:34:17.950025 master-0 kubenswrapper[31830]: I0319 12:34:17.949770 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=30.202288285 podStartE2EDuration="38.949751176s" podCreationTimestamp="2026-03-19 12:33:39 +0000 UTC" firstStartedPulling="2026-03-19 12:34:02.219108207 +0000 UTC m=+1180.768068911" lastFinishedPulling="2026-03-19 12:34:10.966571098 +0000 UTC m=+1189.515531802" observedRunningTime="2026-03-19 12:34:17.947467955 +0000 UTC m=+1196.496428659" watchObservedRunningTime="2026-03-19 12:34:17.949751176 +0000 UTC m=+1196.498711870"
Mar 19 12:34:18.003561 master-0 kubenswrapper[31830]: I0319 12:34:18.001921 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=27.713374241 podStartE2EDuration="37.001900163s" podCreationTimestamp="2026-03-19 12:33:41 +0000 UTC" firstStartedPulling="2026-03-19 12:34:01.714501537 +0000 UTC m=+1180.263462241" lastFinishedPulling="2026-03-19 12:34:11.003027459 +0000 UTC m=+1189.551988163" observedRunningTime="2026-03-19 12:34:17.983188293 +0000 UTC m=+1196.532149017" watchObservedRunningTime="2026-03-19 12:34:18.001900163 +0000 UTC m=+1196.550860867"
Mar 19 12:34:18.061990 master-0 kubenswrapper[31830]: I0319 12:34:18.061943 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Mar 19 12:34:18.194270 master-0 kubenswrapper[31830]: I0319 12:34:18.192233 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76f498f559-g9gf7"
Mar 19 12:34:18.204457 master-0 kubenswrapper[31830]: I0319 12:34:18.204413 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Mar 19 12:34:18.252677 master-0 kubenswrapper[31830]: I0319 12:34:18.251959 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79d6ccc4b7-kj747"]
Mar 19 12:34:18.272206 master-0 kubenswrapper[31830]: I0319 12:34:18.258066 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Mar 19 12:34:18.272206 master-0 kubenswrapper[31830]: W0319 12:34:18.271311 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f5e003c_2a5f_4796_8c10_5f5492005f76.slice/crio-5b02705ba80988adfbf3a3df427464b076b9499592564a3f1f3ef37e2f2b04b8 WatchSource:0}: Error finding container 5b02705ba80988adfbf3a3df427464b076b9499592564a3f1f3ef37e2f2b04b8: Status 404 returned error can't find the container with id 5b02705ba80988adfbf3a3df427464b076b9499592564a3f1f3ef37e2f2b04b8
Mar 19 12:34:18.420307 master-0 kubenswrapper[31830]: I0319 12:34:18.412752 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-5sd9s"]
Mar 19 12:34:18.756944 master-0 kubenswrapper[31830]: W0319 12:34:18.753238 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddafc683_0ab0_4152_af3b_5fd025697432.slice/crio-b3434ad48c9f23b718a27225de35ceb3933b00e3f359af5853661485b9a63f62 WatchSource:0}: Error finding container b3434ad48c9f23b718a27225de35ceb3933b00e3f359af5853661485b9a63f62: Status 404 returned error can't find the container with id b3434ad48c9f23b718a27225de35ceb3933b00e3f359af5853661485b9a63f62
Mar 19 12:34:18.756944 master-0 kubenswrapper[31830]: I0319 12:34:18.753257 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76f498f559-g9gf7"]
Mar 19 12:34:18.937417 master-0 kubenswrapper[31830]: I0319 12:34:18.937330 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-5sd9s" event={"ID":"02abb8d5-6e39-493e-bc9c-7bcd2f99b423","Type":"ContainerStarted","Data":"724366e0789ffacc27add17a59c5065c8cdf60df2582322c17ca0410143768af"}
Mar 19 12:34:18.937417 master-0 kubenswrapper[31830]: I0319 12:34:18.937402 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-5sd9s" event={"ID":"02abb8d5-6e39-493e-bc9c-7bcd2f99b423","Type":"ContainerStarted","Data":"3f1f295a1e095edfed42edff69956554e661c5362b0b7c7598fc4aae5cc0e3cc"}
Mar 19 12:34:18.938742 master-0 kubenswrapper[31830]: I0319 12:34:18.938711 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76f498f559-g9gf7" event={"ID":"ddafc683-0ab0-4152-af3b-5fd025697432","Type":"ContainerStarted","Data":"b3434ad48c9f23b718a27225de35ceb3933b00e3f359af5853661485b9a63f62"}
Mar 19 12:34:18.944679 master-0 kubenswrapper[31830]: I0319 12:34:18.944619 31830 generic.go:334] "Generic (PLEG): container finished" podID="9f5e003c-2a5f-4796-8c10-5f5492005f76" containerID="807693777aac1d8e7c5c6a435557a72a21f16ff50b302dbd285ca3648d506252" exitCode=0
Mar 19 12:34:18.944877 master-0 kubenswrapper[31830]: I0319 12:34:18.944807 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79d6ccc4b7-kj747" event={"ID":"9f5e003c-2a5f-4796-8c10-5f5492005f76","Type":"ContainerDied","Data":"807693777aac1d8e7c5c6a435557a72a21f16ff50b302dbd285ca3648d506252"}
Mar 19 12:34:18.944877 master-0 kubenswrapper[31830]: I0319 12:34:18.944848 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79d6ccc4b7-kj747" event={"ID":"9f5e003c-2a5f-4796-8c10-5f5492005f76","Type":"ContainerStarted","Data":"5b02705ba80988adfbf3a3df427464b076b9499592564a3f1f3ef37e2f2b04b8"}
Mar 19 12:34:18.990328 master-0 kubenswrapper[31830]: I0319 12:34:18.990254 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Mar 19 12:34:19.068720 master-0 kubenswrapper[31830]: I0319 12:34:19.064613 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-5sd9s" podStartSLOduration=2.0645844 podStartE2EDuration="2.0645844s" podCreationTimestamp="2026-03-19 12:34:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:34:19.04946108 +0000 UTC m=+1197.598421784" watchObservedRunningTime="2026-03-19 12:34:19.0645844 +0000 UTC m=+1197.613545104"
Mar 19 12:34:19.565586 master-0 kubenswrapper[31830]: I0319 12:34:19.565543 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79d6ccc4b7-kj747"
Mar 19 12:34:19.694886 master-0 kubenswrapper[31830]: I0319 12:34:19.682572 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cftnk\" (UniqueName: \"kubernetes.io/projected/9f5e003c-2a5f-4796-8c10-5f5492005f76-kube-api-access-cftnk\") pod \"9f5e003c-2a5f-4796-8c10-5f5492005f76\" (UID: \"9f5e003c-2a5f-4796-8c10-5f5492005f76\") "
Mar 19 12:34:19.694886 master-0 kubenswrapper[31830]: I0319 12:34:19.682676 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f5e003c-2a5f-4796-8c10-5f5492005f76-config\") pod \"9f5e003c-2a5f-4796-8c10-5f5492005f76\" (UID: \"9f5e003c-2a5f-4796-8c10-5f5492005f76\") "
Mar 19 12:34:19.694886 master-0 kubenswrapper[31830]: I0319 12:34:19.685874 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f5e003c-2a5f-4796-8c10-5f5492005f76-ovsdbserver-nb\") pod \"9f5e003c-2a5f-4796-8c10-5f5492005f76\" (UID: \"9f5e003c-2a5f-4796-8c10-5f5492005f76\") "
Mar 19 12:34:19.694886 master-0 kubenswrapper[31830]: I0319 12:34:19.685990 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f5e003c-2a5f-4796-8c10-5f5492005f76-dns-svc\") pod \"9f5e003c-2a5f-4796-8c10-5f5492005f76\" (UID: \"9f5e003c-2a5f-4796-8c10-5f5492005f76\") "
Mar 19 12:34:19.710593 master-0 kubenswrapper[31830]: I0319 12:34:19.710541 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f5e003c-2a5f-4796-8c10-5f5492005f76-kube-api-access-cftnk" (OuterVolumeSpecName: "kube-api-access-cftnk") pod "9f5e003c-2a5f-4796-8c10-5f5492005f76" (UID: "9f5e003c-2a5f-4796-8c10-5f5492005f76"). InnerVolumeSpecName "kube-api-access-cftnk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 12:34:19.750938 master-0 kubenswrapper[31830]: I0319 12:34:19.742077 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f5e003c-2a5f-4796-8c10-5f5492005f76-config" (OuterVolumeSpecName: "config") pod "9f5e003c-2a5f-4796-8c10-5f5492005f76" (UID: "9f5e003c-2a5f-4796-8c10-5f5492005f76"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 12:34:19.759660 master-0 kubenswrapper[31830]: I0319 12:34:19.759358 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f5e003c-2a5f-4796-8c10-5f5492005f76-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9f5e003c-2a5f-4796-8c10-5f5492005f76" (UID: "9f5e003c-2a5f-4796-8c10-5f5492005f76"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 12:34:19.791136 master-0 kubenswrapper[31830]: I0319 12:34:19.791059 31830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f5e003c-2a5f-4796-8c10-5f5492005f76-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 19 12:34:19.791136 master-0 kubenswrapper[31830]: I0319 12:34:19.791121 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cftnk\" (UniqueName: \"kubernetes.io/projected/9f5e003c-2a5f-4796-8c10-5f5492005f76-kube-api-access-cftnk\") on node \"master-0\" DevicePath \"\""
Mar 19 12:34:19.791136 master-0 kubenswrapper[31830]: I0319 12:34:19.791135 31830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f5e003c-2a5f-4796-8c10-5f5492005f76-config\") on node \"master-0\" DevicePath \"\""
Mar 19 12:34:19.819203 master-0 kubenswrapper[31830]: I0319 12:34:19.819163 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Mar 19 12:34:19.820317 master-0 kubenswrapper[31830]: E0319 12:34:19.820295 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f5e003c-2a5f-4796-8c10-5f5492005f76" containerName="init"
Mar 19 12:34:19.820317 master-0 kubenswrapper[31830]: I0319 12:34:19.820317 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f5e003c-2a5f-4796-8c10-5f5492005f76" containerName="init"
Mar 19 12:34:19.820612 master-0 kubenswrapper[31830]: I0319 12:34:19.820591 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f5e003c-2a5f-4796-8c10-5f5492005f76" containerName="init"
Mar 19 12:34:19.823085 master-0 kubenswrapper[31830]: I0319 12:34:19.823008 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Mar 19 12:34:19.823085 master-0 kubenswrapper[31830]: I0319 12:34:19.823050 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76f498f559-g9gf7"]
Mar 19 12:34:19.823459 master-0 kubenswrapper[31830]: I0319 12:34:19.823432 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Mar 19 12:34:19.828234 master-0 kubenswrapper[31830]: I0319 12:34:19.828172 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Mar 19 12:34:19.828538 master-0 kubenswrapper[31830]: I0319 12:34:19.828499 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Mar 19 12:34:19.828831 master-0 kubenswrapper[31830]: I0319 12:34:19.828640 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Mar 19 12:34:19.849014 master-0 kubenswrapper[31830]: I0319 12:34:19.845111 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf8b865dc-djbrh"]
Mar 19 12:34:19.849014 master-0 kubenswrapper[31830]: I0319 12:34:19.846730 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf8b865dc-djbrh"
Mar 19 12:34:19.849014 master-0 kubenswrapper[31830]: I0319 12:34:19.848050 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f5e003c-2a5f-4796-8c10-5f5492005f76-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9f5e003c-2a5f-4796-8c10-5f5492005f76" (UID: "9f5e003c-2a5f-4796-8c10-5f5492005f76"). InnerVolumeSpecName "ovsdbserver-nb".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:34:19.862876 master-0 kubenswrapper[31830]: I0319 12:34:19.862806 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf8b865dc-djbrh"] Mar 19 12:34:19.894022 master-0 kubenswrapper[31830]: I0319 12:34:19.892729 31830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f5e003c-2a5f-4796-8c10-5f5492005f76-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:19.986639 master-0 kubenswrapper[31830]: I0319 12:34:19.986584 31830 generic.go:334] "Generic (PLEG): container finished" podID="ddafc683-0ab0-4152-af3b-5fd025697432" containerID="6d3d638a629313eb4e87c898c8227079dd3fa5f60591384837cd5d7aaffbdc5c" exitCode=0 Mar 19 12:34:19.987150 master-0 kubenswrapper[31830]: I0319 12:34:19.986644 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76f498f559-g9gf7" event={"ID":"ddafc683-0ab0-4152-af3b-5fd025697432","Type":"ContainerDied","Data":"6d3d638a629313eb4e87c898c8227079dd3fa5f60591384837cd5d7aaffbdc5c"} Mar 19 12:34:19.994003 master-0 kubenswrapper[31830]: I0319 12:34:19.991915 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79d6ccc4b7-kj747" Mar 19 12:34:19.994003 master-0 kubenswrapper[31830]: I0319 12:34:19.992154 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79d6ccc4b7-kj747" event={"ID":"9f5e003c-2a5f-4796-8c10-5f5492005f76","Type":"ContainerDied","Data":"5b02705ba80988adfbf3a3df427464b076b9499592564a3f1f3ef37e2f2b04b8"} Mar 19 12:34:19.994003 master-0 kubenswrapper[31830]: I0319 12:34:19.992203 31830 scope.go:117] "RemoveContainer" containerID="807693777aac1d8e7c5c6a435557a72a21f16ff50b302dbd285ca3648d506252" Mar 19 12:34:19.994003 master-0 kubenswrapper[31830]: I0319 12:34:19.993631 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f\") " pod="openstack/ovn-northd-0" Mar 19 12:34:19.994003 master-0 kubenswrapper[31830]: I0319 12:34:19.993682 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f-scripts\") pod \"ovn-northd-0\" (UID: \"e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f\") " pod="openstack/ovn-northd-0" Mar 19 12:34:19.994003 master-0 kubenswrapper[31830]: I0319 12:34:19.993732 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a1d3222-9623-4753-9b0a-8d8da0fb3f1e-ovsdbserver-sb\") pod \"dnsmasq-dns-5bf8b865dc-djbrh\" (UID: \"3a1d3222-9623-4753-9b0a-8d8da0fb3f1e\") " pod="openstack/dnsmasq-dns-5bf8b865dc-djbrh" Mar 19 12:34:19.994003 master-0 kubenswrapper[31830]: I0319 12:34:19.993755 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a1d3222-9623-4753-9b0a-8d8da0fb3f1e-config\") pod \"dnsmasq-dns-5bf8b865dc-djbrh\" (UID: \"3a1d3222-9623-4753-9b0a-8d8da0fb3f1e\") " pod="openstack/dnsmasq-dns-5bf8b865dc-djbrh" Mar 19 12:34:19.994003 master-0 kubenswrapper[31830]: I0319 12:34:19.993786 31830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a1d3222-9623-4753-9b0a-8d8da0fb3f1e-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf8b865dc-djbrh\" (UID: \"3a1d3222-9623-4753-9b0a-8d8da0fb3f1e\") " pod="openstack/dnsmasq-dns-5bf8b865dc-djbrh" Mar 19 12:34:19.997509 master-0 kubenswrapper[31830]: I0319 12:34:19.996196 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a1d3222-9623-4753-9b0a-8d8da0fb3f1e-dns-svc\") pod \"dnsmasq-dns-5bf8b865dc-djbrh\" (UID: \"3a1d3222-9623-4753-9b0a-8d8da0fb3f1e\") " pod="openstack/dnsmasq-dns-5bf8b865dc-djbrh" Mar 19 12:34:19.997509 master-0 kubenswrapper[31830]: I0319 12:34:19.996237 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f\") " pod="openstack/ovn-northd-0" Mar 19 12:34:19.997509 master-0 kubenswrapper[31830]: I0319 12:34:19.996292 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f-config\") pod \"ovn-northd-0\" (UID: \"e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f\") " pod="openstack/ovn-northd-0" Mar 19 12:34:19.997509 master-0 kubenswrapper[31830]: I0319 12:34:19.996325 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7bmj\" (UniqueName: \"kubernetes.io/projected/3a1d3222-9623-4753-9b0a-8d8da0fb3f1e-kube-api-access-p7bmj\") pod \"dnsmasq-dns-5bf8b865dc-djbrh\" (UID: \"3a1d3222-9623-4753-9b0a-8d8da0fb3f1e\") " pod="openstack/dnsmasq-dns-5bf8b865dc-djbrh" Mar 19 12:34:19.997509 master-0 kubenswrapper[31830]: I0319 12:34:19.996376 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f\") " pod="openstack/ovn-northd-0" Mar 19 12:34:19.997509 master-0 kubenswrapper[31830]: I0319 12:34:19.996407 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f\") " pod="openstack/ovn-northd-0" Mar 19 12:34:19.997509 master-0 kubenswrapper[31830]: I0319 12:34:19.996434 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwb8q\" (UniqueName: \"kubernetes.io/projected/e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f-kube-api-access-zwb8q\") pod \"ovn-northd-0\" (UID: \"e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f\") " pod="openstack/ovn-northd-0" Mar 19 12:34:20.099989 master-0 kubenswrapper[31830]: I0319 12:34:20.099139 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a1d3222-9623-4753-9b0a-8d8da0fb3f1e-ovsdbserver-sb\") pod \"dnsmasq-dns-5bf8b865dc-djbrh\" (UID: \"3a1d3222-9623-4753-9b0a-8d8da0fb3f1e\") " pod="openstack/dnsmasq-dns-5bf8b865dc-djbrh" Mar 19 
12:34:20.100381 master-0 kubenswrapper[31830]: I0319 12:34:20.100344 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a1d3222-9623-4753-9b0a-8d8da0fb3f1e-config\") pod \"dnsmasq-dns-5bf8b865dc-djbrh\" (UID: \"3a1d3222-9623-4753-9b0a-8d8da0fb3f1e\") " pod="openstack/dnsmasq-dns-5bf8b865dc-djbrh" Mar 19 12:34:20.100549 master-0 kubenswrapper[31830]: I0319 12:34:20.100528 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a1d3222-9623-4753-9b0a-8d8da0fb3f1e-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf8b865dc-djbrh\" (UID: \"3a1d3222-9623-4753-9b0a-8d8da0fb3f1e\") " pod="openstack/dnsmasq-dns-5bf8b865dc-djbrh" Mar 19 12:34:20.100637 master-0 kubenswrapper[31830]: I0319 12:34:20.100625 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a1d3222-9623-4753-9b0a-8d8da0fb3f1e-dns-svc\") pod \"dnsmasq-dns-5bf8b865dc-djbrh\" (UID: \"3a1d3222-9623-4753-9b0a-8d8da0fb3f1e\") " pod="openstack/dnsmasq-dns-5bf8b865dc-djbrh" Mar 19 12:34:20.100723 master-0 kubenswrapper[31830]: I0319 12:34:20.100711 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f\") " pod="openstack/ovn-northd-0" Mar 19 12:34:20.100840 master-0 kubenswrapper[31830]: I0319 12:34:20.100826 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f-config\") pod \"ovn-northd-0\" (UID: \"e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f\") " pod="openstack/ovn-northd-0" Mar 19 12:34:20.101006 master-0 kubenswrapper[31830]: I0319 12:34:20.100992 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7bmj\" (UniqueName: \"kubernetes.io/projected/3a1d3222-9623-4753-9b0a-8d8da0fb3f1e-kube-api-access-p7bmj\") pod \"dnsmasq-dns-5bf8b865dc-djbrh\" (UID: \"3a1d3222-9623-4753-9b0a-8d8da0fb3f1e\") " pod="openstack/dnsmasq-dns-5bf8b865dc-djbrh" Mar 19 12:34:20.101124 master-0 kubenswrapper[31830]: I0319 12:34:20.101111 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f\") " pod="openstack/ovn-northd-0" Mar 19 12:34:20.101220 master-0 kubenswrapper[31830]: I0319 12:34:20.101204 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f\") " pod="openstack/ovn-northd-0" Mar 19 12:34:20.101311 master-0 kubenswrapper[31830]: I0319 12:34:20.101298 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwb8q\" (UniqueName: \"kubernetes.io/projected/e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f-kube-api-access-zwb8q\") pod \"ovn-northd-0\" (UID: \"e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f\") " pod="openstack/ovn-northd-0" Mar 19 12:34:20.101605 master-0 kubenswrapper[31830]: I0319 12:34:20.101585 31830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f\") " pod="openstack/ovn-northd-0" Mar 19 12:34:20.101789 master-0 kubenswrapper[31830]: I0319 12:34:20.101774 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f-scripts\") pod \"ovn-northd-0\" (UID: \"e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f\") " pod="openstack/ovn-northd-0" Mar 19 12:34:20.102174 master-0 kubenswrapper[31830]: I0319 12:34:20.102149 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a1d3222-9623-4753-9b0a-8d8da0fb3f1e-config\") pod \"dnsmasq-dns-5bf8b865dc-djbrh\" (UID: \"3a1d3222-9623-4753-9b0a-8d8da0fb3f1e\") " pod="openstack/dnsmasq-dns-5bf8b865dc-djbrh" Mar 19 12:34:20.106828 master-0 kubenswrapper[31830]: I0319 12:34:20.102944 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f-config\") pod \"ovn-northd-0\" (UID: \"e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f\") " pod="openstack/ovn-northd-0" Mar 19 12:34:20.106828 master-0 kubenswrapper[31830]: I0319 12:34:20.103638 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a1d3222-9623-4753-9b0a-8d8da0fb3f1e-dns-svc\") pod \"dnsmasq-dns-5bf8b865dc-djbrh\" (UID: \"3a1d3222-9623-4753-9b0a-8d8da0fb3f1e\") " pod="openstack/dnsmasq-dns-5bf8b865dc-djbrh" Mar 19 12:34:20.106828 master-0 kubenswrapper[31830]: I0319 12:34:20.105292 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f\") " pod="openstack/ovn-northd-0" Mar 19 12:34:20.106828 master-0 kubenswrapper[31830]: I0319 12:34:20.101334 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a1d3222-9623-4753-9b0a-8d8da0fb3f1e-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf8b865dc-djbrh\" (UID: \"3a1d3222-9623-4753-9b0a-8d8da0fb3f1e\") " pod="openstack/dnsmasq-dns-5bf8b865dc-djbrh" Mar 19 12:34:20.106828 master-0 kubenswrapper[31830]: I0319 12:34:20.106434 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f\") " pod="openstack/ovn-northd-0" Mar 19 12:34:20.106828 master-0 kubenswrapper[31830]: I0319 12:34:20.106468 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f\") " pod="openstack/ovn-northd-0" Mar 19 12:34:20.106828 master-0 kubenswrapper[31830]: I0319 12:34:20.100365 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a1d3222-9623-4753-9b0a-8d8da0fb3f1e-ovsdbserver-sb\") pod \"dnsmasq-dns-5bf8b865dc-djbrh\" (UID: 
\"3a1d3222-9623-4753-9b0a-8d8da0fb3f1e\") " pod="openstack/dnsmasq-dns-5bf8b865dc-djbrh" Mar 19 12:34:20.107156 master-0 kubenswrapper[31830]: I0319 12:34:20.107126 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f-scripts\") pod \"ovn-northd-0\" (UID: \"e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f\") " pod="openstack/ovn-northd-0" Mar 19 12:34:20.112169 master-0 kubenswrapper[31830]: I0319 12:34:20.109547 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f\") " pod="openstack/ovn-northd-0" Mar 19 12:34:20.124824 master-0 kubenswrapper[31830]: I0319 12:34:20.122884 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79d6ccc4b7-kj747"] Mar 19 12:34:20.124824 master-0 kubenswrapper[31830]: I0319 12:34:20.123104 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwb8q\" (UniqueName: \"kubernetes.io/projected/e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f-kube-api-access-zwb8q\") pod \"ovn-northd-0\" (UID: \"e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f\") " pod="openstack/ovn-northd-0" Mar 19 12:34:20.132821 master-0 kubenswrapper[31830]: I0319 12:34:20.125458 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79d6ccc4b7-kj747"] Mar 19 12:34:20.132821 master-0 kubenswrapper[31830]: I0319 12:34:20.127125 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7bmj\" (UniqueName: \"kubernetes.io/projected/3a1d3222-9623-4753-9b0a-8d8da0fb3f1e-kube-api-access-p7bmj\") pod \"dnsmasq-dns-5bf8b865dc-djbrh\" (UID: \"3a1d3222-9623-4753-9b0a-8d8da0fb3f1e\") " pod="openstack/dnsmasq-dns-5bf8b865dc-djbrh" Mar 19 12:34:20.197412 master-0 kubenswrapper[31830]: I0319 12:34:20.197284 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 19 12:34:20.294933 master-0 kubenswrapper[31830]: I0319 12:34:20.276369 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf8b865dc-djbrh" Mar 19 12:34:20.380172 master-0 kubenswrapper[31830]: E0319 12:34:20.365662 31830 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Mar 19 12:34:20.380172 master-0 kubenswrapper[31830]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/ddafc683-0ab0-4152-af3b-5fd025697432/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Mar 19 12:34:20.380172 master-0 kubenswrapper[31830]: > podSandboxID="b3434ad48c9f23b718a27225de35ceb3933b00e3f359af5853661485b9a63f62" Mar 19 12:34:20.380172 master-0 kubenswrapper[31830]: E0319 12:34:20.372445 31830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 19 12:34:20.380172 master-0 kubenswrapper[31830]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd9h6h86hd5h55dh57fh559h696hd7h57fhc9h55h554hb4h5b7h649h6ch5fbhcbh646hb7h67fh99h655hc6h584h648h5d6h54ch545h5c6h676q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ldhpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000800000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-76f498f559-g9gf7_openstack(ddafc683-0ab0-4152-af3b-5fd025697432): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/ddafc683-0ab0-4152-af3b-5fd025697432/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Mar 19 12:34:20.380172 master-0 kubenswrapper[31830]: > logger="UnhandledError" Mar 19 12:34:20.380172 master-0 kubenswrapper[31830]: E0319 12:34:20.373891 31830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/ddafc683-0ab0-4152-af3b-5fd025697432/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-76f498f559-g9gf7" podUID="ddafc683-0ab0-4152-af3b-5fd025697432" Mar 19 12:34:21.269253 master-0 kubenswrapper[31830]: I0319 12:34:21.269116 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 19 12:34:21.289421 master-0 kubenswrapper[31830]: W0319 12:34:21.279025 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode94c1fdb_20f2_4b64_b0c2_2ae1ef69f04f.slice/crio-bc1333277762411bd63b153cf13d07c833aa918e415e2513d53b256cb4df3f0c WatchSource:0}: Error finding container bc1333277762411bd63b153cf13d07c833aa918e415e2513d53b256cb4df3f0c: Status 404 returned error can't find the container with id bc1333277762411bd63b153cf13d07c833aa918e415e2513d53b256cb4df3f0c Mar 19 12:34:21.328118 master-0 kubenswrapper[31830]: I0319 12:34:21.325909 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf8b865dc-djbrh"] Mar 19 12:34:21.339552 master-0 kubenswrapper[31830]: W0319 12:34:21.337478 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a1d3222_9623_4753_9b0a_8d8da0fb3f1e.slice/crio-95227b0d2a22e3ef7329b56298d0f0cc684d5ff3fe2d84c033813e6752be83a9 WatchSource:0}: Error finding container 95227b0d2a22e3ef7329b56298d0f0cc684d5ff3fe2d84c033813e6752be83a9: Status 404 returned error can't find the container with id 95227b0d2a22e3ef7329b56298d0f0cc684d5ff3fe2d84c033813e6752be83a9 Mar 19 12:34:21.472567 master-0 kubenswrapper[31830]: I0319 12:34:21.472499 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76f498f559-g9gf7" Mar 19 12:34:21.585337 master-0 kubenswrapper[31830]: I0319 12:34:21.585274 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ddafc683-0ab0-4152-af3b-5fd025697432-ovsdbserver-sb\") pod \"ddafc683-0ab0-4152-af3b-5fd025697432\" (UID: \"ddafc683-0ab0-4152-af3b-5fd025697432\") " Mar 19 12:34:21.585536 master-0 kubenswrapper[31830]: I0319 12:34:21.585385 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ddafc683-0ab0-4152-af3b-5fd025697432-dns-svc\") pod \"ddafc683-0ab0-4152-af3b-5fd025697432\" (UID: \"ddafc683-0ab0-4152-af3b-5fd025697432\") " Mar 19 12:34:21.585536 master-0 kubenswrapper[31830]: I0319 12:34:21.585441 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddafc683-0ab0-4152-af3b-5fd025697432-config\") pod \"ddafc683-0ab0-4152-af3b-5fd025697432\" (UID: \"ddafc683-0ab0-4152-af3b-5fd025697432\") " Mar 19 12:34:21.585536 master-0 kubenswrapper[31830]: I0319 12:34:21.585504 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldhpj\" (UniqueName: \"kubernetes.io/projected/ddafc683-0ab0-4152-af3b-5fd025697432-kube-api-access-ldhpj\") pod \"ddafc683-0ab0-4152-af3b-5fd025697432\" (UID: \"ddafc683-0ab0-4152-af3b-5fd025697432\") " Mar 19 12:34:21.585649 master-0 kubenswrapper[31830]: I0319 12:34:21.585563 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ddafc683-0ab0-4152-af3b-5fd025697432-ovsdbserver-nb\") pod \"ddafc683-0ab0-4152-af3b-5fd025697432\" (UID: \"ddafc683-0ab0-4152-af3b-5fd025697432\") " Mar 19 12:34:21.588997 master-0 kubenswrapper[31830]: I0319 12:34:21.588970 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddafc683-0ab0-4152-af3b-5fd025697432-kube-api-access-ldhpj" (OuterVolumeSpecName: "kube-api-access-ldhpj") pod "ddafc683-0ab0-4152-af3b-5fd025697432" (UID: "ddafc683-0ab0-4152-af3b-5fd025697432"). InnerVolumeSpecName "kube-api-access-ldhpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:34:21.627679 master-0 kubenswrapper[31830]: I0319 12:34:21.627637 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddafc683-0ab0-4152-af3b-5fd025697432-config" (OuterVolumeSpecName: "config") pod "ddafc683-0ab0-4152-af3b-5fd025697432" (UID: "ddafc683-0ab0-4152-af3b-5fd025697432"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:34:21.630907 master-0 kubenswrapper[31830]: I0319 12:34:21.630847 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddafc683-0ab0-4152-af3b-5fd025697432-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ddafc683-0ab0-4152-af3b-5fd025697432" (UID: "ddafc683-0ab0-4152-af3b-5fd025697432"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:34:21.638537 master-0 kubenswrapper[31830]: I0319 12:34:21.638482 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddafc683-0ab0-4152-af3b-5fd025697432-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ddafc683-0ab0-4152-af3b-5fd025697432" (UID: "ddafc683-0ab0-4152-af3b-5fd025697432"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:34:21.638713 master-0 kubenswrapper[31830]: I0319 12:34:21.638541 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddafc683-0ab0-4152-af3b-5fd025697432-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ddafc683-0ab0-4152-af3b-5fd025697432" (UID: "ddafc683-0ab0-4152-af3b-5fd025697432"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:34:21.688955 master-0 kubenswrapper[31830]: I0319 12:34:21.688714 31830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ddafc683-0ab0-4152-af3b-5fd025697432-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:21.688955 master-0 kubenswrapper[31830]: I0319 12:34:21.688756 31830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ddafc683-0ab0-4152-af3b-5fd025697432-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:21.688955 master-0 kubenswrapper[31830]: I0319 12:34:21.688765 31830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddafc683-0ab0-4152-af3b-5fd025697432-config\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:21.688955 master-0 kubenswrapper[31830]: I0319 12:34:21.688774 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldhpj\" (UniqueName: \"kubernetes.io/projected/ddafc683-0ab0-4152-af3b-5fd025697432-kube-api-access-ldhpj\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:21.688955 master-0 kubenswrapper[31830]: I0319 12:34:21.688788 31830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ddafc683-0ab0-4152-af3b-5fd025697432-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:21.698637 master-0 kubenswrapper[31830]: I0319 12:34:21.696980 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f5e003c-2a5f-4796-8c10-5f5492005f76" path="/var/lib/kubelet/pods/9f5e003c-2a5f-4796-8c10-5f5492005f76/volumes" Mar 19 12:34:21.760358 master-0 kubenswrapper[31830]: I0319 12:34:21.760161 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Mar 19 12:34:21.761001 master-0 kubenswrapper[31830]: E0319 12:34:21.760662 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddafc683-0ab0-4152-af3b-5fd025697432" containerName="init" Mar 19 12:34:21.761001 master-0 kubenswrapper[31830]: I0319 12:34:21.760680 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddafc683-0ab0-4152-af3b-5fd025697432" containerName="init" Mar 19 12:34:21.761001 master-0 kubenswrapper[31830]: I0319 12:34:21.760968 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddafc683-0ab0-4152-af3b-5fd025697432" containerName="init" Mar 19 12:34:21.769966 master-0 kubenswrapper[31830]: I0319 12:34:21.767950 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 19 12:34:21.771047 master-0 kubenswrapper[31830]: I0319 12:34:21.770978 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Mar 19 12:34:21.771296 master-0 kubenswrapper[31830]: I0319 12:34:21.771271 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Mar 19 12:34:21.771526 master-0 kubenswrapper[31830]: I0319 12:34:21.771500 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Mar 19 12:34:21.964840 master-0 kubenswrapper[31830]: I0319 12:34:21.964696 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 19 12:34:21.999931 master-0 kubenswrapper[31830]: I0319 12:34:21.999628 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1cbf9dcc-3b56-42d8-ab7d-a3e4dfd5adb4\" (UniqueName: \"kubernetes.io/csi/topolvm.io^96044017-7c55-41ab-b1e5-f63573290f61\") pod \"swift-storage-0\" (UID: \"736d878b-1328-4a36-873f-62849c4e2d07\") " pod="openstack/swift-storage-0" Mar 19 12:34:21.999931 master-0 kubenswrapper[31830]: I0319 12:34:21.999702 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/736d878b-1328-4a36-873f-62849c4e2d07-lock\") pod \"swift-storage-0\" (UID: \"736d878b-1328-4a36-873f-62849c4e2d07\") " pod="openstack/swift-storage-0" Mar 19 12:34:21.999931 master-0 kubenswrapper[31830]: I0319 12:34:21.999765 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/736d878b-1328-4a36-873f-62849c4e2d07-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"736d878b-1328-4a36-873f-62849c4e2d07\") " pod="openstack/swift-storage-0" Mar 19 12:34:21.999931 master-0 kubenswrapper[31830]: I0319 12:34:21.999863 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s27bn\" (UniqueName: \"kubernetes.io/projected/736d878b-1328-4a36-873f-62849c4e2d07-kube-api-access-s27bn\") pod \"swift-storage-0\" (UID: \"736d878b-1328-4a36-873f-62849c4e2d07\") " pod="openstack/swift-storage-0" Mar 19 12:34:21.999931 master-0 kubenswrapper[31830]: I0319 12:34:21.999899 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/736d878b-1328-4a36-873f-62849c4e2d07-cache\") pod \"swift-storage-0\" (UID: \"736d878b-1328-4a36-873f-62849c4e2d07\") " pod="openstack/swift-storage-0" Mar 19 12:34:22.000262 master-0 kubenswrapper[31830]: I0319 12:34:21.999945 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/736d878b-1328-4a36-873f-62849c4e2d07-etc-swift\") pod \"swift-storage-0\" (UID: \"736d878b-1328-4a36-873f-62849c4e2d07\") " pod="openstack/swift-storage-0" Mar 19 12:34:22.037021 master-0 kubenswrapper[31830]: I0319 12:34:22.036973 31830 generic.go:334] "Generic (PLEG): container finished" podID="3a1d3222-9623-4753-9b0a-8d8da0fb3f1e" containerID="e2fa967f9394d2ce53ecc113fe6318ff52b0eb640b62d416057bdd07053453bb" exitCode=0 Mar 19 12:34:22.037205 master-0 kubenswrapper[31830]: I0319 12:34:22.037002 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5bf8b865dc-djbrh" event={"ID":"3a1d3222-9623-4753-9b0a-8d8da0fb3f1e","Type":"ContainerDied","Data":"e2fa967f9394d2ce53ecc113fe6318ff52b0eb640b62d416057bdd07053453bb"} Mar 19 12:34:22.037205 master-0 kubenswrapper[31830]: I0319 12:34:22.037080 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf8b865dc-djbrh" event={"ID":"3a1d3222-9623-4753-9b0a-8d8da0fb3f1e","Type":"ContainerStarted","Data":"95227b0d2a22e3ef7329b56298d0f0cc684d5ff3fe2d84c033813e6752be83a9"} Mar 19 12:34:22.038645 master-0 kubenswrapper[31830]: I0319 12:34:22.038337 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f","Type":"ContainerStarted","Data":"bc1333277762411bd63b153cf13d07c833aa918e415e2513d53b256cb4df3f0c"} Mar 19 12:34:22.040128 master-0 kubenswrapper[31830]: I0319 12:34:22.040091 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76f498f559-g9gf7" event={"ID":"ddafc683-0ab0-4152-af3b-5fd025697432","Type":"ContainerDied","Data":"b3434ad48c9f23b718a27225de35ceb3933b00e3f359af5853661485b9a63f62"} Mar 19 12:34:22.040212 master-0 kubenswrapper[31830]: I0319 12:34:22.040134 31830 scope.go:117] "RemoveContainer" containerID="6d3d638a629313eb4e87c898c8227079dd3fa5f60591384837cd5d7aaffbdc5c" Mar 19 12:34:22.040276 master-0 kubenswrapper[31830]: I0319 12:34:22.040254 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76f498f559-g9gf7" Mar 19 12:34:22.102110 master-0 kubenswrapper[31830]: I0319 12:34:22.102005 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1cbf9dcc-3b56-42d8-ab7d-a3e4dfd5adb4\" (UniqueName: \"kubernetes.io/csi/topolvm.io^96044017-7c55-41ab-b1e5-f63573290f61\") pod \"swift-storage-0\" (UID: \"736d878b-1328-4a36-873f-62849c4e2d07\") " pod="openstack/swift-storage-0" Mar 19 12:34:22.102623 master-0 kubenswrapper[31830]: I0319 12:34:22.102595 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/736d878b-1328-4a36-873f-62849c4e2d07-lock\") pod \"swift-storage-0\" (UID: \"736d878b-1328-4a36-873f-62849c4e2d07\") " pod="openstack/swift-storage-0" Mar 19 12:34:22.102694 master-0 kubenswrapper[31830]: I0319 12:34:22.102656 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/736d878b-1328-4a36-873f-62849c4e2d07-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"736d878b-1328-4a36-873f-62849c4e2d07\") " pod="openstack/swift-storage-0" Mar 19 12:34:22.102771 master-0 kubenswrapper[31830]: I0319 12:34:22.102728 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s27bn\" (UniqueName: \"kubernetes.io/projected/736d878b-1328-4a36-873f-62849c4e2d07-kube-api-access-s27bn\") pod \"swift-storage-0\" (UID: \"736d878b-1328-4a36-873f-62849c4e2d07\") " pod="openstack/swift-storage-0" Mar 19 12:34:22.102875 master-0 kubenswrapper[31830]: I0319 12:34:22.102838 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/736d878b-1328-4a36-873f-62849c4e2d07-cache\") pod \"swift-storage-0\" (UID: \"736d878b-1328-4a36-873f-62849c4e2d07\") " pod="openstack/swift-storage-0" Mar 19 12:34:22.102950 master-0 kubenswrapper[31830]: I0319 12:34:22.102887 31830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/736d878b-1328-4a36-873f-62849c4e2d07-etc-swift\") pod \"swift-storage-0\" (UID: \"736d878b-1328-4a36-873f-62849c4e2d07\") " pod="openstack/swift-storage-0" Mar 19 12:34:22.103065 master-0 kubenswrapper[31830]: E0319 12:34:22.103031 31830 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 19 12:34:22.103065 master-0 kubenswrapper[31830]: E0319 12:34:22.103059 31830 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 19 12:34:22.103188 master-0 kubenswrapper[31830]: E0319 12:34:22.103109 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/736d878b-1328-4a36-873f-62849c4e2d07-etc-swift podName:736d878b-1328-4a36-873f-62849c4e2d07 nodeName:}" failed. No retries permitted until 2026-03-19 12:34:22.603092763 +0000 UTC m=+1201.152053467 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/736d878b-1328-4a36-873f-62849c4e2d07-etc-swift") pod "swift-storage-0" (UID: "736d878b-1328-4a36-873f-62849c4e2d07") : configmap "swift-ring-files" not found Mar 19 12:34:22.104859 master-0 kubenswrapper[31830]: I0319 12:34:22.104810 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/736d878b-1328-4a36-873f-62849c4e2d07-lock\") pod \"swift-storage-0\" (UID: \"736d878b-1328-4a36-873f-62849c4e2d07\") " pod="openstack/swift-storage-0" Mar 19 12:34:22.104994 master-0 kubenswrapper[31830]: I0319 12:34:22.104973 31830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 19 12:34:22.105063 master-0 kubenswrapper[31830]: I0319 12:34:22.105003 31830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1cbf9dcc-3b56-42d8-ab7d-a3e4dfd5adb4\" (UniqueName: \"kubernetes.io/csi/topolvm.io^96044017-7c55-41ab-b1e5-f63573290f61\") pod \"swift-storage-0\" (UID: \"736d878b-1328-4a36-873f-62849c4e2d07\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/c79d4b26aab9cbe261a9081ab0c46c904f2c5c936911d726c5cf19b856186f55/globalmount\"" pod="openstack/swift-storage-0" Mar 19 12:34:22.105063 master-0 kubenswrapper[31830]: I0319 12:34:22.105011 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/736d878b-1328-4a36-873f-62849c4e2d07-cache\") pod \"swift-storage-0\" (UID: \"736d878b-1328-4a36-873f-62849c4e2d07\") " pod="openstack/swift-storage-0" Mar 19 12:34:22.108452 master-0 kubenswrapper[31830]: I0319 12:34:22.108408 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/736d878b-1328-4a36-873f-62849c4e2d07-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"736d878b-1328-4a36-873f-62849c4e2d07\") " pod="openstack/swift-storage-0" Mar 19 12:34:22.155598 master-0 kubenswrapper[31830]: I0319 12:34:22.155533 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s27bn\" (UniqueName: \"kubernetes.io/projected/736d878b-1328-4a36-873f-62849c4e2d07-kube-api-access-s27bn\") pod \"swift-storage-0\" (UID: \"736d878b-1328-4a36-873f-62849c4e2d07\") " pod="openstack/swift-storage-0" Mar 19 12:34:22.612824 master-0 kubenswrapper[31830]: I0319 12:34:22.612642 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/736d878b-1328-4a36-873f-62849c4e2d07-etc-swift\") pod \"swift-storage-0\" (UID: \"736d878b-1328-4a36-873f-62849c4e2d07\") " pod="openstack/swift-storage-0" Mar 19 12:34:22.613366 master-0 kubenswrapper[31830]: E0319 12:34:22.613016 31830 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 19 12:34:22.613366 master-0 kubenswrapper[31830]: E0319 12:34:22.613063 31830 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 19 12:34:22.613366 master-0 kubenswrapper[31830]: E0319 12:34:22.613171 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/736d878b-1328-4a36-873f-62849c4e2d07-etc-swift podName:736d878b-1328-4a36-873f-62849c4e2d07 nodeName:}" failed. No retries permitted until 2026-03-19 12:34:23.61314676 +0000 UTC m=+1202.162107464 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/736d878b-1328-4a36-873f-62849c4e2d07-etc-swift") pod "swift-storage-0" (UID: "736d878b-1328-4a36-873f-62849c4e2d07") : configmap "swift-ring-files" not found Mar 19 12:34:22.773293 master-0 kubenswrapper[31830]: I0319 12:34:22.770313 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76f498f559-g9gf7"] Mar 19 12:34:22.788595 master-0 kubenswrapper[31830]: I0319 12:34:22.788437 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76f498f559-g9gf7"] Mar 19 12:34:23.052602 master-0 kubenswrapper[31830]: I0319 12:34:23.052491 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf8b865dc-djbrh" event={"ID":"3a1d3222-9623-4753-9b0a-8d8da0fb3f1e","Type":"ContainerStarted","Data":"1bc4c0b6899dd54cdca3fe24d0eed96201228c3fc591b3ae37d2ea67cd328dd0"} Mar 19 12:34:23.053112 master-0 kubenswrapper[31830]: I0319 12:34:23.053054 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf8b865dc-djbrh" Mar 19 12:34:23.308655 master-0 kubenswrapper[31830]: I0319 12:34:23.308134 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 19 12:34:23.311259 master-0 kubenswrapper[31830]: I0319 12:34:23.309718 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 19 12:34:23.393403 master-0 kubenswrapper[31830]: I0319 12:34:23.393343 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 19 12:34:23.638112 master-0 kubenswrapper[31830]: I0319 12:34:23.636233 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/736d878b-1328-4a36-873f-62849c4e2d07-etc-swift\") pod \"swift-storage-0\" (UID: \"736d878b-1328-4a36-873f-62849c4e2d07\") " pod="openstack/swift-storage-0" Mar 19 12:34:23.638112 master-0 kubenswrapper[31830]: E0319 12:34:23.636461 31830 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 19 12:34:23.638112 master-0 kubenswrapper[31830]: E0319 12:34:23.636475 31830 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 19 12:34:23.638112 master-0 kubenswrapper[31830]: E0319 12:34:23.636521 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/736d878b-1328-4a36-873f-62849c4e2d07-etc-swift podName:736d878b-1328-4a36-873f-62849c4e2d07 nodeName:}" failed. No retries permitted until 2026-03-19 12:34:25.636504847 +0000 UTC m=+1204.185465551 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/736d878b-1328-4a36-873f-62849c4e2d07-etc-swift") pod "swift-storage-0" (UID: "736d878b-1328-4a36-873f-62849c4e2d07") : configmap "swift-ring-files" not found Mar 19 12:34:23.693305 master-0 kubenswrapper[31830]: I0319 12:34:23.693226 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddafc683-0ab0-4152-af3b-5fd025697432" path="/var/lib/kubelet/pods/ddafc683-0ab0-4152-af3b-5fd025697432/volumes" Mar 19 12:34:23.817519 master-0 kubenswrapper[31830]: I0319 12:34:23.817412 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf8b865dc-djbrh" podStartSLOduration=4.817392097 podStartE2EDuration="4.817392097s" podCreationTimestamp="2026-03-19 12:34:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:34:23.612033359 +0000 UTC m=+1202.160994063" watchObservedRunningTime="2026-03-19 12:34:23.817392097 +0000 UTC m=+1202.366352801" Mar 19 12:34:24.188627 master-0 kubenswrapper[31830]: I0319 12:34:24.188568 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Mar 19 12:34:24.212593 master-0 kubenswrapper[31830]: I0319 12:34:24.211269 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1cbf9dcc-3b56-42d8-ab7d-a3e4dfd5adb4\" (UniqueName: \"kubernetes.io/csi/topolvm.io^96044017-7c55-41ab-b1e5-f63573290f61\") pod \"swift-storage-0\" (UID: \"736d878b-1328-4a36-873f-62849c4e2d07\") " pod="openstack/swift-storage-0" Mar 19 12:34:24.406468 master-0 kubenswrapper[31830]: I0319 12:34:24.406401 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 19 12:34:24.406468 master-0 kubenswrapper[31830]: I0319 12:34:24.406463 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 19 12:34:24.498711 master-0 kubenswrapper[31830]: I0319 12:34:24.498632 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Mar 19 12:34:24.610883 master-0 kubenswrapper[31830]: I0319 12:34:24.610814 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-cm2zc"] Mar 19 12:34:24.615530 master-0 kubenswrapper[31830]: I0319 12:34:24.615478 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-cm2zc" Mar 19 12:34:24.620830 master-0 kubenswrapper[31830]: I0319 12:34:24.619004 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 19 12:34:24.620830 master-0 kubenswrapper[31830]: I0319 12:34:24.619575 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Mar 19 12:34:24.620830 master-0 kubenswrapper[31830]: I0319 12:34:24.619589 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Mar 19 12:34:24.640057 master-0 kubenswrapper[31830]: I0319 12:34:24.639992 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-cm2zc"] Mar 19 12:34:24.777256 master-0 kubenswrapper[31830]: I0319 12:34:24.776973 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/512e045f-7b25-4992-a593-227de5818bb3-etc-swift\") pod \"swift-ring-rebalance-cm2zc\" (UID: \"512e045f-7b25-4992-a593-227de5818bb3\") " pod="openstack/swift-ring-rebalance-cm2zc" Mar 19 12:34:24.777256 master-0 kubenswrapper[31830]: I0319 12:34:24.777103 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/512e045f-7b25-4992-a593-227de5818bb3-swiftconf\") pod \"swift-ring-rebalance-cm2zc\" (UID: \"512e045f-7b25-4992-a593-227de5818bb3\") " pod="openstack/swift-ring-rebalance-cm2zc" Mar 19 12:34:24.777465 master-0 kubenswrapper[31830]: I0319 12:34:24.777432 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cttl7\" (UniqueName: \"kubernetes.io/projected/512e045f-7b25-4992-a593-227de5818bb3-kube-api-access-cttl7\") pod \"swift-ring-rebalance-cm2zc\" (UID: \"512e045f-7b25-4992-a593-227de5818bb3\") " pod="openstack/swift-ring-rebalance-cm2zc" Mar 19 12:34:24.777627 master-0 kubenswrapper[31830]: I0319 12:34:24.777589 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/512e045f-7b25-4992-a593-227de5818bb3-combined-ca-bundle\") pod \"swift-ring-rebalance-cm2zc\" (UID: \"512e045f-7b25-4992-a593-227de5818bb3\") " pod="openstack/swift-ring-rebalance-cm2zc" Mar 19 12:34:24.777751 master-0 kubenswrapper[31830]: I0319 12:34:24.777636 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/512e045f-7b25-4992-a593-227de5818bb3-scripts\") pod \"swift-ring-rebalance-cm2zc\" (UID: \"512e045f-7b25-4992-a593-227de5818bb3\") " pod="openstack/swift-ring-rebalance-cm2zc" Mar 19 12:34:24.777784 master-0 kubenswrapper[31830]: I0319 12:34:24.777769 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/512e045f-7b25-4992-a593-227de5818bb3-ring-data-devices\") pod \"swift-ring-rebalance-cm2zc\" (UID: \"512e045f-7b25-4992-a593-227de5818bb3\") " pod="openstack/swift-ring-rebalance-cm2zc" Mar 19 12:34:24.778207 master-0 kubenswrapper[31830]: I0319 12:34:24.777892 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/512e045f-7b25-4992-a593-227de5818bb3-dispersionconf\") pod \"swift-ring-rebalance-cm2zc\" (UID: \"512e045f-7b25-4992-a593-227de5818bb3\") " pod="openstack/swift-ring-rebalance-cm2zc" Mar 19 12:34:24.880230 master-0 kubenswrapper[31830]: I0319 12:34:24.880155 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/512e045f-7b25-4992-a593-227de5818bb3-dispersionconf\") pod \"swift-ring-rebalance-cm2zc\" (UID: \"512e045f-7b25-4992-a593-227de5818bb3\") " pod="openstack/swift-ring-rebalance-cm2zc" Mar 19 12:34:24.880439 master-0 kubenswrapper[31830]: I0319 12:34:24.880285 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/512e045f-7b25-4992-a593-227de5818bb3-etc-swift\") pod \"swift-ring-rebalance-cm2zc\" (UID: \"512e045f-7b25-4992-a593-227de5818bb3\") " pod="openstack/swift-ring-rebalance-cm2zc" Mar 19 12:34:24.880439 master-0 kubenswrapper[31830]: I0319 12:34:24.880304 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/512e045f-7b25-4992-a593-227de5818bb3-swiftconf\") pod \"swift-ring-rebalance-cm2zc\" (UID: \"512e045f-7b25-4992-a593-227de5818bb3\") " pod="openstack/swift-ring-rebalance-cm2zc" Mar 19 12:34:24.880815 master-0 kubenswrapper[31830]: I0319 12:34:24.880763 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/512e045f-7b25-4992-a593-227de5818bb3-etc-swift\") pod \"swift-ring-rebalance-cm2zc\" (UID: \"512e045f-7b25-4992-a593-227de5818bb3\") " pod="openstack/swift-ring-rebalance-cm2zc" Mar 19 12:34:24.880933 master-0 kubenswrapper[31830]: I0319 12:34:24.880909 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cttl7\" (UniqueName: \"kubernetes.io/projected/512e045f-7b25-4992-a593-227de5818bb3-kube-api-access-cttl7\") pod \"swift-ring-rebalance-cm2zc\" (UID: \"512e045f-7b25-4992-a593-227de5818bb3\") " pod="openstack/swift-ring-rebalance-cm2zc" Mar 19 12:34:24.881321 master-0 kubenswrapper[31830]: I0319 12:34:24.881286 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/512e045f-7b25-4992-a593-227de5818bb3-combined-ca-bundle\") pod \"swift-ring-rebalance-cm2zc\" (UID: \"512e045f-7b25-4992-a593-227de5818bb3\") " pod="openstack/swift-ring-rebalance-cm2zc" Mar 19 12:34:24.881321 master-0 kubenswrapper[31830]: I0319 12:34:24.881312 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/512e045f-7b25-4992-a593-227de5818bb3-scripts\") pod \"swift-ring-rebalance-cm2zc\" (UID: \"512e045f-7b25-4992-a593-227de5818bb3\") " pod="openstack/swift-ring-rebalance-cm2zc" Mar 19 12:34:24.881438 master-0 kubenswrapper[31830]: I0319 12:34:24.881365 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/512e045f-7b25-4992-a593-227de5818bb3-ring-data-devices\") pod \"swift-ring-rebalance-cm2zc\" (UID: \"512e045f-7b25-4992-a593-227de5818bb3\") " pod="openstack/swift-ring-rebalance-cm2zc" Mar 19 12:34:24.882159 master-0 kubenswrapper[31830]: I0319 12:34:24.882122 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/512e045f-7b25-4992-a593-227de5818bb3-scripts\") pod \"swift-ring-rebalance-cm2zc\" (UID: \"512e045f-7b25-4992-a593-227de5818bb3\") " pod="openstack/swift-ring-rebalance-cm2zc" Mar 19 12:34:24.882984 master-0 kubenswrapper[31830]: I0319 12:34:24.882957 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/512e045f-7b25-4992-a593-227de5818bb3-ring-data-devices\") pod \"swift-ring-rebalance-cm2zc\" (UID: \"512e045f-7b25-4992-a593-227de5818bb3\") " pod="openstack/swift-ring-rebalance-cm2zc" Mar 19 12:34:24.884886 master-0 kubenswrapper[31830]: I0319 12:34:24.884259 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/512e045f-7b25-4992-a593-227de5818bb3-dispersionconf\") pod \"swift-ring-rebalance-cm2zc\" (UID: \"512e045f-7b25-4992-a593-227de5818bb3\") " pod="openstack/swift-ring-rebalance-cm2zc" Mar 19 12:34:24.884985 master-0 kubenswrapper[31830]: I0319 12:34:24.884942 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/512e045f-7b25-4992-a593-227de5818bb3-combined-ca-bundle\") pod \"swift-ring-rebalance-cm2zc\" (UID: \"512e045f-7b25-4992-a593-227de5818bb3\") " pod="openstack/swift-ring-rebalance-cm2zc" Mar 19 12:34:24.885472 master-0 kubenswrapper[31830]: I0319 12:34:24.885427 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/512e045f-7b25-4992-a593-227de5818bb3-swiftconf\") pod \"swift-ring-rebalance-cm2zc\" (UID: \"512e045f-7b25-4992-a593-227de5818bb3\") " pod="openstack/swift-ring-rebalance-cm2zc" Mar 19 12:34:24.902862 master-0 kubenswrapper[31830]: I0319 12:34:24.898499 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cttl7\" (UniqueName: \"kubernetes.io/projected/512e045f-7b25-4992-a593-227de5818bb3-kube-api-access-cttl7\") pod \"swift-ring-rebalance-cm2zc\" (UID: \"512e045f-7b25-4992-a593-227de5818bb3\") " pod="openstack/swift-ring-rebalance-cm2zc" Mar 19 12:34:24.936095 master-0 kubenswrapper[31830]: I0319 12:34:24.935893 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-cm2zc" Mar 19 12:34:25.097822 master-0 kubenswrapper[31830]: I0319 12:34:25.097768 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f","Type":"ContainerStarted","Data":"85ff02501bbafa5f2cf2381bcb429a694a050d6d944a5e6c7193962377d51d31"} Mar 19 12:34:25.098061 master-0 kubenswrapper[31830]: I0319 12:34:25.098026 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f","Type":"ContainerStarted","Data":"97341d1a51f01c4659c50424d698bf6b2ec1c6f1cbf1dbc7a217f145ba1c85cc"} Mar 19 12:34:25.098906 master-0 kubenswrapper[31830]: I0319 12:34:25.098886 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Mar 19 12:34:25.131319 master-0 kubenswrapper[31830]: I0319 12:34:25.131257 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.517564845 podStartE2EDuration="6.131239473s" podCreationTimestamp="2026-03-19 12:34:19 +0000 UTC" firstStartedPulling="2026-03-19 12:34:21.280921074 +0000 UTC m=+1199.829881778" lastFinishedPulling="2026-03-19 12:34:23.894595702 +0000 UTC m=+1202.443556406" observedRunningTime="2026-03-19 12:34:25.119493138 +0000 UTC m=+1203.668453842" watchObservedRunningTime="2026-03-19 12:34:25.131239473 +0000 UTC m=+1203.680200167" Mar 19 12:34:25.207903 master-0 kubenswrapper[31830]: I0319 12:34:25.206988 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 19 12:34:25.425254 master-0 kubenswrapper[31830]: W0319 12:34:25.425182 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod512e045f_7b25_4992_a593_227de5818bb3.slice/crio-2f5956ceddcf4ec856181bbd854283b35b9f252f320130100ecd38d01bb33c84 WatchSource:0}: Error finding container 2f5956ceddcf4ec856181bbd854283b35b9f252f320130100ecd38d01bb33c84: Status 404 returned error can't find the container with id 2f5956ceddcf4ec856181bbd854283b35b9f252f320130100ecd38d01bb33c84 Mar 19 12:34:25.427036 master-0 kubenswrapper[31830]: I0319 12:34:25.426985 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-cm2zc"] Mar 19 12:34:25.708102 master-0 kubenswrapper[31830]: I0319 12:34:25.707602 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/736d878b-1328-4a36-873f-62849c4e2d07-etc-swift\") pod \"swift-storage-0\" (UID: \"736d878b-1328-4a36-873f-62849c4e2d07\") " pod="openstack/swift-storage-0" Mar 19 12:34:25.708102 master-0 kubenswrapper[31830]: E0319 12:34:25.707864 31830 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 19 12:34:25.708102 master-0 kubenswrapper[31830]: E0319 12:34:25.707904 31830 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 19 12:34:25.708102 master-0 kubenswrapper[31830]: E0319 12:34:25.707973 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/736d878b-1328-4a36-873f-62849c4e2d07-etc-swift podName:736d878b-1328-4a36-873f-62849c4e2d07 nodeName:}" failed. 
No retries permitted until 2026-03-19 12:34:29.707951198 +0000 UTC m=+1208.256911902 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/736d878b-1328-4a36-873f-62849c4e2d07-etc-swift") pod "swift-storage-0" (UID: "736d878b-1328-4a36-873f-62849c4e2d07") : configmap "swift-ring-files" not found Mar 19 12:34:26.104961 master-0 kubenswrapper[31830]: I0319 12:34:26.104905 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cm2zc" event={"ID":"512e045f-7b25-4992-a593-227de5818bb3","Type":"ContainerStarted","Data":"2f5956ceddcf4ec856181bbd854283b35b9f252f320130100ecd38d01bb33c84"} Mar 19 12:34:28.158779 master-0 kubenswrapper[31830]: I0319 12:34:28.158698 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-a5bb-account-create-update-cvftl"] Mar 19 12:34:28.160699 master-0 kubenswrapper[31830]: I0319 12:34:28.160660 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a5bb-account-create-update-cvftl" Mar 19 12:34:28.175919 master-0 kubenswrapper[31830]: I0319 12:34:28.169201 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 19 12:34:28.177378 master-0 kubenswrapper[31830]: I0319 12:34:28.177255 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-2nfl5"] Mar 19 12:34:28.179069 master-0 kubenswrapper[31830]: I0319 12:34:28.178984 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-2nfl5" Mar 19 12:34:28.187698 master-0 kubenswrapper[31830]: I0319 12:34:28.187122 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a5bb-account-create-update-cvftl"] Mar 19 12:34:28.213559 master-0 kubenswrapper[31830]: I0319 12:34:28.212120 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-2nfl5"] Mar 19 12:34:28.272165 master-0 kubenswrapper[31830]: I0319 12:34:28.272091 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx4dp\" (UniqueName: \"kubernetes.io/projected/85fb70b5-81f7-417c-b0cf-f3c917d1bc90-kube-api-access-lx4dp\") pod \"keystone-a5bb-account-create-update-cvftl\" (UID: \"85fb70b5-81f7-417c-b0cf-f3c917d1bc90\") " pod="openstack/keystone-a5bb-account-create-update-cvftl" Mar 19 12:34:28.272165 master-0 kubenswrapper[31830]: I0319 12:34:28.272168 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/674e9cd6-bf60-4cca-951f-de66c55e8ce5-operator-scripts\") pod \"keystone-db-create-2nfl5\" (UID: \"674e9cd6-bf60-4cca-951f-de66c55e8ce5\") " pod="openstack/keystone-db-create-2nfl5" Mar 19 12:34:28.272574 master-0 kubenswrapper[31830]: I0319 12:34:28.272531 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85fb70b5-81f7-417c-b0cf-f3c917d1bc90-operator-scripts\") pod \"keystone-a5bb-account-create-update-cvftl\" (UID: \"85fb70b5-81f7-417c-b0cf-f3c917d1bc90\") " pod="openstack/keystone-a5bb-account-create-update-cvftl" Mar 19 12:34:28.272659 master-0 kubenswrapper[31830]: I0319 12:34:28.272641 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74qlc\" (UniqueName: 
\"kubernetes.io/projected/674e9cd6-bf60-4cca-951f-de66c55e8ce5-kube-api-access-74qlc\") pod \"keystone-db-create-2nfl5\" (UID: \"674e9cd6-bf60-4cca-951f-de66c55e8ce5\") " pod="openstack/keystone-db-create-2nfl5" Mar 19 12:34:28.349771 master-0 kubenswrapper[31830]: I0319 12:34:28.349704 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-58xtb"] Mar 19 12:34:28.352255 master-0 kubenswrapper[31830]: I0319 12:34:28.351591 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-58xtb" Mar 19 12:34:28.372128 master-0 kubenswrapper[31830]: I0319 12:34:28.367945 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-adbe-account-create-update-q4gms"] Mar 19 12:34:28.372128 master-0 kubenswrapper[31830]: I0319 12:34:28.369359 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-adbe-account-create-update-q4gms" Mar 19 12:34:28.372128 master-0 kubenswrapper[31830]: I0319 12:34:28.371128 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Mar 19 12:34:28.374769 master-0 kubenswrapper[31830]: I0319 12:34:28.374397 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx4dp\" (UniqueName: \"kubernetes.io/projected/85fb70b5-81f7-417c-b0cf-f3c917d1bc90-kube-api-access-lx4dp\") pod \"keystone-a5bb-account-create-update-cvftl\" (UID: \"85fb70b5-81f7-417c-b0cf-f3c917d1bc90\") " pod="openstack/keystone-a5bb-account-create-update-cvftl" Mar 19 12:34:28.374769 master-0 kubenswrapper[31830]: I0319 12:34:28.374472 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/674e9cd6-bf60-4cca-951f-de66c55e8ce5-operator-scripts\") pod \"keystone-db-create-2nfl5\" (UID: \"674e9cd6-bf60-4cca-951f-de66c55e8ce5\") " pod="openstack/keystone-db-create-2nfl5" Mar 19 12:34:28.374769 master-0 kubenswrapper[31830]: I0319 12:34:28.374600 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85fb70b5-81f7-417c-b0cf-f3c917d1bc90-operator-scripts\") pod \"keystone-a5bb-account-create-update-cvftl\" (UID: \"85fb70b5-81f7-417c-b0cf-f3c917d1bc90\") " pod="openstack/keystone-a5bb-account-create-update-cvftl" Mar 19 12:34:28.374769 master-0 kubenswrapper[31830]: I0319 12:34:28.374651 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74qlc\" (UniqueName: \"kubernetes.io/projected/674e9cd6-bf60-4cca-951f-de66c55e8ce5-kube-api-access-74qlc\") pod \"keystone-db-create-2nfl5\" (UID: \"674e9cd6-bf60-4cca-951f-de66c55e8ce5\") " pod="openstack/keystone-db-create-2nfl5" Mar 19 12:34:28.388321 master-0 kubenswrapper[31830]: I0319 12:34:28.376042 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/674e9cd6-bf60-4cca-951f-de66c55e8ce5-operator-scripts\") pod \"keystone-db-create-2nfl5\" (UID: \"674e9cd6-bf60-4cca-951f-de66c55e8ce5\") " pod="openstack/keystone-db-create-2nfl5" Mar 19 12:34:28.388321 master-0 kubenswrapper[31830]: I0319 12:34:28.377034 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85fb70b5-81f7-417c-b0cf-f3c917d1bc90-operator-scripts\") pod \"keystone-a5bb-account-create-update-cvftl\" (UID: 
\"85fb70b5-81f7-417c-b0cf-f3c917d1bc90\") " pod="openstack/keystone-a5bb-account-create-update-cvftl" Mar 19 12:34:28.388321 master-0 kubenswrapper[31830]: I0319 12:34:28.380469 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-58xtb"] Mar 19 12:34:28.391942 master-0 kubenswrapper[31830]: I0319 12:34:28.390588 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-adbe-account-create-update-q4gms"] Mar 19 12:34:28.394160 master-0 kubenswrapper[31830]: I0319 12:34:28.394106 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74qlc\" (UniqueName: \"kubernetes.io/projected/674e9cd6-bf60-4cca-951f-de66c55e8ce5-kube-api-access-74qlc\") pod \"keystone-db-create-2nfl5\" (UID: \"674e9cd6-bf60-4cca-951f-de66c55e8ce5\") " pod="openstack/keystone-db-create-2nfl5" Mar 19 12:34:28.428596 master-0 kubenswrapper[31830]: I0319 12:34:28.428515 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx4dp\" (UniqueName: \"kubernetes.io/projected/85fb70b5-81f7-417c-b0cf-f3c917d1bc90-kube-api-access-lx4dp\") pod \"keystone-a5bb-account-create-update-cvftl\" (UID: \"85fb70b5-81f7-417c-b0cf-f3c917d1bc90\") " pod="openstack/keystone-a5bb-account-create-update-cvftl" Mar 19 12:34:28.454234 master-0 kubenswrapper[31830]: I0319 12:34:28.454173 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-xj4l5"] Mar 19 12:34:28.456530 master-0 kubenswrapper[31830]: I0319 12:34:28.455692 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-xj4l5" Mar 19 12:34:28.478963 master-0 kubenswrapper[31830]: I0319 12:34:28.478515 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-xj4l5"] Mar 19 12:34:28.480738 master-0 kubenswrapper[31830]: I0319 12:34:28.480682 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7xn6\" (UniqueName: \"kubernetes.io/projected/ec706aca-7a17-4a8c-a287-80b6b964eed4-kube-api-access-c7xn6\") pod \"glance-adbe-account-create-update-q4gms\" (UID: \"ec706aca-7a17-4a8c-a287-80b6b964eed4\") " pod="openstack/glance-adbe-account-create-update-q4gms" Mar 19 12:34:28.480738 master-0 kubenswrapper[31830]: I0319 12:34:28.480730 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz7hj\" (UniqueName: \"kubernetes.io/projected/196bfd6e-b584-4ca9-a94d-f9928ae87a7f-kube-api-access-qz7hj\") pod \"glance-db-create-58xtb\" (UID: \"196bfd6e-b584-4ca9-a94d-f9928ae87a7f\") " pod="openstack/glance-db-create-58xtb" Mar 19 12:34:28.480886 master-0 kubenswrapper[31830]: I0319 12:34:28.480749 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec706aca-7a17-4a8c-a287-80b6b964eed4-operator-scripts\") pod \"glance-adbe-account-create-update-q4gms\" (UID: \"ec706aca-7a17-4a8c-a287-80b6b964eed4\") " pod="openstack/glance-adbe-account-create-update-q4gms" Mar 19 12:34:28.480886 master-0 kubenswrapper[31830]: I0319 12:34:28.480826 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/196bfd6e-b584-4ca9-a94d-f9928ae87a7f-operator-scripts\") pod \"glance-db-create-58xtb\" (UID: \"196bfd6e-b584-4ca9-a94d-f9928ae87a7f\") " 
pod="openstack/glance-db-create-58xtb" Mar 19 12:34:28.493811 master-0 kubenswrapper[31830]: I0319 12:34:28.492007 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-72bf-account-create-update-tzxdw"] Mar 19 12:34:28.495360 master-0 kubenswrapper[31830]: I0319 12:34:28.495163 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-72bf-account-create-update-tzxdw" Mar 19 12:34:28.498673 master-0 kubenswrapper[31830]: I0319 12:34:28.498490 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Mar 19 12:34:28.507173 master-0 kubenswrapper[31830]: I0319 12:34:28.505184 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a5bb-account-create-update-cvftl" Mar 19 12:34:28.507173 master-0 kubenswrapper[31830]: I0319 12:34:28.505758 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-72bf-account-create-update-tzxdw"] Mar 19 12:34:28.529586 master-0 kubenswrapper[31830]: I0319 12:34:28.529199 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-2nfl5" Mar 19 12:34:28.584894 master-0 kubenswrapper[31830]: I0319 12:34:28.584835 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa7e2b32-a302-4e00-8941-21b35df641fc-operator-scripts\") pod \"placement-72bf-account-create-update-tzxdw\" (UID: \"fa7e2b32-a302-4e00-8941-21b35df641fc\") " pod="openstack/placement-72bf-account-create-update-tzxdw" Mar 19 12:34:28.585122 master-0 kubenswrapper[31830]: I0319 12:34:28.585084 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7xn6\" (UniqueName: \"kubernetes.io/projected/ec706aca-7a17-4a8c-a287-80b6b964eed4-kube-api-access-c7xn6\") pod \"glance-adbe-account-create-update-q4gms\" (UID: \"ec706aca-7a17-4a8c-a287-80b6b964eed4\") " pod="openstack/glance-adbe-account-create-update-q4gms" Mar 19 12:34:28.585165 master-0 kubenswrapper[31830]: I0319 12:34:28.585134 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz7hj\" (UniqueName: \"kubernetes.io/projected/196bfd6e-b584-4ca9-a94d-f9928ae87a7f-kube-api-access-qz7hj\") pod \"glance-db-create-58xtb\" (UID: \"196bfd6e-b584-4ca9-a94d-f9928ae87a7f\") " pod="openstack/glance-db-create-58xtb" Mar 19 12:34:28.585210 master-0 kubenswrapper[31830]: I0319 12:34:28.585161 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec706aca-7a17-4a8c-a287-80b6b964eed4-operator-scripts\") pod \"glance-adbe-account-create-update-q4gms\" (UID: \"ec706aca-7a17-4a8c-a287-80b6b964eed4\") " pod="openstack/glance-adbe-account-create-update-q4gms" Mar 19 12:34:28.586983 master-0 kubenswrapper[31830]: I0319 12:34:28.585430 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg8d2\" (UniqueName: \"kubernetes.io/projected/fa7e2b32-a302-4e00-8941-21b35df641fc-kube-api-access-zg8d2\") pod \"placement-72bf-account-create-update-tzxdw\" (UID: \"fa7e2b32-a302-4e00-8941-21b35df641fc\") " pod="openstack/placement-72bf-account-create-update-tzxdw" Mar 19 12:34:28.586983 master-0 kubenswrapper[31830]: I0319 12:34:28.585536 31830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88c4c83b-bbcc-44f3-aa58-880fd24e1e3f-operator-scripts\") pod \"placement-db-create-xj4l5\" (UID: \"88c4c83b-bbcc-44f3-aa58-880fd24e1e3f\") " pod="openstack/placement-db-create-xj4l5" Mar 19 12:34:28.586983 master-0 kubenswrapper[31830]: I0319 12:34:28.585591 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd448\" (UniqueName: \"kubernetes.io/projected/88c4c83b-bbcc-44f3-aa58-880fd24e1e3f-kube-api-access-kd448\") pod \"placement-db-create-xj4l5\" (UID: \"88c4c83b-bbcc-44f3-aa58-880fd24e1e3f\") " pod="openstack/placement-db-create-xj4l5" Mar 19 12:34:28.586983 master-0 kubenswrapper[31830]: I0319 12:34:28.585685 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/196bfd6e-b584-4ca9-a94d-f9928ae87a7f-operator-scripts\") pod \"glance-db-create-58xtb\" (UID: \"196bfd6e-b584-4ca9-a94d-f9928ae87a7f\") " pod="openstack/glance-db-create-58xtb" Mar 19 12:34:28.586983 master-0 kubenswrapper[31830]: I0319 12:34:28.586326 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec706aca-7a17-4a8c-a287-80b6b964eed4-operator-scripts\") pod \"glance-adbe-account-create-update-q4gms\" (UID: \"ec706aca-7a17-4a8c-a287-80b6b964eed4\") " pod="openstack/glance-adbe-account-create-update-q4gms" Mar 19 12:34:28.586983 master-0 kubenswrapper[31830]: I0319 12:34:28.586727 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/196bfd6e-b584-4ca9-a94d-f9928ae87a7f-operator-scripts\") pod \"glance-db-create-58xtb\" (UID: \"196bfd6e-b584-4ca9-a94d-f9928ae87a7f\") " pod="openstack/glance-db-create-58xtb" Mar 19 12:34:28.602522 master-0 kubenswrapper[31830]: I0319 12:34:28.601300 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7xn6\" (UniqueName: \"kubernetes.io/projected/ec706aca-7a17-4a8c-a287-80b6b964eed4-kube-api-access-c7xn6\") pod \"glance-adbe-account-create-update-q4gms\" (UID: \"ec706aca-7a17-4a8c-a287-80b6b964eed4\") " pod="openstack/glance-adbe-account-create-update-q4gms" Mar 19 12:34:28.602522 master-0 kubenswrapper[31830]: I0319 12:34:28.601783 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz7hj\" (UniqueName: \"kubernetes.io/projected/196bfd6e-b584-4ca9-a94d-f9928ae87a7f-kube-api-access-qz7hj\") pod \"glance-db-create-58xtb\" (UID: \"196bfd6e-b584-4ca9-a94d-f9928ae87a7f\") " pod="openstack/glance-db-create-58xtb" Mar 19 12:34:28.687546 master-0 kubenswrapper[31830]: I0319 12:34:28.687422 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg8d2\" (UniqueName: \"kubernetes.io/projected/fa7e2b32-a302-4e00-8941-21b35df641fc-kube-api-access-zg8d2\") pod \"placement-72bf-account-create-update-tzxdw\" (UID: \"fa7e2b32-a302-4e00-8941-21b35df641fc\") " pod="openstack/placement-72bf-account-create-update-tzxdw" Mar 19 12:34:28.687721 master-0 kubenswrapper[31830]: I0319 12:34:28.687668 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88c4c83b-bbcc-44f3-aa58-880fd24e1e3f-operator-scripts\") pod \"placement-db-create-xj4l5\" (UID: 
\"88c4c83b-bbcc-44f3-aa58-880fd24e1e3f\") " pod="openstack/placement-db-create-xj4l5" Mar 19 12:34:28.687915 master-0 kubenswrapper[31830]: I0319 12:34:28.687730 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd448\" (UniqueName: \"kubernetes.io/projected/88c4c83b-bbcc-44f3-aa58-880fd24e1e3f-kube-api-access-kd448\") pod \"placement-db-create-xj4l5\" (UID: \"88c4c83b-bbcc-44f3-aa58-880fd24e1e3f\") " pod="openstack/placement-db-create-xj4l5" Mar 19 12:34:28.688133 master-0 kubenswrapper[31830]: I0319 12:34:28.688108 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa7e2b32-a302-4e00-8941-21b35df641fc-operator-scripts\") pod \"placement-72bf-account-create-update-tzxdw\" (UID: \"fa7e2b32-a302-4e00-8941-21b35df641fc\") " pod="openstack/placement-72bf-account-create-update-tzxdw" Mar 19 12:34:28.688807 master-0 kubenswrapper[31830]: I0319 12:34:28.688750 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88c4c83b-bbcc-44f3-aa58-880fd24e1e3f-operator-scripts\") pod \"placement-db-create-xj4l5\" (UID: \"88c4c83b-bbcc-44f3-aa58-880fd24e1e3f\") " pod="openstack/placement-db-create-xj4l5" Mar 19 12:34:28.688898 master-0 kubenswrapper[31830]: I0319 12:34:28.688864 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa7e2b32-a302-4e00-8941-21b35df641fc-operator-scripts\") pod \"placement-72bf-account-create-update-tzxdw\" (UID: \"fa7e2b32-a302-4e00-8941-21b35df641fc\") " pod="openstack/placement-72bf-account-create-update-tzxdw" Mar 19 12:34:28.705899 master-0 kubenswrapper[31830]: I0319 12:34:28.705860 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd448\" (UniqueName: \"kubernetes.io/projected/88c4c83b-bbcc-44f3-aa58-880fd24e1e3f-kube-api-access-kd448\") pod \"placement-db-create-xj4l5\" (UID: \"88c4c83b-bbcc-44f3-aa58-880fd24e1e3f\") " pod="openstack/placement-db-create-xj4l5" Mar 19 12:34:28.707510 master-0 kubenswrapper[31830]: I0319 12:34:28.707483 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg8d2\" (UniqueName: \"kubernetes.io/projected/fa7e2b32-a302-4e00-8941-21b35df641fc-kube-api-access-zg8d2\") pod \"placement-72bf-account-create-update-tzxdw\" (UID: \"fa7e2b32-a302-4e00-8941-21b35df641fc\") " pod="openstack/placement-72bf-account-create-update-tzxdw" Mar 19 12:34:28.790139 master-0 kubenswrapper[31830]: I0319 12:34:28.790078 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-58xtb" Mar 19 12:34:28.824010 master-0 kubenswrapper[31830]: I0319 12:34:28.823956 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-adbe-account-create-update-q4gms" Mar 19 12:34:28.840442 master-0 kubenswrapper[31830]: I0319 12:34:28.840394 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-xj4l5" Mar 19 12:34:28.847969 master-0 kubenswrapper[31830]: I0319 12:34:28.847930 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-72bf-account-create-update-tzxdw" Mar 19 12:34:29.518243 master-0 kubenswrapper[31830]: I0319 12:34:29.518175 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-b5gp7"] Mar 19 12:34:29.519861 master-0 kubenswrapper[31830]: I0319 12:34:29.519830 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-b5gp7" Mar 19 12:34:29.527266 master-0 kubenswrapper[31830]: I0319 12:34:29.527220 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 19 12:34:29.543859 master-0 kubenswrapper[31830]: I0319 12:34:29.543716 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-b5gp7"] Mar 19 12:34:29.621548 master-0 kubenswrapper[31830]: I0319 12:34:29.621469 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1574d508-d4a6-4baf-b732-ea6f8466d76c-operator-scripts\") pod \"root-account-create-update-b5gp7\" (UID: \"1574d508-d4a6-4baf-b732-ea6f8466d76c\") " pod="openstack/root-account-create-update-b5gp7" Mar 19 12:34:29.621757 master-0 kubenswrapper[31830]: I0319 12:34:29.621715 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjs2z\" (UniqueName: \"kubernetes.io/projected/1574d508-d4a6-4baf-b732-ea6f8466d76c-kube-api-access-cjs2z\") pod \"root-account-create-update-b5gp7\" (UID: \"1574d508-d4a6-4baf-b732-ea6f8466d76c\") " pod="openstack/root-account-create-update-b5gp7" Mar 19 12:34:29.724214 master-0 kubenswrapper[31830]: I0319 12:34:29.724166 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1574d508-d4a6-4baf-b732-ea6f8466d76c-operator-scripts\") pod \"root-account-create-update-b5gp7\" (UID: \"1574d508-d4a6-4baf-b732-ea6f8466d76c\") " pod="openstack/root-account-create-update-b5gp7" Mar 19 12:34:29.724520 master-0 kubenswrapper[31830]: I0319 12:34:29.724502 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/736d878b-1328-4a36-873f-62849c4e2d07-etc-swift\") pod \"swift-storage-0\" (UID: \"736d878b-1328-4a36-873f-62849c4e2d07\") " pod="openstack/swift-storage-0" Mar 19 12:34:29.724685 master-0 kubenswrapper[31830]: E0319 12:34:29.724643 31830 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 19 12:34:29.724685 master-0 kubenswrapper[31830]: E0319 12:34:29.724684 31830 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 19 12:34:29.724791 master-0 kubenswrapper[31830]: I0319 12:34:29.724647 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjs2z\" (UniqueName: \"kubernetes.io/projected/1574d508-d4a6-4baf-b732-ea6f8466d76c-kube-api-access-cjs2z\") pod \"root-account-create-update-b5gp7\" (UID: \"1574d508-d4a6-4baf-b732-ea6f8466d76c\") " pod="openstack/root-account-create-update-b5gp7" Mar 19 12:34:29.724857 master-0 kubenswrapper[31830]: E0319 12:34:29.724744 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/736d878b-1328-4a36-873f-62849c4e2d07-etc-swift 
podName:736d878b-1328-4a36-873f-62849c4e2d07 nodeName:}" failed. No retries permitted until 2026-03-19 12:34:37.724722699 +0000 UTC m=+1216.273683403 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/736d878b-1328-4a36-873f-62849c4e2d07-etc-swift") pod "swift-storage-0" (UID: "736d878b-1328-4a36-873f-62849c4e2d07") : configmap "swift-ring-files" not found Mar 19 12:34:29.724994 master-0 kubenswrapper[31830]: I0319 12:34:29.724972 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1574d508-d4a6-4baf-b732-ea6f8466d76c-operator-scripts\") pod \"root-account-create-update-b5gp7\" (UID: \"1574d508-d4a6-4baf-b732-ea6f8466d76c\") " pod="openstack/root-account-create-update-b5gp7" Mar 19 12:34:29.757208 master-0 kubenswrapper[31830]: I0319 12:34:29.757146 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjs2z\" (UniqueName: \"kubernetes.io/projected/1574d508-d4a6-4baf-b732-ea6f8466d76c-kube-api-access-cjs2z\") pod \"root-account-create-update-b5gp7\" (UID: \"1574d508-d4a6-4baf-b732-ea6f8466d76c\") " pod="openstack/root-account-create-update-b5gp7" Mar 19 12:34:29.841954 master-0 kubenswrapper[31830]: I0319 12:34:29.841830 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-b5gp7" Mar 19 12:34:29.897673 master-0 kubenswrapper[31830]: I0319 12:34:29.897604 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a5bb-account-create-update-cvftl"] Mar 19 12:34:29.911945 master-0 kubenswrapper[31830]: I0319 12:34:29.911002 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-2nfl5"] Mar 19 12:34:29.913929 master-0 kubenswrapper[31830]: W0319 12:34:29.913769 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85fb70b5_81f7_417c_b0cf_f3c917d1bc90.slice/crio-853b01bbc722a8991a0bda37b353ecdcd9fe888ce746428600ae20775868a54a WatchSource:0}: Error finding container 853b01bbc722a8991a0bda37b353ecdcd9fe888ce746428600ae20775868a54a: Status 404 returned error can't find the container with id 853b01bbc722a8991a0bda37b353ecdcd9fe888ce746428600ae20775868a54a Mar 19 12:34:29.936069 master-0 kubenswrapper[31830]: I0319 12:34:29.931523 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-xj4l5"] Mar 19 12:34:30.188916 master-0 kubenswrapper[31830]: I0319 12:34:30.188436 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-58xtb"] Mar 19 12:34:30.198293 master-0 kubenswrapper[31830]: W0319 12:34:30.198245 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod196bfd6e_b584_4ca9_a94d_f9928ae87a7f.slice/crio-ace14b85e8d11d34d9417eea990892140768019ae2ce695e05f7774f721b6596 WatchSource:0}: Error finding container ace14b85e8d11d34d9417eea990892140768019ae2ce695e05f7774f721b6596: Status 404 returned error can't find the container with id ace14b85e8d11d34d9417eea990892140768019ae2ce695e05f7774f721b6596 Mar 19 12:34:30.200291 master-0 kubenswrapper[31830]: I0319 12:34:30.200161 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a5bb-account-create-update-cvftl" 
event={"ID":"85fb70b5-81f7-417c-b0cf-f3c917d1bc90","Type":"ContainerStarted","Data":"853b01bbc722a8991a0bda37b353ecdcd9fe888ce746428600ae20775868a54a"} Mar 19 12:34:30.202178 master-0 kubenswrapper[31830]: I0319 12:34:30.202148 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-72bf-account-create-update-tzxdw"] Mar 19 12:34:30.204201 master-0 kubenswrapper[31830]: I0319 12:34:30.204168 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-xj4l5" event={"ID":"88c4c83b-bbcc-44f3-aa58-880fd24e1e3f","Type":"ContainerStarted","Data":"e5373129eda5a756a53274e649290e961eca7fb32904ed4fbff0109dab4835a0"} Mar 19 12:34:30.205675 master-0 kubenswrapper[31830]: I0319 12:34:30.205410 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2nfl5" event={"ID":"674e9cd6-bf60-4cca-951f-de66c55e8ce5","Type":"ContainerStarted","Data":"619b98877768590377a2763648e1c4db9a0bd10b823a949e3110b93eaf8588ee"} Mar 19 12:34:30.224089 master-0 kubenswrapper[31830]: I0319 12:34:30.224032 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cm2zc" event={"ID":"512e045f-7b25-4992-a593-227de5818bb3","Type":"ContainerStarted","Data":"f426b36a1829ba5e4358a90b632b8a1a3ed81b8372fc32bc6f9370bd1e605a44"} Mar 19 12:34:30.254947 master-0 kubenswrapper[31830]: I0319 12:34:30.254345 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-adbe-account-create-update-q4gms"] Mar 19 12:34:30.279040 master-0 kubenswrapper[31830]: I0319 12:34:30.278888 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf8b865dc-djbrh" Mar 19 12:34:30.284820 master-0 kubenswrapper[31830]: I0319 12:34:30.280892 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-cm2zc" podStartSLOduration=2.6108142389999998 podStartE2EDuration="6.280869047s" podCreationTimestamp="2026-03-19 12:34:24 +0000 UTC" firstStartedPulling="2026-03-19 12:34:25.428538613 +0000 UTC m=+1203.977499317" lastFinishedPulling="2026-03-19 12:34:29.098593401 +0000 UTC m=+1207.647554125" observedRunningTime="2026-03-19 12:34:30.249693071 +0000 UTC m=+1208.798653775" watchObservedRunningTime="2026-03-19 12:34:30.280869047 +0000 UTC m=+1208.829829751" Mar 19 12:34:30.372932 master-0 kubenswrapper[31830]: I0319 12:34:30.372881 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-b5gp7"] Mar 19 12:34:30.390227 master-0 kubenswrapper[31830]: I0319 12:34:30.386951 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-sxht2"] Mar 19 12:34:30.390227 master-0 kubenswrapper[31830]: I0319 12:34:30.387229 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6ff8fd9d5c-sxht2" podUID="5bc881ed-8448-4279-97e5-cb834cab7a64" containerName="dnsmasq-dns" containerID="cri-o://fd83cfd0a5030aa8964d4c66ff8815628c26f20ae2479ce72f5d922d8ec37a7e" gracePeriod=10 Mar 19 12:34:30.418686 master-0 kubenswrapper[31830]: W0319 12:34:30.417340 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1574d508_d4a6_4baf_b732_ea6f8466d76c.slice/crio-a46a4db70eab604bfc5d6f7e91c5b36c4c57f035f0763b7b76b611a7d8c34813 WatchSource:0}: Error finding container a46a4db70eab604bfc5d6f7e91c5b36c4c57f035f0763b7b76b611a7d8c34813: Status 404 returned error can't find 
the container with id a46a4db70eab604bfc5d6f7e91c5b36c4c57f035f0763b7b76b611a7d8c34813 Mar 19 12:34:30.725834 master-0 kubenswrapper[31830]: E0319 12:34:30.722369 31830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88c4c83b_bbcc_44f3_aa58_880fd24e1e3f.slice/crio-conmon-2838e46f93b6698fd5daffcb4661afff24710a7bddea2a23ded88e4f7a25b00d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bc881ed_8448_4279_97e5_cb834cab7a64.slice/crio-conmon-fd83cfd0a5030aa8964d4c66ff8815628c26f20ae2479ce72f5d922d8ec37a7e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bc881ed_8448_4279_97e5_cb834cab7a64.slice/crio-fd83cfd0a5030aa8964d4c66ff8815628c26f20ae2479ce72f5d922d8ec37a7e.scope\": RecentStats: unable to find data in memory cache]" Mar 19 12:34:31.240731 master-0 kubenswrapper[31830]: I0319 12:34:31.240668 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-b5gp7" event={"ID":"1574d508-d4a6-4baf-b732-ea6f8466d76c","Type":"ContainerStarted","Data":"92fefa25e12d7b8a81ae96a8caa460bd21bac9b47e6a2e53320932f87e6b687f"} Mar 19 12:34:31.240731 master-0 kubenswrapper[31830]: I0319 12:34:31.240725 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-b5gp7" event={"ID":"1574d508-d4a6-4baf-b732-ea6f8466d76c","Type":"ContainerStarted","Data":"a46a4db70eab604bfc5d6f7e91c5b36c4c57f035f0763b7b76b611a7d8c34813"} Mar 19 12:34:31.242726 master-0 kubenswrapper[31830]: I0319 12:34:31.242671 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-adbe-account-create-update-q4gms" event={"ID":"ec706aca-7a17-4a8c-a287-80b6b964eed4","Type":"ContainerStarted","Data":"4af6ce1afd031981e1c519b4dc4c4c8877fdf83ab0d2a2c31403eb0d4ddae00d"} Mar 19 12:34:31.242992 master-0 kubenswrapper[31830]: I0319 12:34:31.242733 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-adbe-account-create-update-q4gms" event={"ID":"ec706aca-7a17-4a8c-a287-80b6b964eed4","Type":"ContainerStarted","Data":"c12d827230b153cce819d5f275d434d165b2ed5f6fb5e8a4a8d30b8dd38922d0"} Mar 19 12:34:31.249923 master-0 kubenswrapper[31830]: I0319 12:34:31.249567 31830 generic.go:334] "Generic (PLEG): container finished" podID="88c4c83b-bbcc-44f3-aa58-880fd24e1e3f" containerID="2838e46f93b6698fd5daffcb4661afff24710a7bddea2a23ded88e4f7a25b00d" exitCode=0 Mar 19 12:34:31.249923 master-0 kubenswrapper[31830]: I0319 12:34:31.249683 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-xj4l5" event={"ID":"88c4c83b-bbcc-44f3-aa58-880fd24e1e3f","Type":"ContainerDied","Data":"2838e46f93b6698fd5daffcb4661afff24710a7bddea2a23ded88e4f7a25b00d"} Mar 19 12:34:31.268893 master-0 kubenswrapper[31830]: I0319 12:34:31.268242 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-b5gp7" podStartSLOduration=2.268216328 podStartE2EDuration="2.268216328s" podCreationTimestamp="2026-03-19 12:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:34:31.258759924 +0000 UTC m=+1209.807720628" watchObservedRunningTime="2026-03-19 12:34:31.268216328 +0000 UTC m=+1209.817177032"
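
[annotation] In these pod_startup_latency_tracker entries, podStartSLOduration is podStartE2EDuration minus the image-pull window, measured on the monotonic clock (the m=+... offsets); for pods that pulled nothing, like root-account-create-update-b5gp7 above with its zero-valued pull timestamps, the two durations coincide. The relationship checks out numerically, e.g. for swift-ring-rebalance-cm2zc from the 12:34:30.28 tracker entry above:

    package main

    import "fmt"

    func main() {
        // Numbers copied from the tracker entry for swift-ring-rebalance-cm2zc:
        e2e := 6.280869047          // podStartE2EDuration, in seconds
        pullStart := 1203.977499317 // firstStartedPulling, m=+ offset
        pullEnd := 1207.647554125   // lastFinishedPulling, m=+ offset
        // Prints 2.610814239, matching podStartSLOduration in the log.
        fmt.Printf("podStartSLOduration = %.9f s\n", e2e-(pullEnd-pullStart))
    }

Mar 19 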
12:34:31.278281 master-0 kubenswrapper[31830]: I0319 12:34:31.271587 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-72bf-account-create-update-tzxdw" event={"ID":"fa7e2b32-a302-4e00-8941-21b35df641fc","Type":"ContainerStarted","Data":"0c4530a62fdf3cd68c58f24eb1d2e025b47fe120675f8cbf55c79a5a2e24e77a"} Mar 19 12:34:31.278281 master-0 kubenswrapper[31830]: I0319 12:34:31.271661 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-72bf-account-create-update-tzxdw" event={"ID":"fa7e2b32-a302-4e00-8941-21b35df641fc","Type":"ContainerStarted","Data":"f7dcaf4fd5ea03c5cce944d88d87d048bafeb12796050dee464bef0cc2932dbf"} Mar 19 12:34:31.280593 master-0 kubenswrapper[31830]: I0319 12:34:31.280409 31830 generic.go:334] "Generic (PLEG): container finished" podID="674e9cd6-bf60-4cca-951f-de66c55e8ce5" containerID="a103e6a30198ac998cec4273345472dd70f07f0327e5a658845518c70f558342" exitCode=0 Mar 19 12:34:31.280593 master-0 kubenswrapper[31830]: I0319 12:34:31.280497 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2nfl5" event={"ID":"674e9cd6-bf60-4cca-951f-de66c55e8ce5","Type":"ContainerDied","Data":"a103e6a30198ac998cec4273345472dd70f07f0327e5a658845518c70f558342"} Mar 19 12:34:31.297245 master-0 kubenswrapper[31830]: I0319 12:34:31.294666 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-adbe-account-create-update-q4gms" podStartSLOduration=3.294647587 podStartE2EDuration="3.294647587s" podCreationTimestamp="2026-03-19 12:34:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:34:31.288022102 +0000 UTC m=+1209.836982806" watchObservedRunningTime="2026-03-19 12:34:31.294647587 +0000 UTC m=+1209.843608291" Mar 19 12:34:31.299272 master-0 kubenswrapper[31830]: I0319 12:34:31.299125 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-58xtb" event={"ID":"196bfd6e-b584-4ca9-a94d-f9928ae87a7f","Type":"ContainerStarted","Data":"00867437923c4518c18d7788ac7fe2d5afaa1a2f7e97ed1c1ea3dda87757f76b"} Mar 19 12:34:31.299272 master-0 kubenswrapper[31830]: I0319 12:34:31.299176 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-58xtb" event={"ID":"196bfd6e-b584-4ca9-a94d-f9928ae87a7f","Type":"ContainerStarted","Data":"ace14b85e8d11d34d9417eea990892140768019ae2ce695e05f7774f721b6596"} Mar 19 12:34:31.303236 master-0 kubenswrapper[31830]: I0319 12:34:31.303104 31830 generic.go:334] "Generic (PLEG): container finished" podID="5bc881ed-8448-4279-97e5-cb834cab7a64" containerID="fd83cfd0a5030aa8964d4c66ff8815628c26f20ae2479ce72f5d922d8ec37a7e" exitCode=0 Mar 19 12:34:31.303393 master-0 kubenswrapper[31830]: I0319 12:34:31.303201 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-sxht2" event={"ID":"5bc881ed-8448-4279-97e5-cb834cab7a64","Type":"ContainerDied","Data":"fd83cfd0a5030aa8964d4c66ff8815628c26f20ae2479ce72f5d922d8ec37a7e"} Mar 19 12:34:31.303393 master-0 kubenswrapper[31830]: I0319 12:34:31.303362 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-sxht2" event={"ID":"5bc881ed-8448-4279-97e5-cb834cab7a64","Type":"ContainerDied","Data":"55745afab891f80cc8d864bc450badad8772651bce7dd15d68d5017001ef8de7"} Mar 19 12:34:31.303555 master-0 kubenswrapper[31830]: I0319 12:34:31.303510 31830 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="55745afab891f80cc8d864bc450badad8772651bce7dd15d68d5017001ef8de7" Mar 19 12:34:31.305897 master-0 kubenswrapper[31830]: I0319 12:34:31.305769 31830 generic.go:334] "Generic (PLEG): container finished" podID="85fb70b5-81f7-417c-b0cf-f3c917d1bc90" containerID="3ac9d32465df04aa108558289ba5246d36f83fb17fd89e3eb122e66cab88d517" exitCode=0 Mar 19 12:34:31.306005 master-0 kubenswrapper[31830]: I0319 12:34:31.305926 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a5bb-account-create-update-cvftl" event={"ID":"85fb70b5-81f7-417c-b0cf-f3c917d1bc90","Type":"ContainerDied","Data":"3ac9d32465df04aa108558289ba5246d36f83fb17fd89e3eb122e66cab88d517"} Mar 19 12:34:31.390727 master-0 kubenswrapper[31830]: I0319 12:34:31.389174 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-58xtb" podStartSLOduration=3.389149448 podStartE2EDuration="3.389149448s" podCreationTimestamp="2026-03-19 12:34:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:34:31.373850583 +0000 UTC m=+1209.922811287" watchObservedRunningTime="2026-03-19 12:34:31.389149448 +0000 UTC m=+1209.938110152" Mar 19 12:34:31.412559 master-0 kubenswrapper[31830]: I0319 12:34:31.412417 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-72bf-account-create-update-tzxdw" podStartSLOduration=3.412396819 podStartE2EDuration="3.412396819s" podCreationTimestamp="2026-03-19 12:34:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:34:31.403542904 +0000 UTC m=+1209.952503608" watchObservedRunningTime="2026-03-19 12:34:31.412396819 +0000 UTC m=+1209.961357513" Mar 19 12:34:31.527892 master-0 kubenswrapper[31830]: I0319 12:34:31.527774 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ff8fd9d5c-sxht2" Mar 19 12:34:31.581657 master-0 kubenswrapper[31830]: I0319 12:34:31.581588 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9fsm\" (UniqueName: \"kubernetes.io/projected/5bc881ed-8448-4279-97e5-cb834cab7a64-kube-api-access-q9fsm\") pod \"5bc881ed-8448-4279-97e5-cb834cab7a64\" (UID: \"5bc881ed-8448-4279-97e5-cb834cab7a64\") " Mar 19 12:34:31.581948 master-0 kubenswrapper[31830]: I0319 12:34:31.581758 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bc881ed-8448-4279-97e5-cb834cab7a64-dns-svc\") pod \"5bc881ed-8448-4279-97e5-cb834cab7a64\" (UID: \"5bc881ed-8448-4279-97e5-cb834cab7a64\") " Mar 19 12:34:31.581948 master-0 kubenswrapper[31830]: I0319 12:34:31.581918 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bc881ed-8448-4279-97e5-cb834cab7a64-config\") pod \"5bc881ed-8448-4279-97e5-cb834cab7a64\" (UID: \"5bc881ed-8448-4279-97e5-cb834cab7a64\") " Mar 19 12:34:31.589073 master-0 kubenswrapper[31830]: I0319 12:34:31.589009 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bc881ed-8448-4279-97e5-cb834cab7a64-kube-api-access-q9fsm" (OuterVolumeSpecName: "kube-api-access-q9fsm") pod "5bc881ed-8448-4279-97e5-cb834cab7a64" (UID: "5bc881ed-8448-4279-97e5-cb834cab7a64"). InnerVolumeSpecName "kube-api-access-q9fsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:34:31.651179 master-0 kubenswrapper[31830]: I0319 12:34:31.650639 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bc881ed-8448-4279-97e5-cb834cab7a64-config" (OuterVolumeSpecName: "config") pod "5bc881ed-8448-4279-97e5-cb834cab7a64" (UID: "5bc881ed-8448-4279-97e5-cb834cab7a64"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:34:31.658497 master-0 kubenswrapper[31830]: I0319 12:34:31.658448 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bc881ed-8448-4279-97e5-cb834cab7a64-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5bc881ed-8448-4279-97e5-cb834cab7a64" (UID: "5bc881ed-8448-4279-97e5-cb834cab7a64"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:34:31.683868 master-0 kubenswrapper[31830]: I0319 12:34:31.683813 31830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bc881ed-8448-4279-97e5-cb834cab7a64-config\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:31.683868 master-0 kubenswrapper[31830]: I0319 12:34:31.683862 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9fsm\" (UniqueName: \"kubernetes.io/projected/5bc881ed-8448-4279-97e5-cb834cab7a64-kube-api-access-q9fsm\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:31.683868 master-0 kubenswrapper[31830]: I0319 12:34:31.683876 31830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bc881ed-8448-4279-97e5-cb834cab7a64-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:32.317754 master-0 kubenswrapper[31830]: I0319 12:34:32.317694 31830 generic.go:334] "Generic (PLEG): container finished" podID="ec706aca-7a17-4a8c-a287-80b6b964eed4" containerID="4af6ce1afd031981e1c519b4dc4c4c8877fdf83ab0d2a2c31403eb0d4ddae00d" exitCode=0 Mar 19 12:34:32.318587 master-0 kubenswrapper[31830]: I0319 12:34:32.318178 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-adbe-account-create-update-q4gms" event={"ID":"ec706aca-7a17-4a8c-a287-80b6b964eed4","Type":"ContainerDied","Data":"4af6ce1afd031981e1c519b4dc4c4c8877fdf83ab0d2a2c31403eb0d4ddae00d"} Mar 19 12:34:32.321723 master-0 kubenswrapper[31830]: I0319 12:34:32.320983 31830 generic.go:334] "Generic (PLEG): container finished" podID="fa7e2b32-a302-4e00-8941-21b35df641fc" containerID="0c4530a62fdf3cd68c58f24eb1d2e025b47fe120675f8cbf55c79a5a2e24e77a" exitCode=0 Mar 19 12:34:32.321723 master-0 kubenswrapper[31830]: I0319 12:34:32.321048 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-72bf-account-create-update-tzxdw" event={"ID":"fa7e2b32-a302-4e00-8941-21b35df641fc","Type":"ContainerDied","Data":"0c4530a62fdf3cd68c58f24eb1d2e025b47fe120675f8cbf55c79a5a2e24e77a"} Mar 19 12:34:32.322956 master-0 kubenswrapper[31830]: I0319 12:34:32.322927 31830 generic.go:334] "Generic (PLEG): container finished" podID="196bfd6e-b584-4ca9-a94d-f9928ae87a7f" containerID="00867437923c4518c18d7788ac7fe2d5afaa1a2f7e97ed1c1ea3dda87757f76b" exitCode=0 Mar 19 12:34:32.323101 master-0 kubenswrapper[31830]: I0319 12:34:32.322971 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-58xtb" event={"ID":"196bfd6e-b584-4ca9-a94d-f9928ae87a7f","Type":"ContainerDied","Data":"00867437923c4518c18d7788ac7fe2d5afaa1a2f7e97ed1c1ea3dda87757f76b"} Mar 19 12:34:32.324438 master-0 kubenswrapper[31830]: I0319 12:34:32.324407 31830 generic.go:334] "Generic (PLEG): container finished" podID="1574d508-d4a6-4baf-b732-ea6f8466d76c" containerID="92fefa25e12d7b8a81ae96a8caa460bd21bac9b47e6a2e53320932f87e6b687f" exitCode=0 Mar 19 12:34:32.324581 master-0 kubenswrapper[31830]: I0319 12:34:32.324547 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-b5gp7" event={"ID":"1574d508-d4a6-4baf-b732-ea6f8466d76c","Type":"ContainerDied","Data":"92fefa25e12d7b8a81ae96a8caa460bd21bac9b47e6a2e53320932f87e6b687f"} Mar 19 12:34:32.324751 master-0 kubenswrapper[31830]: I0319 12:34:32.324720 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ff8fd9d5c-sxht2" Mar 19 12:34:32.403325 master-0 kubenswrapper[31830]: I0319 12:34:32.403259 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-sxht2"] Mar 19 12:34:32.426913 master-0 kubenswrapper[31830]: I0319 12:34:32.426845 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-sxht2"] Mar 19 12:34:32.731695 master-0 kubenswrapper[31830]: I0319 12:34:32.731666 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-2nfl5" Mar 19 12:34:32.815936 master-0 kubenswrapper[31830]: I0319 12:34:32.815892 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/674e9cd6-bf60-4cca-951f-de66c55e8ce5-operator-scripts\") pod \"674e9cd6-bf60-4cca-951f-de66c55e8ce5\" (UID: \"674e9cd6-bf60-4cca-951f-de66c55e8ce5\") " Mar 19 12:34:32.816630 master-0 kubenswrapper[31830]: I0319 12:34:32.816534 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/674e9cd6-bf60-4cca-951f-de66c55e8ce5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "674e9cd6-bf60-4cca-951f-de66c55e8ce5" (UID: "674e9cd6-bf60-4cca-951f-de66c55e8ce5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:34:32.816772 master-0 kubenswrapper[31830]: I0319 12:34:32.816757 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74qlc\" (UniqueName: \"kubernetes.io/projected/674e9cd6-bf60-4cca-951f-de66c55e8ce5-kube-api-access-74qlc\") pod \"674e9cd6-bf60-4cca-951f-de66c55e8ce5\" (UID: \"674e9cd6-bf60-4cca-951f-de66c55e8ce5\") " Mar 19 12:34:32.818830 master-0 kubenswrapper[31830]: I0319 12:34:32.818773 31830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/674e9cd6-bf60-4cca-951f-de66c55e8ce5-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:32.824120 master-0 kubenswrapper[31830]: I0319 12:34:32.824072 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/674e9cd6-bf60-4cca-951f-de66c55e8ce5-kube-api-access-74qlc" (OuterVolumeSpecName: "kube-api-access-74qlc") pod "674e9cd6-bf60-4cca-951f-de66c55e8ce5" (UID: "674e9cd6-bf60-4cca-951f-de66c55e8ce5"). InnerVolumeSpecName "kube-api-access-74qlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:34:32.921030 master-0 kubenswrapper[31830]: I0319 12:34:32.920898 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74qlc\" (UniqueName: \"kubernetes.io/projected/674e9cd6-bf60-4cca-951f-de66c55e8ce5-kube-api-access-74qlc\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:33.049837 master-0 kubenswrapper[31830]: I0319 12:34:33.049786 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-xj4l5" Mar 19 12:34:33.062631 master-0 kubenswrapper[31830]: I0319 12:34:33.062581 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-a5bb-account-create-update-cvftl" Mar 19 12:34:33.123604 master-0 kubenswrapper[31830]: I0319 12:34:33.123059 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88c4c83b-bbcc-44f3-aa58-880fd24e1e3f-operator-scripts\") pod \"88c4c83b-bbcc-44f3-aa58-880fd24e1e3f\" (UID: \"88c4c83b-bbcc-44f3-aa58-880fd24e1e3f\") " Mar 19 12:34:33.123604 master-0 kubenswrapper[31830]: I0319 12:34:33.123197 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx4dp\" (UniqueName: \"kubernetes.io/projected/85fb70b5-81f7-417c-b0cf-f3c917d1bc90-kube-api-access-lx4dp\") pod \"85fb70b5-81f7-417c-b0cf-f3c917d1bc90\" (UID: \"85fb70b5-81f7-417c-b0cf-f3c917d1bc90\") " Mar 19 12:34:33.123604 master-0 kubenswrapper[31830]: I0319 12:34:33.123383 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd448\" (UniqueName: \"kubernetes.io/projected/88c4c83b-bbcc-44f3-aa58-880fd24e1e3f-kube-api-access-kd448\") pod \"88c4c83b-bbcc-44f3-aa58-880fd24e1e3f\" (UID: \"88c4c83b-bbcc-44f3-aa58-880fd24e1e3f\") " Mar 19 12:34:33.123604 master-0 kubenswrapper[31830]: I0319 12:34:33.123470 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85fb70b5-81f7-417c-b0cf-f3c917d1bc90-operator-scripts\") pod \"85fb70b5-81f7-417c-b0cf-f3c917d1bc90\" (UID: \"85fb70b5-81f7-417c-b0cf-f3c917d1bc90\") " Mar 19 12:34:33.126834 master-0 kubenswrapper[31830]: I0319 12:34:33.124441 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85fb70b5-81f7-417c-b0cf-f3c917d1bc90-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "85fb70b5-81f7-417c-b0cf-f3c917d1bc90" (UID: "85fb70b5-81f7-417c-b0cf-f3c917d1bc90"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:34:33.126834 master-0 kubenswrapper[31830]: I0319 12:34:33.124886 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88c4c83b-bbcc-44f3-aa58-880fd24e1e3f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "88c4c83b-bbcc-44f3-aa58-880fd24e1e3f" (UID: "88c4c83b-bbcc-44f3-aa58-880fd24e1e3f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:34:33.128037 master-0 kubenswrapper[31830]: I0319 12:34:33.127763 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85fb70b5-81f7-417c-b0cf-f3c917d1bc90-kube-api-access-lx4dp" (OuterVolumeSpecName: "kube-api-access-lx4dp") pod "85fb70b5-81f7-417c-b0cf-f3c917d1bc90" (UID: "85fb70b5-81f7-417c-b0cf-f3c917d1bc90"). InnerVolumeSpecName "kube-api-access-lx4dp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:34:33.129296 master-0 kubenswrapper[31830]: I0319 12:34:33.128967 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88c4c83b-bbcc-44f3-aa58-880fd24e1e3f-kube-api-access-kd448" (OuterVolumeSpecName: "kube-api-access-kd448") pod "88c4c83b-bbcc-44f3-aa58-880fd24e1e3f" (UID: "88c4c83b-bbcc-44f3-aa58-880fd24e1e3f"). InnerVolumeSpecName "kube-api-access-kd448". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:34:33.234049 master-0 kubenswrapper[31830]: I0319 12:34:33.225944 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kd448\" (UniqueName: \"kubernetes.io/projected/88c4c83b-bbcc-44f3-aa58-880fd24e1e3f-kube-api-access-kd448\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:33.234049 master-0 kubenswrapper[31830]: I0319 12:34:33.225996 31830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85fb70b5-81f7-417c-b0cf-f3c917d1bc90-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:33.234049 master-0 kubenswrapper[31830]: I0319 12:34:33.226006 31830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88c4c83b-bbcc-44f3-aa58-880fd24e1e3f-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:33.234049 master-0 kubenswrapper[31830]: I0319 12:34:33.226016 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lx4dp\" (UniqueName: \"kubernetes.io/projected/85fb70b5-81f7-417c-b0cf-f3c917d1bc90-kube-api-access-lx4dp\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:33.349884 master-0 kubenswrapper[31830]: I0319 12:34:33.349811 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-xj4l5" event={"ID":"88c4c83b-bbcc-44f3-aa58-880fd24e1e3f","Type":"ContainerDied","Data":"e5373129eda5a756a53274e649290e961eca7fb32904ed4fbff0109dab4835a0"} Mar 19 12:34:33.349884 master-0 kubenswrapper[31830]: I0319 12:34:33.349863 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5373129eda5a756a53274e649290e961eca7fb32904ed4fbff0109dab4835a0" Mar 19 12:34:33.350412 master-0 kubenswrapper[31830]: I0319 12:34:33.349920 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-xj4l5" Mar 19 12:34:33.355944 master-0 kubenswrapper[31830]: I0319 12:34:33.355903 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2nfl5" event={"ID":"674e9cd6-bf60-4cca-951f-de66c55e8ce5","Type":"ContainerDied","Data":"619b98877768590377a2763648e1c4db9a0bd10b823a949e3110b93eaf8588ee"} Mar 19 12:34:33.356149 master-0 kubenswrapper[31830]: I0319 12:34:33.356131 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="619b98877768590377a2763648e1c4db9a0bd10b823a949e3110b93eaf8588ee" Mar 19 12:34:33.356232 master-0 kubenswrapper[31830]: I0319 12:34:33.355988 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-2nfl5" Mar 19 12:34:33.359912 master-0 kubenswrapper[31830]: I0319 12:34:33.359878 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a5bb-account-create-update-cvftl" event={"ID":"85fb70b5-81f7-417c-b0cf-f3c917d1bc90","Type":"ContainerDied","Data":"853b01bbc722a8991a0bda37b353ecdcd9fe888ce746428600ae20775868a54a"} Mar 19 12:34:33.360026 master-0 kubenswrapper[31830]: I0319 12:34:33.359914 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="853b01bbc722a8991a0bda37b353ecdcd9fe888ce746428600ae20775868a54a" Mar 19 12:34:33.360026 master-0 kubenswrapper[31830]: I0319 12:34:33.359891 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-a5bb-account-create-update-cvftl" Mar 19 12:34:33.692967 master-0 kubenswrapper[31830]: I0319 12:34:33.692748 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bc881ed-8448-4279-97e5-cb834cab7a64" path="/var/lib/kubelet/pods/5bc881ed-8448-4279-97e5-cb834cab7a64/volumes" Mar 19 12:34:33.947673 master-0 kubenswrapper[31830]: I0319 12:34:33.947568 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-72bf-account-create-update-tzxdw" Mar 19 12:34:34.042622 master-0 kubenswrapper[31830]: I0319 12:34:34.042557 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa7e2b32-a302-4e00-8941-21b35df641fc-operator-scripts\") pod \"fa7e2b32-a302-4e00-8941-21b35df641fc\" (UID: \"fa7e2b32-a302-4e00-8941-21b35df641fc\") " Mar 19 12:34:34.042884 master-0 kubenswrapper[31830]: I0319 12:34:34.042733 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8d2\" (UniqueName: \"kubernetes.io/projected/fa7e2b32-a302-4e00-8941-21b35df641fc-kube-api-access-zg8d2\") pod \"fa7e2b32-a302-4e00-8941-21b35df641fc\" (UID: \"fa7e2b32-a302-4e00-8941-21b35df641fc\") " Mar 19 12:34:34.044204 master-0 kubenswrapper[31830]: I0319 12:34:34.044133 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa7e2b32-a302-4e00-8941-21b35df641fc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fa7e2b32-a302-4e00-8941-21b35df641fc" (UID: "fa7e2b32-a302-4e00-8941-21b35df641fc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:34:34.045857 master-0 kubenswrapper[31830]: I0319 12:34:34.045819 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa7e2b32-a302-4e00-8941-21b35df641fc-kube-api-access-zg8d2" (OuterVolumeSpecName: "kube-api-access-zg8d2") pod "fa7e2b32-a302-4e00-8941-21b35df641fc" (UID: "fa7e2b32-a302-4e00-8941-21b35df641fc"). InnerVolumeSpecName "kube-api-access-zg8d2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:34:34.145824 master-0 kubenswrapper[31830]: I0319 12:34:34.145318 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg8d2\" (UniqueName: \"kubernetes.io/projected/fa7e2b32-a302-4e00-8941-21b35df641fc-kube-api-access-zg8d2\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:34.145824 master-0 kubenswrapper[31830]: I0319 12:34:34.145354 31830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa7e2b32-a302-4e00-8941-21b35df641fc-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:34.173375 master-0 kubenswrapper[31830]: I0319 12:34:34.173198 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-b5gp7" Mar 19 12:34:34.180755 master-0 kubenswrapper[31830]: I0319 12:34:34.180661 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-adbe-account-create-update-q4gms" Mar 19 12:34:34.200850 master-0 kubenswrapper[31830]: I0319 12:34:34.200090 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-58xtb" Mar 19 12:34:34.252820 master-0 kubenswrapper[31830]: I0319 12:34:34.250500 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec706aca-7a17-4a8c-a287-80b6b964eed4-operator-scripts\") pod \"ec706aca-7a17-4a8c-a287-80b6b964eed4\" (UID: \"ec706aca-7a17-4a8c-a287-80b6b964eed4\") " Mar 19 12:34:34.252820 master-0 kubenswrapper[31830]: I0319 12:34:34.250628 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7xn6\" (UniqueName: \"kubernetes.io/projected/ec706aca-7a17-4a8c-a287-80b6b964eed4-kube-api-access-c7xn6\") pod \"ec706aca-7a17-4a8c-a287-80b6b964eed4\" (UID: \"ec706aca-7a17-4a8c-a287-80b6b964eed4\") " Mar 19 12:34:34.252820 master-0 kubenswrapper[31830]: I0319 12:34:34.250698 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1574d508-d4a6-4baf-b732-ea6f8466d76c-operator-scripts\") pod \"1574d508-d4a6-4baf-b732-ea6f8466d76c\" (UID: \"1574d508-d4a6-4baf-b732-ea6f8466d76c\") " Mar 19 12:34:34.252820 master-0 kubenswrapper[31830]: I0319 12:34:34.250769 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjs2z\" (UniqueName: \"kubernetes.io/projected/1574d508-d4a6-4baf-b732-ea6f8466d76c-kube-api-access-cjs2z\") pod \"1574d508-d4a6-4baf-b732-ea6f8466d76c\" (UID: \"1574d508-d4a6-4baf-b732-ea6f8466d76c\") " Mar 19 12:34:34.252820 master-0 kubenswrapper[31830]: I0319 12:34:34.251524 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1574d508-d4a6-4baf-b732-ea6f8466d76c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1574d508-d4a6-4baf-b732-ea6f8466d76c" (UID: "1574d508-d4a6-4baf-b732-ea6f8466d76c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:34:34.260469 master-0 kubenswrapper[31830]: I0319 12:34:34.254110 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec706aca-7a17-4a8c-a287-80b6b964eed4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec706aca-7a17-4a8c-a287-80b6b964eed4" (UID: "ec706aca-7a17-4a8c-a287-80b6b964eed4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:34:34.260469 master-0 kubenswrapper[31830]: I0319 12:34:34.254327 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec706aca-7a17-4a8c-a287-80b6b964eed4-kube-api-access-c7xn6" (OuterVolumeSpecName: "kube-api-access-c7xn6") pod "ec706aca-7a17-4a8c-a287-80b6b964eed4" (UID: "ec706aca-7a17-4a8c-a287-80b6b964eed4"). InnerVolumeSpecName "kube-api-access-c7xn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:34:34.260469 master-0 kubenswrapper[31830]: I0319 12:34:34.259203 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1574d508-d4a6-4baf-b732-ea6f8466d76c-kube-api-access-cjs2z" (OuterVolumeSpecName: "kube-api-access-cjs2z") pod "1574d508-d4a6-4baf-b732-ea6f8466d76c" (UID: "1574d508-d4a6-4baf-b732-ea6f8466d76c"). InnerVolumeSpecName "kube-api-access-cjs2z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:34:34.260469 master-0 kubenswrapper[31830]: I0319 12:34:34.259594 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/196bfd6e-b584-4ca9-a94d-f9928ae87a7f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "196bfd6e-b584-4ca9-a94d-f9928ae87a7f" (UID: "196bfd6e-b584-4ca9-a94d-f9928ae87a7f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:34:34.261828 master-0 kubenswrapper[31830]: I0319 12:34:34.261057 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/196bfd6e-b584-4ca9-a94d-f9928ae87a7f-operator-scripts\") pod \"196bfd6e-b584-4ca9-a94d-f9928ae87a7f\" (UID: \"196bfd6e-b584-4ca9-a94d-f9928ae87a7f\") " Mar 19 12:34:34.261828 master-0 kubenswrapper[31830]: I0319 12:34:34.261237 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz7hj\" (UniqueName: \"kubernetes.io/projected/196bfd6e-b584-4ca9-a94d-f9928ae87a7f-kube-api-access-qz7hj\") pod \"196bfd6e-b584-4ca9-a94d-f9928ae87a7f\" (UID: \"196bfd6e-b584-4ca9-a94d-f9928ae87a7f\") " Mar 19 12:34:34.262995 master-0 kubenswrapper[31830]: I0319 12:34:34.262683 31830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec706aca-7a17-4a8c-a287-80b6b964eed4-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:34.262995 master-0 kubenswrapper[31830]: I0319 12:34:34.262717 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7xn6\" (UniqueName: \"kubernetes.io/projected/ec706aca-7a17-4a8c-a287-80b6b964eed4-kube-api-access-c7xn6\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:34.262995 master-0 kubenswrapper[31830]: I0319 12:34:34.262729 31830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1574d508-d4a6-4baf-b732-ea6f8466d76c-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:34.262995 master-0 kubenswrapper[31830]: I0319 12:34:34.262738 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjs2z\" (UniqueName: \"kubernetes.io/projected/1574d508-d4a6-4baf-b732-ea6f8466d76c-kube-api-access-cjs2z\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:34.262995 master-0 kubenswrapper[31830]: I0319 12:34:34.262747 31830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/196bfd6e-b584-4ca9-a94d-f9928ae87a7f-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:34.264183 master-0 kubenswrapper[31830]: I0319 12:34:34.264118 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/196bfd6e-b584-4ca9-a94d-f9928ae87a7f-kube-api-access-qz7hj" (OuterVolumeSpecName: "kube-api-access-qz7hj") pod "196bfd6e-b584-4ca9-a94d-f9928ae87a7f" (UID: "196bfd6e-b584-4ca9-a94d-f9928ae87a7f"). InnerVolumeSpecName "kube-api-access-qz7hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:34:34.364866 master-0 kubenswrapper[31830]: I0319 12:34:34.364634 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz7hj\" (UniqueName: \"kubernetes.io/projected/196bfd6e-b584-4ca9-a94d-f9928ae87a7f-kube-api-access-qz7hj\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:34.371438 master-0 kubenswrapper[31830]: I0319 12:34:34.371385 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-adbe-account-create-update-q4gms" event={"ID":"ec706aca-7a17-4a8c-a287-80b6b964eed4","Type":"ContainerDied","Data":"c12d827230b153cce819d5f275d434d165b2ed5f6fb5e8a4a8d30b8dd38922d0"} Mar 19 12:34:34.371438 master-0 kubenswrapper[31830]: I0319 12:34:34.371447 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c12d827230b153cce819d5f275d434d165b2ed5f6fb5e8a4a8d30b8dd38922d0" Mar 19 12:34:34.371676 master-0 kubenswrapper[31830]: I0319 12:34:34.371630 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-adbe-account-create-update-q4gms" Mar 19 12:34:34.373271 master-0 kubenswrapper[31830]: I0319 12:34:34.373235 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-72bf-account-create-update-tzxdw" event={"ID":"fa7e2b32-a302-4e00-8941-21b35df641fc","Type":"ContainerDied","Data":"f7dcaf4fd5ea03c5cce944d88d87d048bafeb12796050dee464bef0cc2932dbf"} Mar 19 12:34:34.373327 master-0 kubenswrapper[31830]: I0319 12:34:34.373268 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7dcaf4fd5ea03c5cce944d88d87d048bafeb12796050dee464bef0cc2932dbf" Mar 19 12:34:34.373362 master-0 kubenswrapper[31830]: I0319 12:34:34.373331 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-72bf-account-create-update-tzxdw" Mar 19 12:34:34.379484 master-0 kubenswrapper[31830]: I0319 12:34:34.379429 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-58xtb" event={"ID":"196bfd6e-b584-4ca9-a94d-f9928ae87a7f","Type":"ContainerDied","Data":"ace14b85e8d11d34d9417eea990892140768019ae2ce695e05f7774f721b6596"} Mar 19 12:34:34.379484 master-0 kubenswrapper[31830]: I0319 12:34:34.379485 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ace14b85e8d11d34d9417eea990892140768019ae2ce695e05f7774f721b6596" Mar 19 12:34:34.379620 master-0 kubenswrapper[31830]: I0319 12:34:34.379565 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-58xtb" Mar 19 12:34:34.387551 master-0 kubenswrapper[31830]: I0319 12:34:34.387512 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-b5gp7" event={"ID":"1574d508-d4a6-4baf-b732-ea6f8466d76c","Type":"ContainerDied","Data":"a46a4db70eab604bfc5d6f7e91c5b36c4c57f035f0763b7b76b611a7d8c34813"} Mar 19 12:34:34.387700 master-0 kubenswrapper[31830]: I0319 12:34:34.387687 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a46a4db70eab604bfc5d6f7e91c5b36c4c57f035f0763b7b76b611a7d8c34813" Mar 19 12:34:34.387847 master-0 kubenswrapper[31830]: I0319 12:34:34.387836 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-b5gp7" Mar 19 12:34:36.030373 master-0 kubenswrapper[31830]: I0319 12:34:36.030306 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-b5gp7"] Mar 19 12:34:36.040099 master-0 kubenswrapper[31830]: I0319 12:34:36.040046 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-b5gp7"] Mar 19 12:34:37.419710 master-0 kubenswrapper[31830]: I0319 12:34:37.419623 31830 generic.go:334] "Generic (PLEG): container finished" podID="512e045f-7b25-4992-a593-227de5818bb3" containerID="f426b36a1829ba5e4358a90b632b8a1a3ed81b8372fc32bc6f9370bd1e605a44" exitCode=0 Mar 19 12:34:37.419710 master-0 kubenswrapper[31830]: I0319 12:34:37.419679 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cm2zc" event={"ID":"512e045f-7b25-4992-a593-227de5818bb3","Type":"ContainerDied","Data":"f426b36a1829ba5e4358a90b632b8a1a3ed81b8372fc32bc6f9370bd1e605a44"} Mar 19 12:34:37.690638 master-0 kubenswrapper[31830]: I0319 12:34:37.690498 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1574d508-d4a6-4baf-b732-ea6f8466d76c" path="/var/lib/kubelet/pods/1574d508-d4a6-4baf-b732-ea6f8466d76c/volumes" Mar 19 12:34:37.728997 master-0 kubenswrapper[31830]: I0319 12:34:37.728948 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/736d878b-1328-4a36-873f-62849c4e2d07-etc-swift\") pod \"swift-storage-0\" (UID: \"736d878b-1328-4a36-873f-62849c4e2d07\") " pod="openstack/swift-storage-0" Mar 19 12:34:37.733087 master-0 kubenswrapper[31830]: I0319 12:34:37.733052 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/736d878b-1328-4a36-873f-62849c4e2d07-etc-swift\") pod \"swift-storage-0\" (UID: \"736d878b-1328-4a36-873f-62849c4e2d07\") " pod="openstack/swift-storage-0" Mar 19 12:34:38.022908 master-0 kubenswrapper[31830]: I0319 12:34:38.022843 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 19 12:34:38.885825 master-0 kubenswrapper[31830]: I0319 12:34:38.885744 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-dfsd7"] Mar 19 12:34:38.887419 master-0 kubenswrapper[31830]: E0319 12:34:38.887283 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bc881ed-8448-4279-97e5-cb834cab7a64" containerName="init" Mar 19 12:34:38.887419 master-0 kubenswrapper[31830]: I0319 12:34:38.887306 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bc881ed-8448-4279-97e5-cb834cab7a64" containerName="init" Mar 19 12:34:38.887419 master-0 kubenswrapper[31830]: E0319 12:34:38.887329 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1574d508-d4a6-4baf-b732-ea6f8466d76c" containerName="mariadb-account-create-update" Mar 19 12:34:38.887419 master-0 kubenswrapper[31830]: I0319 12:34:38.887336 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="1574d508-d4a6-4baf-b732-ea6f8466d76c" containerName="mariadb-account-create-update" Mar 19 12:34:38.887419 master-0 kubenswrapper[31830]: E0319 12:34:38.887372 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec706aca-7a17-4a8c-a287-80b6b964eed4" containerName="mariadb-account-create-update" Mar 19 12:34:38.887419 master-0 kubenswrapper[31830]: I0319 12:34:38.887381 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec706aca-7a17-4a8c-a287-80b6b964eed4" containerName="mariadb-account-create-update" Mar 19 12:34:38.887419 master-0 kubenswrapper[31830]: E0319 12:34:38.887388 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bc881ed-8448-4279-97e5-cb834cab7a64" containerName="dnsmasq-dns" Mar 19 12:34:38.887419 master-0 kubenswrapper[31830]: I0319 12:34:38.887394 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bc881ed-8448-4279-97e5-cb834cab7a64" containerName="dnsmasq-dns" Mar 19 12:34:38.887419 master-0 kubenswrapper[31830]: E0319 12:34:38.887405 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="196bfd6e-b584-4ca9-a94d-f9928ae87a7f" containerName="mariadb-database-create" Mar 19 12:34:38.887419 master-0 kubenswrapper[31830]: I0319 12:34:38.887412 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="196bfd6e-b584-4ca9-a94d-f9928ae87a7f" containerName="mariadb-database-create" Mar 19 12:34:38.887419 master-0 kubenswrapper[31830]: E0319 12:34:38.887424 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="674e9cd6-bf60-4cca-951f-de66c55e8ce5" containerName="mariadb-database-create" Mar 19 12:34:38.887419 master-0 kubenswrapper[31830]: I0319 12:34:38.887431 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="674e9cd6-bf60-4cca-951f-de66c55e8ce5" containerName="mariadb-database-create" Mar 19 12:34:38.887955 master-0 kubenswrapper[31830]: E0319 12:34:38.887442 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85fb70b5-81f7-417c-b0cf-f3c917d1bc90" containerName="mariadb-account-create-update" Mar 19 12:34:38.887955 master-0 kubenswrapper[31830]: I0319 12:34:38.887448 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="85fb70b5-81f7-417c-b0cf-f3c917d1bc90" containerName="mariadb-account-create-update" Mar 19 12:34:38.887955 master-0 kubenswrapper[31830]: E0319 12:34:38.887456 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88c4c83b-bbcc-44f3-aa58-880fd24e1e3f" containerName="mariadb-database-create" Mar 19 12:34:38.887955 master-0 
kubenswrapper[31830]: I0319 12:34:38.887462 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="88c4c83b-bbcc-44f3-aa58-880fd24e1e3f" containerName="mariadb-database-create" Mar 19 12:34:38.887955 master-0 kubenswrapper[31830]: E0319 12:34:38.887479 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa7e2b32-a302-4e00-8941-21b35df641fc" containerName="mariadb-account-create-update" Mar 19 12:34:38.887955 master-0 kubenswrapper[31830]: I0319 12:34:38.887485 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa7e2b32-a302-4e00-8941-21b35df641fc" containerName="mariadb-account-create-update" Mar 19 12:34:38.887955 master-0 kubenswrapper[31830]: I0319 12:34:38.887657 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec706aca-7a17-4a8c-a287-80b6b964eed4" containerName="mariadb-account-create-update" Mar 19 12:34:38.887955 master-0 kubenswrapper[31830]: I0319 12:34:38.887680 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="1574d508-d4a6-4baf-b732-ea6f8466d76c" containerName="mariadb-account-create-update" Mar 19 12:34:38.887955 master-0 kubenswrapper[31830]: I0319 12:34:38.887691 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa7e2b32-a302-4e00-8941-21b35df641fc" containerName="mariadb-account-create-update" Mar 19 12:34:38.887955 master-0 kubenswrapper[31830]: I0319 12:34:38.887702 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bc881ed-8448-4279-97e5-cb834cab7a64" containerName="dnsmasq-dns" Mar 19 12:34:38.887955 master-0 kubenswrapper[31830]: I0319 12:34:38.887717 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="88c4c83b-bbcc-44f3-aa58-880fd24e1e3f" containerName="mariadb-database-create" Mar 19 12:34:38.887955 master-0 kubenswrapper[31830]: I0319 12:34:38.887728 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="85fb70b5-81f7-417c-b0cf-f3c917d1bc90" containerName="mariadb-account-create-update" Mar 19 12:34:38.887955 master-0 kubenswrapper[31830]: I0319 12:34:38.887741 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="674e9cd6-bf60-4cca-951f-de66c55e8ce5" containerName="mariadb-database-create" Mar 19 12:34:38.887955 master-0 kubenswrapper[31830]: I0319 12:34:38.887764 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="196bfd6e-b584-4ca9-a94d-f9928ae87a7f" containerName="mariadb-database-create" Mar 19 12:34:38.888494 master-0 kubenswrapper[31830]: I0319 12:34:38.888417 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-dfsd7" Mar 19 12:34:38.900919 master-0 kubenswrapper[31830]: I0319 12:34:38.895107 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-f4e38-config-data" Mar 19 12:34:38.901136 master-0 kubenswrapper[31830]: I0319 12:34:38.901027 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-dfsd7"] Mar 19 12:34:39.033819 master-0 kubenswrapper[31830]: I0319 12:34:39.029132 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 19 12:34:39.053318 master-0 kubenswrapper[31830]: I0319 12:34:39.053270 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl2j4\" (UniqueName: \"kubernetes.io/projected/82a35ae5-08db-4571-977b-95d26158480e-kube-api-access-cl2j4\") pod \"glance-db-sync-dfsd7\" (UID: \"82a35ae5-08db-4571-977b-95d26158480e\") " pod="openstack/glance-db-sync-dfsd7" Mar 19 12:34:39.053682 master-0 kubenswrapper[31830]: I0319 12:34:39.053641 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82a35ae5-08db-4571-977b-95d26158480e-config-data\") pod \"glance-db-sync-dfsd7\" (UID: \"82a35ae5-08db-4571-977b-95d26158480e\") " pod="openstack/glance-db-sync-dfsd7" Mar 19 12:34:39.053770 master-0 kubenswrapper[31830]: I0319 12:34:39.053745 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82a35ae5-08db-4571-977b-95d26158480e-combined-ca-bundle\") pod \"glance-db-sync-dfsd7\" (UID: \"82a35ae5-08db-4571-977b-95d26158480e\") " pod="openstack/glance-db-sync-dfsd7" Mar 19 12:34:39.054128 master-0 kubenswrapper[31830]: I0319 12:34:39.054094 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/82a35ae5-08db-4571-977b-95d26158480e-db-sync-config-data\") pod \"glance-db-sync-dfsd7\" (UID: \"82a35ae5-08db-4571-977b-95d26158480e\") " pod="openstack/glance-db-sync-dfsd7" Mar 19 12:34:39.156546 master-0 kubenswrapper[31830]: I0319 12:34:39.156470 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/82a35ae5-08db-4571-977b-95d26158480e-db-sync-config-data\") pod \"glance-db-sync-dfsd7\" (UID: \"82a35ae5-08db-4571-977b-95d26158480e\") " pod="openstack/glance-db-sync-dfsd7" Mar 19 12:34:39.156546 master-0 kubenswrapper[31830]: I0319 12:34:39.156542 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl2j4\" (UniqueName: \"kubernetes.io/projected/82a35ae5-08db-4571-977b-95d26158480e-kube-api-access-cl2j4\") pod \"glance-db-sync-dfsd7\" (UID: \"82a35ae5-08db-4571-977b-95d26158480e\") " pod="openstack/glance-db-sync-dfsd7" Mar 19 12:34:39.156905 master-0 kubenswrapper[31830]: I0319 12:34:39.156709 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82a35ae5-08db-4571-977b-95d26158480e-config-data\") pod \"glance-db-sync-dfsd7\" (UID: \"82a35ae5-08db-4571-977b-95d26158480e\") " pod="openstack/glance-db-sync-dfsd7" Mar 19 12:34:39.156905 master-0 kubenswrapper[31830]: I0319 12:34:39.156756 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82a35ae5-08db-4571-977b-95d26158480e-combined-ca-bundle\") pod \"glance-db-sync-dfsd7\" (UID: \"82a35ae5-08db-4571-977b-95d26158480e\") " pod="openstack/glance-db-sync-dfsd7" Mar 19 12:34:39.160272 master-0 kubenswrapper[31830]: I0319 12:34:39.160185 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82a35ae5-08db-4571-977b-95d26158480e-combined-ca-bundle\") pod \"glance-db-sync-dfsd7\" (UID: \"82a35ae5-08db-4571-977b-95d26158480e\") " pod="openstack/glance-db-sync-dfsd7" Mar 19 12:34:39.161337 master-0 kubenswrapper[31830]: I0319 12:34:39.161297 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/82a35ae5-08db-4571-977b-95d26158480e-db-sync-config-data\") pod \"glance-db-sync-dfsd7\" (UID: \"82a35ae5-08db-4571-977b-95d26158480e\") " pod="openstack/glance-db-sync-dfsd7" Mar 19 12:34:39.162336 master-0 kubenswrapper[31830]: I0319 12:34:39.162296 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82a35ae5-08db-4571-977b-95d26158480e-config-data\") pod \"glance-db-sync-dfsd7\" (UID: \"82a35ae5-08db-4571-977b-95d26158480e\") " pod="openstack/glance-db-sync-dfsd7" Mar 19 12:34:39.201955 master-0 kubenswrapper[31830]: I0319 12:34:39.201918 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl2j4\" (UniqueName: \"kubernetes.io/projected/82a35ae5-08db-4571-977b-95d26158480e-kube-api-access-cl2j4\") pod \"glance-db-sync-dfsd7\" (UID: \"82a35ae5-08db-4571-977b-95d26158480e\") " pod="openstack/glance-db-sync-dfsd7" Mar 19 12:34:39.221444 master-0 kubenswrapper[31830]: I0319 12:34:39.221359 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-dfsd7" Mar 19 12:34:39.328881 master-0 kubenswrapper[31830]: I0319 12:34:39.327132 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-cm2zc" Mar 19 12:34:39.468423 master-0 kubenswrapper[31830]: I0319 12:34:39.468370 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/512e045f-7b25-4992-a593-227de5818bb3-dispersionconf\") pod \"512e045f-7b25-4992-a593-227de5818bb3\" (UID: \"512e045f-7b25-4992-a593-227de5818bb3\") " Mar 19 12:34:39.468661 master-0 kubenswrapper[31830]: I0319 12:34:39.468445 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/512e045f-7b25-4992-a593-227de5818bb3-scripts\") pod \"512e045f-7b25-4992-a593-227de5818bb3\" (UID: \"512e045f-7b25-4992-a593-227de5818bb3\") " Mar 19 12:34:39.468661 master-0 kubenswrapper[31830]: I0319 12:34:39.468472 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/512e045f-7b25-4992-a593-227de5818bb3-ring-data-devices\") pod \"512e045f-7b25-4992-a593-227de5818bb3\" (UID: \"512e045f-7b25-4992-a593-227de5818bb3\") " Mar 19 12:34:39.468661 master-0 kubenswrapper[31830]: I0319 12:34:39.468506 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/512e045f-7b25-4992-a593-227de5818bb3-swiftconf\") pod \"512e045f-7b25-4992-a593-227de5818bb3\" (UID: \"512e045f-7b25-4992-a593-227de5818bb3\") " Mar 19 12:34:39.468661 master-0 kubenswrapper[31830]: I0319 12:34:39.468542 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/512e045f-7b25-4992-a593-227de5818bb3-etc-swift\") pod \"512e045f-7b25-4992-a593-227de5818bb3\" (UID: \"512e045f-7b25-4992-a593-227de5818bb3\") " Mar 19 12:34:39.468661 master-0 kubenswrapper[31830]: I0319 12:34:39.468651 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/512e045f-7b25-4992-a593-227de5818bb3-combined-ca-bundle\") pod \"512e045f-7b25-4992-a593-227de5818bb3\" (UID: \"512e045f-7b25-4992-a593-227de5818bb3\") " Mar 19 12:34:39.468904 master-0 kubenswrapper[31830]: I0319 12:34:39.468777 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cttl7\" (UniqueName: \"kubernetes.io/projected/512e045f-7b25-4992-a593-227de5818bb3-kube-api-access-cttl7\") pod \"512e045f-7b25-4992-a593-227de5818bb3\" (UID: \"512e045f-7b25-4992-a593-227de5818bb3\") " Mar 19 12:34:39.470769 master-0 kubenswrapper[31830]: I0319 12:34:39.469745 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/512e045f-7b25-4992-a593-227de5818bb3-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "512e045f-7b25-4992-a593-227de5818bb3" (UID: "512e045f-7b25-4992-a593-227de5818bb3"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:34:39.470769 master-0 kubenswrapper[31830]: I0319 12:34:39.470004 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/512e045f-7b25-4992-a593-227de5818bb3-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "512e045f-7b25-4992-a593-227de5818bb3" (UID: "512e045f-7b25-4992-a593-227de5818bb3"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 12:34:39.475058 master-0 kubenswrapper[31830]: I0319 12:34:39.475013 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/512e045f-7b25-4992-a593-227de5818bb3-kube-api-access-cttl7" (OuterVolumeSpecName: "kube-api-access-cttl7") pod "512e045f-7b25-4992-a593-227de5818bb3" (UID: "512e045f-7b25-4992-a593-227de5818bb3"). InnerVolumeSpecName "kube-api-access-cttl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:34:39.475930 master-0 kubenswrapper[31830]: I0319 12:34:39.475887 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/512e045f-7b25-4992-a593-227de5818bb3-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "512e045f-7b25-4992-a593-227de5818bb3" (UID: "512e045f-7b25-4992-a593-227de5818bb3"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:34:39.494859 master-0 kubenswrapper[31830]: I0319 12:34:39.494765 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/512e045f-7b25-4992-a593-227de5818bb3-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "512e045f-7b25-4992-a593-227de5818bb3" (UID: "512e045f-7b25-4992-a593-227de5818bb3"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:34:39.496341 master-0 kubenswrapper[31830]: I0319 12:34:39.496290 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/512e045f-7b25-4992-a593-227de5818bb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "512e045f-7b25-4992-a593-227de5818bb3" (UID: "512e045f-7b25-4992-a593-227de5818bb3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:34:39.500664 master-0 kubenswrapper[31830]: I0319 12:34:39.500608 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/512e045f-7b25-4992-a593-227de5818bb3-scripts" (OuterVolumeSpecName: "scripts") pod "512e045f-7b25-4992-a593-227de5818bb3" (UID: "512e045f-7b25-4992-a593-227de5818bb3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:34:39.571641 master-0 kubenswrapper[31830]: I0319 12:34:39.571496 31830 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/512e045f-7b25-4992-a593-227de5818bb3-swiftconf\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:39.571641 master-0 kubenswrapper[31830]: I0319 12:34:39.571554 31830 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/512e045f-7b25-4992-a593-227de5818bb3-etc-swift\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:39.571641 master-0 kubenswrapper[31830]: I0319 12:34:39.571570 31830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/512e045f-7b25-4992-a593-227de5818bb3-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:39.571641 master-0 kubenswrapper[31830]: I0319 12:34:39.571585 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cttl7\" (UniqueName: \"kubernetes.io/projected/512e045f-7b25-4992-a593-227de5818bb3-kube-api-access-cttl7\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:39.571641 master-0 kubenswrapper[31830]: I0319 12:34:39.571598 31830 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/512e045f-7b25-4992-a593-227de5818bb3-dispersionconf\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:39.571641 master-0 kubenswrapper[31830]: I0319 12:34:39.571637 31830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/512e045f-7b25-4992-a593-227de5818bb3-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:39.571641 master-0 kubenswrapper[31830]: I0319 12:34:39.571650 31830 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/512e045f-7b25-4992-a593-227de5818bb3-ring-data-devices\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:39.707236 master-0 kubenswrapper[31830]: I0319 12:34:39.707174 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cm2zc" event={"ID":"512e045f-7b25-4992-a593-227de5818bb3","Type":"ContainerDied","Data":"2f5956ceddcf4ec856181bbd854283b35b9f252f320130100ecd38d01bb33c84"} Mar 19 12:34:39.707236 master-0 kubenswrapper[31830]: I0319 12:34:39.707234 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f5956ceddcf4ec856181bbd854283b35b9f252f320130100ecd38d01bb33c84" Mar 19 12:34:39.707496 master-0 kubenswrapper[31830]: I0319 12:34:39.707359 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-cm2zc" Mar 19 12:34:39.715814 master-0 kubenswrapper[31830]: I0319 12:34:39.715733 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"736d878b-1328-4a36-873f-62849c4e2d07","Type":"ContainerStarted","Data":"c4ebee14553adf5191eb9569ac2202dee12c4007c1129f528dd821443ab03d97"} Mar 19 12:34:39.845696 master-0 kubenswrapper[31830]: I0319 12:34:39.845260 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-dfsd7"] Mar 19 12:34:39.849073 master-0 kubenswrapper[31830]: W0319 12:34:39.848838 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82a35ae5_08db_4571_977b_95d26158480e.slice/crio-4269b85e69ef9bbc84212f443dc5ee97935002b61dd3a501bc87fa94346328d0 WatchSource:0}: Error finding container 4269b85e69ef9bbc84212f443dc5ee97935002b61dd3a501bc87fa94346328d0: Status 404 returned error can't find the container with id 4269b85e69ef9bbc84212f443dc5ee97935002b61dd3a501bc87fa94346328d0 Mar 19 12:34:40.254288 master-0 kubenswrapper[31830]: I0319 12:34:40.254244 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Mar 19 12:34:40.728725 master-0 kubenswrapper[31830]: I0319 12:34:40.728666 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dfsd7" event={"ID":"82a35ae5-08db-4571-977b-95d26158480e","Type":"ContainerStarted","Data":"4269b85e69ef9bbc84212f443dc5ee97935002b61dd3a501bc87fa94346328d0"} Mar 19 12:34:41.032589 master-0 kubenswrapper[31830]: I0319 12:34:41.032529 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-vztcb"] Mar 19 12:34:41.033083 master-0 kubenswrapper[31830]: E0319 12:34:41.033062 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="512e045f-7b25-4992-a593-227de5818bb3" containerName="swift-ring-rebalance" Mar 19 12:34:41.033083 master-0 kubenswrapper[31830]: I0319 12:34:41.033081 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="512e045f-7b25-4992-a593-227de5818bb3" containerName="swift-ring-rebalance" Mar 19 12:34:41.033476 master-0 kubenswrapper[31830]: I0319 12:34:41.033302 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="512e045f-7b25-4992-a593-227de5818bb3" containerName="swift-ring-rebalance" Mar 19 12:34:41.037521 master-0 kubenswrapper[31830]: I0319 12:34:41.034049 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vztcb" Mar 19 12:34:41.037521 master-0 kubenswrapper[31830]: I0319 12:34:41.035809 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 19 12:34:41.069825 master-0 kubenswrapper[31830]: I0319 12:34:41.066880 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vztcb"] Mar 19 12:34:41.209044 master-0 kubenswrapper[31830]: I0319 12:34:41.208983 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smhsq\" (UniqueName: \"kubernetes.io/projected/b0a0609b-7128-44ad-b501-9216196d8987-kube-api-access-smhsq\") pod \"root-account-create-update-vztcb\" (UID: \"b0a0609b-7128-44ad-b501-9216196d8987\") " pod="openstack/root-account-create-update-vztcb" Mar 19 12:34:41.209261 master-0 kubenswrapper[31830]: I0319 12:34:41.209106 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0a0609b-7128-44ad-b501-9216196d8987-operator-scripts\") pod \"root-account-create-update-vztcb\" (UID: \"b0a0609b-7128-44ad-b501-9216196d8987\") " pod="openstack/root-account-create-update-vztcb" Mar 19 12:34:41.256129 master-0 kubenswrapper[31830]: I0319 12:34:41.254692 31830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-kmq6z" podUID="0d516497-0523-41c4-a5cc-75fe94977ac3" containerName="ovn-controller" probeResult="failure" output=< Mar 19 12:34:41.256129 master-0 kubenswrapper[31830]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 19 12:34:41.256129 master-0 kubenswrapper[31830]: > Mar 19 12:34:41.311559 master-0 kubenswrapper[31830]: I0319 12:34:41.311519 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smhsq\" (UniqueName: \"kubernetes.io/projected/b0a0609b-7128-44ad-b501-9216196d8987-kube-api-access-smhsq\") pod \"root-account-create-update-vztcb\" (UID: \"b0a0609b-7128-44ad-b501-9216196d8987\") " pod="openstack/root-account-create-update-vztcb" Mar 19 12:34:41.312291 master-0 kubenswrapper[31830]: I0319 12:34:41.312160 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0a0609b-7128-44ad-b501-9216196d8987-operator-scripts\") pod \"root-account-create-update-vztcb\" (UID: \"b0a0609b-7128-44ad-b501-9216196d8987\") " pod="openstack/root-account-create-update-vztcb" Mar 19 12:34:41.313307 master-0 kubenswrapper[31830]: I0319 12:34:41.313258 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0a0609b-7128-44ad-b501-9216196d8987-operator-scripts\") pod \"root-account-create-update-vztcb\" (UID: \"b0a0609b-7128-44ad-b501-9216196d8987\") " pod="openstack/root-account-create-update-vztcb" Mar 19 12:34:41.327065 master-0 kubenswrapper[31830]: I0319 12:34:41.327014 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smhsq\" (UniqueName: \"kubernetes.io/projected/b0a0609b-7128-44ad-b501-9216196d8987-kube-api-access-smhsq\") pod \"root-account-create-update-vztcb\" (UID: \"b0a0609b-7128-44ad-b501-9216196d8987\") " pod="openstack/root-account-create-update-vztcb" Mar 19 12:34:41.421821 master-0 kubenswrapper[31830]: I0319 12:34:41.421241 
31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vztcb" Mar 19 12:34:41.758347 master-0 kubenswrapper[31830]: I0319 12:34:41.758213 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"736d878b-1328-4a36-873f-62849c4e2d07","Type":"ContainerStarted","Data":"48bca11bfc7ae5062828de77c88bc6ef6d65eb731a6d3aac62ef7a0fa5b0b3d8"} Mar 19 12:34:41.758347 master-0 kubenswrapper[31830]: I0319 12:34:41.758274 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"736d878b-1328-4a36-873f-62849c4e2d07","Type":"ContainerStarted","Data":"4ff51d35265d3408b5ffb522ed052446bcc483071873709876807319a221cb4e"} Mar 19 12:34:41.758347 master-0 kubenswrapper[31830]: I0319 12:34:41.758290 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"736d878b-1328-4a36-873f-62849c4e2d07","Type":"ContainerStarted","Data":"8a0703d404391508eb6f84b89a3f0682f993fd822bcc92d3d02adb27d52566e4"} Mar 19 12:34:41.758347 master-0 kubenswrapper[31830]: I0319 12:34:41.758298 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"736d878b-1328-4a36-873f-62849c4e2d07","Type":"ContainerStarted","Data":"ec4b263cd16710112aca9a75a822675c0e14c9f96370e64568d4f43b5ed7c238"} Mar 19 12:34:41.897473 master-0 kubenswrapper[31830]: I0319 12:34:41.897427 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vztcb"] Mar 19 12:34:42.779016 master-0 kubenswrapper[31830]: I0319 12:34:42.778611 31830 generic.go:334] "Generic (PLEG): container finished" podID="b0a0609b-7128-44ad-b501-9216196d8987" containerID="2364beb2d098a3420df16dd7c7a9d31908ea694493bacb80512b05fe0ba45bca" exitCode=0 Mar 19 12:34:42.779016 master-0 kubenswrapper[31830]: I0319 12:34:42.778670 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vztcb" event={"ID":"b0a0609b-7128-44ad-b501-9216196d8987","Type":"ContainerDied","Data":"2364beb2d098a3420df16dd7c7a9d31908ea694493bacb80512b05fe0ba45bca"} Mar 19 12:34:42.779016 master-0 kubenswrapper[31830]: I0319 12:34:42.778703 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vztcb" event={"ID":"b0a0609b-7128-44ad-b501-9216196d8987","Type":"ContainerStarted","Data":"24462921c28ce63f73feea5f9ef60ba6b8f46316860843857f8f00c295cfecd5"} Mar 19 12:34:43.802810 master-0 kubenswrapper[31830]: I0319 12:34:43.802611 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"736d878b-1328-4a36-873f-62849c4e2d07","Type":"ContainerStarted","Data":"07f8aa7d10003583f52aece0f57ab63361ecd36789e7fd2086ec5d0f9053a634"} Mar 19 12:34:43.802810 master-0 kubenswrapper[31830]: I0319 12:34:43.802666 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"736d878b-1328-4a36-873f-62849c4e2d07","Type":"ContainerStarted","Data":"c198271d213330f71901937c037ee090c96f8858fdfa78aab223a69e56a228be"} Mar 19 12:34:43.802810 master-0 kubenswrapper[31830]: I0319 12:34:43.802678 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"736d878b-1328-4a36-873f-62849c4e2d07","Type":"ContainerStarted","Data":"18d971c43e3af492df23e16986d91f0abcb4f55652df9ebf9bdb90476e27d7dd"} Mar 19 12:34:44.329822 master-0 kubenswrapper[31830]: I0319 12:34:44.328958 31830 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vztcb" Mar 19 12:34:44.513905 master-0 kubenswrapper[31830]: I0319 12:34:44.508989 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smhsq\" (UniqueName: \"kubernetes.io/projected/b0a0609b-7128-44ad-b501-9216196d8987-kube-api-access-smhsq\") pod \"b0a0609b-7128-44ad-b501-9216196d8987\" (UID: \"b0a0609b-7128-44ad-b501-9216196d8987\") " Mar 19 12:34:44.513905 master-0 kubenswrapper[31830]: I0319 12:34:44.509092 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0a0609b-7128-44ad-b501-9216196d8987-operator-scripts\") pod \"b0a0609b-7128-44ad-b501-9216196d8987\" (UID: \"b0a0609b-7128-44ad-b501-9216196d8987\") " Mar 19 12:34:44.513905 master-0 kubenswrapper[31830]: I0319 12:34:44.510687 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0a0609b-7128-44ad-b501-9216196d8987-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b0a0609b-7128-44ad-b501-9216196d8987" (UID: "b0a0609b-7128-44ad-b501-9216196d8987"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:34:44.513905 master-0 kubenswrapper[31830]: I0319 12:34:44.511110 31830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0a0609b-7128-44ad-b501-9216196d8987-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:44.515201 master-0 kubenswrapper[31830]: I0319 12:34:44.515133 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0a0609b-7128-44ad-b501-9216196d8987-kube-api-access-smhsq" (OuterVolumeSpecName: "kube-api-access-smhsq") pod "b0a0609b-7128-44ad-b501-9216196d8987" (UID: "b0a0609b-7128-44ad-b501-9216196d8987"). InnerVolumeSpecName "kube-api-access-smhsq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:34:44.612880 master-0 kubenswrapper[31830]: I0319 12:34:44.612201 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smhsq\" (UniqueName: \"kubernetes.io/projected/b0a0609b-7128-44ad-b501-9216196d8987-kube-api-access-smhsq\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:44.826364 master-0 kubenswrapper[31830]: I0319 12:34:44.826239 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"736d878b-1328-4a36-873f-62849c4e2d07","Type":"ContainerStarted","Data":"57dbdfecc77634c2778a045afca46ee878c37f40aa1a773c793c71c93f5cd389"} Mar 19 12:34:44.829600 master-0 kubenswrapper[31830]: I0319 12:34:44.829250 31830 generic.go:334] "Generic (PLEG): container finished" podID="e496a21c-f671-402f-a15c-911b063428c5" containerID="9bf824edea56fb1c32c625a1fbd691683bcfa672f40ef0b422c5bb99fb1aa218" exitCode=0 Mar 19 12:34:44.829600 master-0 kubenswrapper[31830]: I0319 12:34:44.829353 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e496a21c-f671-402f-a15c-911b063428c5","Type":"ContainerDied","Data":"9bf824edea56fb1c32c625a1fbd691683bcfa672f40ef0b422c5bb99fb1aa218"} Mar 19 12:34:44.835184 master-0 kubenswrapper[31830]: I0319 12:34:44.833994 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vztcb" event={"ID":"b0a0609b-7128-44ad-b501-9216196d8987","Type":"ContainerDied","Data":"24462921c28ce63f73feea5f9ef60ba6b8f46316860843857f8f00c295cfecd5"} Mar 19 12:34:44.835184 master-0 kubenswrapper[31830]: I0319 12:34:44.834042 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24462921c28ce63f73feea5f9ef60ba6b8f46316860843857f8f00c295cfecd5" Mar 19 12:34:44.835184 master-0 kubenswrapper[31830]: I0319 12:34:44.834040 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vztcb" Mar 19 12:34:45.849306 master-0 kubenswrapper[31830]: I0319 12:34:45.849236 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"736d878b-1328-4a36-873f-62849c4e2d07","Type":"ContainerStarted","Data":"bc0be8777473d0327fe6ff77e984e11292b1b119d2870acac83469db54039ab6"} Mar 19 12:34:45.849306 master-0 kubenswrapper[31830]: I0319 12:34:45.849303 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"736d878b-1328-4a36-873f-62849c4e2d07","Type":"ContainerStarted","Data":"cfd0b1612f9b5054fce4f830efa27494a5eb5409d35142f35d2aeaca433375e7"} Mar 19 12:34:45.851845 master-0 kubenswrapper[31830]: I0319 12:34:45.851815 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e496a21c-f671-402f-a15c-911b063428c5","Type":"ContainerStarted","Data":"c2c5ef516e5f520b33db32d5681cd24b0adf12e11c54fb8b22d283a3cc06de17"} Mar 19 12:34:45.852106 master-0 kubenswrapper[31830]: I0319 12:34:45.852066 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 19 12:34:45.854322 master-0 kubenswrapper[31830]: I0319 12:34:45.854283 31830 generic.go:334] "Generic (PLEG): container finished" podID="aee036d1-9a03-42ac-9beb-ef7ecc09c98d" containerID="6aa82fa73a97635de0a402e10eaa6df5a6d299e00d439bcce83aa933e91b0ce1" exitCode=0 Mar 19 12:34:45.854322 master-0 kubenswrapper[31830]: I0319 12:34:45.854321 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"aee036d1-9a03-42ac-9beb-ef7ecc09c98d","Type":"ContainerDied","Data":"6aa82fa73a97635de0a402e10eaa6df5a6d299e00d439bcce83aa933e91b0ce1"} Mar 19 12:34:45.915425 master-0 kubenswrapper[31830]: I0319 12:34:45.915355 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=59.866117942 podStartE2EDuration="1m8.915334133s" podCreationTimestamp="2026-03-19 12:33:37 +0000 UTC" firstStartedPulling="2026-03-19 12:34:01.672001949 +0000 UTC m=+1180.220962663" lastFinishedPulling="2026-03-19 12:34:10.72121815 +0000 UTC m=+1189.270178854" observedRunningTime="2026-03-19 12:34:45.913729453 +0000 UTC m=+1224.462690177" watchObservedRunningTime="2026-03-19 12:34:45.915334133 +0000 UTC m=+1224.464294837" Mar 19 12:34:46.270064 master-0 kubenswrapper[31830]: I0319 12:34:46.270002 31830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-kmq6z" podUID="0d516497-0523-41c4-a5cc-75fe94977ac3" containerName="ovn-controller" probeResult="failure" output=< Mar 19 12:34:46.270064 master-0 kubenswrapper[31830]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 19 12:34:46.270064 master-0 kubenswrapper[31830]: > Mar 19 12:34:46.295857 master-0 kubenswrapper[31830]: I0319 12:34:46.295809 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-xpwvp" Mar 19 12:34:46.308344 master-0 kubenswrapper[31830]: I0319 12:34:46.308297 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-xpwvp" Mar 19 12:34:46.722383 master-0 kubenswrapper[31830]: I0319 12:34:46.722313 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-kmq6z-config-d57hb"] Mar 19 12:34:46.726162 master-0 kubenswrapper[31830]: E0319 
12:34:46.724073 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0a0609b-7128-44ad-b501-9216196d8987" containerName="mariadb-account-create-update" Mar 19 12:34:46.726162 master-0 kubenswrapper[31830]: I0319 12:34:46.724107 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0a0609b-7128-44ad-b501-9216196d8987" containerName="mariadb-account-create-update" Mar 19 12:34:46.726162 master-0 kubenswrapper[31830]: I0319 12:34:46.724361 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0a0609b-7128-44ad-b501-9216196d8987" containerName="mariadb-account-create-update" Mar 19 12:34:46.726162 master-0 kubenswrapper[31830]: I0319 12:34:46.725365 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kmq6z-config-d57hb" Mar 19 12:34:46.730864 master-0 kubenswrapper[31830]: I0319 12:34:46.728417 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 19 12:34:46.798382 master-0 kubenswrapper[31830]: I0319 12:34:46.798218 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kmq6z-config-d57hb"] Mar 19 12:34:46.869021 master-0 kubenswrapper[31830]: I0319 12:34:46.868968 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghnr9\" (UniqueName: \"kubernetes.io/projected/db8da89c-b608-45f2-ab33-9017ff92989b-kube-api-access-ghnr9\") pod \"ovn-controller-kmq6z-config-d57hb\" (UID: \"db8da89c-b608-45f2-ab33-9017ff92989b\") " pod="openstack/ovn-controller-kmq6z-config-d57hb" Mar 19 12:34:46.871259 master-0 kubenswrapper[31830]: I0319 12:34:46.869464 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/db8da89c-b608-45f2-ab33-9017ff92989b-var-log-ovn\") pod \"ovn-controller-kmq6z-config-d57hb\" (UID: \"db8da89c-b608-45f2-ab33-9017ff92989b\") " pod="openstack/ovn-controller-kmq6z-config-d57hb" Mar 19 12:34:46.871259 master-0 kubenswrapper[31830]: I0319 12:34:46.869499 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/db8da89c-b608-45f2-ab33-9017ff92989b-var-run-ovn\") pod \"ovn-controller-kmq6z-config-d57hb\" (UID: \"db8da89c-b608-45f2-ab33-9017ff92989b\") " pod="openstack/ovn-controller-kmq6z-config-d57hb" Mar 19 12:34:46.871259 master-0 kubenswrapper[31830]: I0319 12:34:46.869562 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db8da89c-b608-45f2-ab33-9017ff92989b-scripts\") pod \"ovn-controller-kmq6z-config-d57hb\" (UID: \"db8da89c-b608-45f2-ab33-9017ff92989b\") " pod="openstack/ovn-controller-kmq6z-config-d57hb" Mar 19 12:34:46.871259 master-0 kubenswrapper[31830]: I0319 12:34:46.869714 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/db8da89c-b608-45f2-ab33-9017ff92989b-var-run\") pod \"ovn-controller-kmq6z-config-d57hb\" (UID: \"db8da89c-b608-45f2-ab33-9017ff92989b\") " pod="openstack/ovn-controller-kmq6z-config-d57hb" Mar 19 12:34:46.871259 master-0 kubenswrapper[31830]: I0319 12:34:46.869754 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/db8da89c-b608-45f2-ab33-9017ff92989b-additional-scripts\") pod \"ovn-controller-kmq6z-config-d57hb\" (UID: \"db8da89c-b608-45f2-ab33-9017ff92989b\") " pod="openstack/ovn-controller-kmq6z-config-d57hb" Mar 19 12:34:46.919554 master-0 kubenswrapper[31830]: I0319 12:34:46.919489 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"736d878b-1328-4a36-873f-62849c4e2d07","Type":"ContainerStarted","Data":"acabedd1d69399360a2e33c8adc1f9ab6a84be034d7ee55799ddae7349826685"} Mar 19 12:34:46.919554 master-0 kubenswrapper[31830]: I0319 12:34:46.919557 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"736d878b-1328-4a36-873f-62849c4e2d07","Type":"ContainerStarted","Data":"b6f0a103c2c451815d11441439004188b77275b07205e167e1d49a0e83031556"} Mar 19 12:34:46.919554 master-0 kubenswrapper[31830]: I0319 12:34:46.919571 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"736d878b-1328-4a36-873f-62849c4e2d07","Type":"ContainerStarted","Data":"9c0b3ea70d72c8adb2a7e66a74d4f4e05c564b6b75ac0f13f9c62c51924484e0"} Mar 19 12:34:46.919554 master-0 kubenswrapper[31830]: I0319 12:34:46.919583 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"736d878b-1328-4a36-873f-62849c4e2d07","Type":"ContainerStarted","Data":"24b21d644ffee2bbda2cecb6d110784dd5c4ba7d173714bc24a9dce3e3d71df8"} Mar 19 12:34:46.925073 master-0 kubenswrapper[31830]: I0319 12:34:46.925024 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"aee036d1-9a03-42ac-9beb-ef7ecc09c98d","Type":"ContainerStarted","Data":"1dee19fc87211438b72ed653971cbd28c18ccd4459b0e69046ec6603f68c69cc"} Mar 19 12:34:46.926640 master-0 kubenswrapper[31830]: I0319 12:34:46.926600 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 19 12:34:46.973108 master-0 kubenswrapper[31830]: I0319 12:34:46.973050 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/db8da89c-b608-45f2-ab33-9017ff92989b-var-run\") pod \"ovn-controller-kmq6z-config-d57hb\" (UID: \"db8da89c-b608-45f2-ab33-9017ff92989b\") " pod="openstack/ovn-controller-kmq6z-config-d57hb" Mar 19 12:34:46.973311 master-0 kubenswrapper[31830]: I0319 12:34:46.973125 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/db8da89c-b608-45f2-ab33-9017ff92989b-additional-scripts\") pod \"ovn-controller-kmq6z-config-d57hb\" (UID: \"db8da89c-b608-45f2-ab33-9017ff92989b\") " pod="openstack/ovn-controller-kmq6z-config-d57hb" Mar 19 12:34:46.973311 master-0 kubenswrapper[31830]: I0319 12:34:46.973164 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghnr9\" (UniqueName: \"kubernetes.io/projected/db8da89c-b608-45f2-ab33-9017ff92989b-kube-api-access-ghnr9\") pod \"ovn-controller-kmq6z-config-d57hb\" (UID: \"db8da89c-b608-45f2-ab33-9017ff92989b\") " pod="openstack/ovn-controller-kmq6z-config-d57hb" Mar 19 12:34:46.973311 master-0 kubenswrapper[31830]: I0319 12:34:46.973226 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/db8da89c-b608-45f2-ab33-9017ff92989b-var-log-ovn\") pod \"ovn-controller-kmq6z-config-d57hb\" (UID: 
\"db8da89c-b608-45f2-ab33-9017ff92989b\") " pod="openstack/ovn-controller-kmq6z-config-d57hb" Mar 19 12:34:46.973311 master-0 kubenswrapper[31830]: I0319 12:34:46.973249 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/db8da89c-b608-45f2-ab33-9017ff92989b-var-run-ovn\") pod \"ovn-controller-kmq6z-config-d57hb\" (UID: \"db8da89c-b608-45f2-ab33-9017ff92989b\") " pod="openstack/ovn-controller-kmq6z-config-d57hb" Mar 19 12:34:46.973311 master-0 kubenswrapper[31830]: I0319 12:34:46.973302 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db8da89c-b608-45f2-ab33-9017ff92989b-scripts\") pod \"ovn-controller-kmq6z-config-d57hb\" (UID: \"db8da89c-b608-45f2-ab33-9017ff92989b\") " pod="openstack/ovn-controller-kmq6z-config-d57hb" Mar 19 12:34:46.976241 master-0 kubenswrapper[31830]: I0319 12:34:46.976196 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db8da89c-b608-45f2-ab33-9017ff92989b-scripts\") pod \"ovn-controller-kmq6z-config-d57hb\" (UID: \"db8da89c-b608-45f2-ab33-9017ff92989b\") " pod="openstack/ovn-controller-kmq6z-config-d57hb" Mar 19 12:34:46.976383 master-0 kubenswrapper[31830]: I0319 12:34:46.976302 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/db8da89c-b608-45f2-ab33-9017ff92989b-var-run\") pod \"ovn-controller-kmq6z-config-d57hb\" (UID: \"db8da89c-b608-45f2-ab33-9017ff92989b\") " pod="openstack/ovn-controller-kmq6z-config-d57hb" Mar 19 12:34:46.976875 master-0 kubenswrapper[31830]: I0319 12:34:46.976828 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/db8da89c-b608-45f2-ab33-9017ff92989b-additional-scripts\") pod \"ovn-controller-kmq6z-config-d57hb\" (UID: \"db8da89c-b608-45f2-ab33-9017ff92989b\") " pod="openstack/ovn-controller-kmq6z-config-d57hb" Mar 19 12:34:46.977246 master-0 kubenswrapper[31830]: I0319 12:34:46.977202 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/db8da89c-b608-45f2-ab33-9017ff92989b-var-log-ovn\") pod \"ovn-controller-kmq6z-config-d57hb\" (UID: \"db8da89c-b608-45f2-ab33-9017ff92989b\") " pod="openstack/ovn-controller-kmq6z-config-d57hb" Mar 19 12:34:46.977333 master-0 kubenswrapper[31830]: I0319 12:34:46.977259 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/db8da89c-b608-45f2-ab33-9017ff92989b-var-run-ovn\") pod \"ovn-controller-kmq6z-config-d57hb\" (UID: \"db8da89c-b608-45f2-ab33-9017ff92989b\") " pod="openstack/ovn-controller-kmq6z-config-d57hb" Mar 19 12:34:46.978540 master-0 kubenswrapper[31830]: I0319 12:34:46.978454 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=60.187776211 podStartE2EDuration="1m8.978429243s" podCreationTimestamp="2026-03-19 12:33:38 +0000 UTC" firstStartedPulling="2026-03-19 12:34:02.202704528 +0000 UTC m=+1180.751665242" lastFinishedPulling="2026-03-19 12:34:10.99335757 +0000 UTC m=+1189.542318274" observedRunningTime="2026-03-19 12:34:46.964145629 +0000 UTC m=+1225.513106333" watchObservedRunningTime="2026-03-19 12:34:46.978429243 +0000 UTC m=+1225.527389947" Mar 19 12:34:46.997632 
master-0 kubenswrapper[31830]: I0319 12:34:46.997565 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghnr9\" (UniqueName: \"kubernetes.io/projected/db8da89c-b608-45f2-ab33-9017ff92989b-kube-api-access-ghnr9\") pod \"ovn-controller-kmq6z-config-d57hb\" (UID: \"db8da89c-b608-45f2-ab33-9017ff92989b\") " pod="openstack/ovn-controller-kmq6z-config-d57hb" Mar 19 12:34:47.062549 master-0 kubenswrapper[31830]: I0319 12:34:47.062504 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kmq6z-config-d57hb" Mar 19 12:34:47.619817 master-0 kubenswrapper[31830]: I0319 12:34:47.618155 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kmq6z-config-d57hb"] Mar 19 12:34:47.943198 master-0 kubenswrapper[31830]: I0319 12:34:47.939586 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kmq6z-config-d57hb" event={"ID":"db8da89c-b608-45f2-ab33-9017ff92989b","Type":"ContainerStarted","Data":"c66c2de7cf6bae88a5bd989b6ba7c60a29a8c837b65eca6c1b668b67ddda3d4a"} Mar 19 12:34:47.949821 master-0 kubenswrapper[31830]: I0319 12:34:47.948915 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"736d878b-1328-4a36-873f-62849c4e2d07","Type":"ContainerStarted","Data":"ea62407d793115819ddfa431c9eb41145bae2a8b5ccc5aaa8db09219716e4549"} Mar 19 12:34:48.024880 master-0 kubenswrapper[31830]: I0319 12:34:48.024759 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=22.754160694 podStartE2EDuration="29.024733091s" podCreationTimestamp="2026-03-19 12:34:19 +0000 UTC" firstStartedPulling="2026-03-19 12:34:39.06188169 +0000 UTC m=+1217.610842394" lastFinishedPulling="2026-03-19 12:34:45.332454087 +0000 UTC m=+1223.881414791" observedRunningTime="2026-03-19 12:34:47.999480608 +0000 UTC m=+1226.548441322" watchObservedRunningTime="2026-03-19 12:34:48.024733091 +0000 UTC m=+1226.573693825" Mar 19 12:34:48.322695 master-0 kubenswrapper[31830]: I0319 12:34:48.320978 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76986c7db5-mtxrk"] Mar 19 12:34:48.327870 master-0 kubenswrapper[31830]: I0319 12:34:48.325763 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76986c7db5-mtxrk" Mar 19 12:34:48.341209 master-0 kubenswrapper[31830]: I0319 12:34:48.340744 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Mar 19 12:34:48.385965 master-0 kubenswrapper[31830]: I0319 12:34:48.385824 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76986c7db5-mtxrk"] Mar 19 12:34:48.433271 master-0 kubenswrapper[31830]: I0319 12:34:48.432965 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-ovsdbserver-nb\") pod \"dnsmasq-dns-76986c7db5-mtxrk\" (UID: \"2b5c61f6-d6f5-4401-9b43-4817d229c0fe\") " pod="openstack/dnsmasq-dns-76986c7db5-mtxrk" Mar 19 12:34:48.433271 master-0 kubenswrapper[31830]: I0319 12:34:48.433044 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-dns-svc\") pod \"dnsmasq-dns-76986c7db5-mtxrk\" (UID: \"2b5c61f6-d6f5-4401-9b43-4817d229c0fe\") " pod="openstack/dnsmasq-dns-76986c7db5-mtxrk" Mar 19 12:34:48.433271 master-0 kubenswrapper[31830]: I0319 12:34:48.433086 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-ovsdbserver-sb\") pod \"dnsmasq-dns-76986c7db5-mtxrk\" (UID: \"2b5c61f6-d6f5-4401-9b43-4817d229c0fe\") " pod="openstack/dnsmasq-dns-76986c7db5-mtxrk" Mar 19 12:34:48.433271 master-0 kubenswrapper[31830]: I0319 12:34:48.433138 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-config\") pod \"dnsmasq-dns-76986c7db5-mtxrk\" (UID: \"2b5c61f6-d6f5-4401-9b43-4817d229c0fe\") " pod="openstack/dnsmasq-dns-76986c7db5-mtxrk" Mar 19 12:34:48.433271 master-0 kubenswrapper[31830]: I0319 12:34:48.433217 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-dns-swift-storage-0\") pod \"dnsmasq-dns-76986c7db5-mtxrk\" (UID: \"2b5c61f6-d6f5-4401-9b43-4817d229c0fe\") " pod="openstack/dnsmasq-dns-76986c7db5-mtxrk" Mar 19 12:34:48.434019 master-0 kubenswrapper[31830]: I0319 12:34:48.433288 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjwj6\" (UniqueName: \"kubernetes.io/projected/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-kube-api-access-bjwj6\") pod \"dnsmasq-dns-76986c7db5-mtxrk\" (UID: \"2b5c61f6-d6f5-4401-9b43-4817d229c0fe\") " pod="openstack/dnsmasq-dns-76986c7db5-mtxrk" Mar 19 12:34:48.534686 master-0 kubenswrapper[31830]: I0319 12:34:48.534630 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-dns-svc\") pod \"dnsmasq-dns-76986c7db5-mtxrk\" (UID: \"2b5c61f6-d6f5-4401-9b43-4817d229c0fe\") " pod="openstack/dnsmasq-dns-76986c7db5-mtxrk" Mar 19 12:34:48.534686 master-0 kubenswrapper[31830]: I0319 12:34:48.534683 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-ovsdbserver-sb\") pod \"dnsmasq-dns-76986c7db5-mtxrk\" (UID: \"2b5c61f6-d6f5-4401-9b43-4817d229c0fe\") " pod="openstack/dnsmasq-dns-76986c7db5-mtxrk" Mar 19 12:34:48.535003 master-0 kubenswrapper[31830]: I0319 12:34:48.534728 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-config\") pod \"dnsmasq-dns-76986c7db5-mtxrk\" (UID: \"2b5c61f6-d6f5-4401-9b43-4817d229c0fe\") " pod="openstack/dnsmasq-dns-76986c7db5-mtxrk" Mar 19 12:34:48.536375 master-0 kubenswrapper[31830]: I0319 12:34:48.536321 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-ovsdbserver-sb\") pod \"dnsmasq-dns-76986c7db5-mtxrk\" (UID: \"2b5c61f6-d6f5-4401-9b43-4817d229c0fe\") " pod="openstack/dnsmasq-dns-76986c7db5-mtxrk" Mar 19 12:34:48.536448 master-0 kubenswrapper[31830]: I0319 12:34:48.536396 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-dns-swift-storage-0\") pod \"dnsmasq-dns-76986c7db5-mtxrk\" (UID: \"2b5c61f6-d6f5-4401-9b43-4817d229c0fe\") " pod="openstack/dnsmasq-dns-76986c7db5-mtxrk" Mar 19 12:34:48.536675 master-0 kubenswrapper[31830]: I0319 12:34:48.536634 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-config\") pod \"dnsmasq-dns-76986c7db5-mtxrk\" (UID: \"2b5c61f6-d6f5-4401-9b43-4817d229c0fe\") " pod="openstack/dnsmasq-dns-76986c7db5-mtxrk" Mar 19 12:34:48.537407 master-0 kubenswrapper[31830]: I0319 12:34:48.537370 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjwj6\" (UniqueName: \"kubernetes.io/projected/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-kube-api-access-bjwj6\") pod \"dnsmasq-dns-76986c7db5-mtxrk\" (UID: \"2b5c61f6-d6f5-4401-9b43-4817d229c0fe\") " pod="openstack/dnsmasq-dns-76986c7db5-mtxrk" Mar 19 12:34:48.538170 master-0 kubenswrapper[31830]: I0319 12:34:48.538134 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-dns-swift-storage-0\") pod \"dnsmasq-dns-76986c7db5-mtxrk\" (UID: \"2b5c61f6-d6f5-4401-9b43-4817d229c0fe\") " pod="openstack/dnsmasq-dns-76986c7db5-mtxrk" Mar 19 12:34:48.538260 master-0 kubenswrapper[31830]: I0319 12:34:48.538211 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-dns-svc\") pod \"dnsmasq-dns-76986c7db5-mtxrk\" (UID: \"2b5c61f6-d6f5-4401-9b43-4817d229c0fe\") " pod="openstack/dnsmasq-dns-76986c7db5-mtxrk" Mar 19 12:34:48.538474 master-0 kubenswrapper[31830]: I0319 12:34:48.538440 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-ovsdbserver-nb\") pod \"dnsmasq-dns-76986c7db5-mtxrk\" (UID: \"2b5c61f6-d6f5-4401-9b43-4817d229c0fe\") " pod="openstack/dnsmasq-dns-76986c7db5-mtxrk" Mar 19 12:34:48.539713 master-0 kubenswrapper[31830]: I0319 12:34:48.539668 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-ovsdbserver-nb\") pod \"dnsmasq-dns-76986c7db5-mtxrk\" (UID: \"2b5c61f6-d6f5-4401-9b43-4817d229c0fe\") " pod="openstack/dnsmasq-dns-76986c7db5-mtxrk" Mar 19 12:34:48.554208 master-0 kubenswrapper[31830]: I0319 12:34:48.554144 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjwj6\" (UniqueName: \"kubernetes.io/projected/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-kube-api-access-bjwj6\") pod \"dnsmasq-dns-76986c7db5-mtxrk\" (UID: \"2b5c61f6-d6f5-4401-9b43-4817d229c0fe\") " pod="openstack/dnsmasq-dns-76986c7db5-mtxrk" Mar 19 12:34:48.678707 master-0 kubenswrapper[31830]: I0319 12:34:48.678664 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76986c7db5-mtxrk" Mar 19 12:34:48.965174 master-0 kubenswrapper[31830]: I0319 12:34:48.965105 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kmq6z-config-d57hb" event={"ID":"db8da89c-b608-45f2-ab33-9017ff92989b","Type":"ContainerStarted","Data":"ce24334c12809539e014acf572552cc188d98912ab79f2c4e36eb3def8067921"} Mar 19 12:34:48.996490 master-0 kubenswrapper[31830]: I0319 12:34:48.996409 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-kmq6z-config-d57hb" podStartSLOduration=2.9963903050000003 podStartE2EDuration="2.996390305s" podCreationTimestamp="2026-03-19 12:34:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:34:48.98976061 +0000 UTC m=+1227.538721314" watchObservedRunningTime="2026-03-19 12:34:48.996390305 +0000 UTC m=+1227.545351009" Mar 19 12:34:51.247016 master-0 kubenswrapper[31830]: I0319 12:34:51.246967 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-kmq6z" Mar 19 12:34:55.767611 master-0 kubenswrapper[31830]: I0319 12:34:55.767551 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76986c7db5-mtxrk"] Mar 19 12:34:55.774183 master-0 kubenswrapper[31830]: W0319 12:34:55.774050 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b5c61f6_d6f5_4401_9b43_4817d229c0fe.slice/crio-048a9c2eb39f3975da8bef1e95e60f81e9261d4d3b04820c7570b17ea06aea8b WatchSource:0}: Error finding container 048a9c2eb39f3975da8bef1e95e60f81e9261d4d3b04820c7570b17ea06aea8b: Status 404 returned error can't find the container with id 048a9c2eb39f3975da8bef1e95e60f81e9261d4d3b04820c7570b17ea06aea8b Mar 19 12:34:56.048243 master-0 kubenswrapper[31830]: I0319 12:34:56.048065 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dfsd7" event={"ID":"82a35ae5-08db-4571-977b-95d26158480e","Type":"ContainerStarted","Data":"496d2f74441c0012111b3d65a363d46cea5ee91d2808eb80d63004f1fafc2520"} Mar 19 12:34:56.053188 master-0 kubenswrapper[31830]: I0319 12:34:56.053138 31830 generic.go:334] "Generic (PLEG): container finished" podID="db8da89c-b608-45f2-ab33-9017ff92989b" containerID="ce24334c12809539e014acf572552cc188d98912ab79f2c4e36eb3def8067921" exitCode=0 Mar 19 12:34:56.053373 master-0 kubenswrapper[31830]: I0319 12:34:56.053224 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kmq6z-config-d57hb" 
event={"ID":"db8da89c-b608-45f2-ab33-9017ff92989b","Type":"ContainerDied","Data":"ce24334c12809539e014acf572552cc188d98912ab79f2c4e36eb3def8067921"} Mar 19 12:34:56.072622 master-0 kubenswrapper[31830]: I0319 12:34:56.072553 31830 generic.go:334] "Generic (PLEG): container finished" podID="2b5c61f6-d6f5-4401-9b43-4817d229c0fe" containerID="be14902d5a14c9bfeefced21bd8a53c67fe1951028b16938574e435937c7c55f" exitCode=0 Mar 19 12:34:56.076679 master-0 kubenswrapper[31830]: I0319 12:34:56.072659 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76986c7db5-mtxrk" event={"ID":"2b5c61f6-d6f5-4401-9b43-4817d229c0fe","Type":"ContainerDied","Data":"be14902d5a14c9bfeefced21bd8a53c67fe1951028b16938574e435937c7c55f"} Mar 19 12:34:56.076679 master-0 kubenswrapper[31830]: I0319 12:34:56.072714 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76986c7db5-mtxrk" event={"ID":"2b5c61f6-d6f5-4401-9b43-4817d229c0fe","Type":"ContainerStarted","Data":"048a9c2eb39f3975da8bef1e95e60f81e9261d4d3b04820c7570b17ea06aea8b"} Mar 19 12:34:56.098727 master-0 kubenswrapper[31830]: I0319 12:34:56.098604 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-dfsd7" podStartSLOduration=2.589916657 podStartE2EDuration="18.098581382s" podCreationTimestamp="2026-03-19 12:34:38 +0000 UTC" firstStartedPulling="2026-03-19 12:34:39.857821194 +0000 UTC m=+1218.406781898" lastFinishedPulling="2026-03-19 12:34:55.366485919 +0000 UTC m=+1233.915446623" observedRunningTime="2026-03-19 12:34:56.071557155 +0000 UTC m=+1234.620517869" watchObservedRunningTime="2026-03-19 12:34:56.098581382 +0000 UTC m=+1234.647542086" Mar 19 12:34:57.089763 master-0 kubenswrapper[31830]: I0319 12:34:57.089683 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76986c7db5-mtxrk" event={"ID":"2b5c61f6-d6f5-4401-9b43-4817d229c0fe","Type":"ContainerStarted","Data":"6a18f36d89e386a2d734febf3787bce1e02fa8ec48b42c67279c1f0c7fc794f5"} Mar 19 12:34:57.091993 master-0 kubenswrapper[31830]: I0319 12:34:57.091958 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76986c7db5-mtxrk" Mar 19 12:34:57.115553 master-0 kubenswrapper[31830]: I0319 12:34:57.115446 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76986c7db5-mtxrk" podStartSLOduration=9.115423577 podStartE2EDuration="9.115423577s" podCreationTimestamp="2026-03-19 12:34:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:34:57.105211731 +0000 UTC m=+1235.654172435" watchObservedRunningTime="2026-03-19 12:34:57.115423577 +0000 UTC m=+1235.664384281" Mar 19 12:34:57.508294 master-0 kubenswrapper[31830]: I0319 12:34:57.507380 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-kmq6z-config-d57hb" Mar 19 12:34:57.582174 master-0 kubenswrapper[31830]: I0319 12:34:57.582117 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/db8da89c-b608-45f2-ab33-9017ff92989b-var-log-ovn\") pod \"db8da89c-b608-45f2-ab33-9017ff92989b\" (UID: \"db8da89c-b608-45f2-ab33-9017ff92989b\") " Mar 19 12:34:57.582174 master-0 kubenswrapper[31830]: I0319 12:34:57.582194 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/db8da89c-b608-45f2-ab33-9017ff92989b-additional-scripts\") pod \"db8da89c-b608-45f2-ab33-9017ff92989b\" (UID: \"db8da89c-b608-45f2-ab33-9017ff92989b\") " Mar 19 12:34:57.582542 master-0 kubenswrapper[31830]: I0319 12:34:57.582231 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/db8da89c-b608-45f2-ab33-9017ff92989b-var-run-ovn\") pod \"db8da89c-b608-45f2-ab33-9017ff92989b\" (UID: \"db8da89c-b608-45f2-ab33-9017ff92989b\") " Mar 19 12:34:57.582542 master-0 kubenswrapper[31830]: I0319 12:34:57.582329 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghnr9\" (UniqueName: \"kubernetes.io/projected/db8da89c-b608-45f2-ab33-9017ff92989b-kube-api-access-ghnr9\") pod \"db8da89c-b608-45f2-ab33-9017ff92989b\" (UID: \"db8da89c-b608-45f2-ab33-9017ff92989b\") " Mar 19 12:34:57.582542 master-0 kubenswrapper[31830]: I0319 12:34:57.582351 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/db8da89c-b608-45f2-ab33-9017ff92989b-var-run\") pod \"db8da89c-b608-45f2-ab33-9017ff92989b\" (UID: \"db8da89c-b608-45f2-ab33-9017ff92989b\") " Mar 19 12:34:57.582542 master-0 kubenswrapper[31830]: I0319 12:34:57.582430 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db8da89c-b608-45f2-ab33-9017ff92989b-scripts\") pod \"db8da89c-b608-45f2-ab33-9017ff92989b\" (UID: \"db8da89c-b608-45f2-ab33-9017ff92989b\") " Mar 19 12:34:57.582968 master-0 kubenswrapper[31830]: I0319 12:34:57.582934 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db8da89c-b608-45f2-ab33-9017ff92989b-var-run" (OuterVolumeSpecName: "var-run") pod "db8da89c-b608-45f2-ab33-9017ff92989b" (UID: "db8da89c-b608-45f2-ab33-9017ff92989b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:34:57.583069 master-0 kubenswrapper[31830]: I0319 12:34:57.582977 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db8da89c-b608-45f2-ab33-9017ff92989b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "db8da89c-b608-45f2-ab33-9017ff92989b" (UID: "db8da89c-b608-45f2-ab33-9017ff92989b"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:34:57.583189 master-0 kubenswrapper[31830]: I0319 12:34:57.582949 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db8da89c-b608-45f2-ab33-9017ff92989b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "db8da89c-b608-45f2-ab33-9017ff92989b" (UID: "db8da89c-b608-45f2-ab33-9017ff92989b"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:34:57.583478 master-0 kubenswrapper[31830]: I0319 12:34:57.583320 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db8da89c-b608-45f2-ab33-9017ff92989b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "db8da89c-b608-45f2-ab33-9017ff92989b" (UID: "db8da89c-b608-45f2-ab33-9017ff92989b"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:34:57.583704 master-0 kubenswrapper[31830]: I0319 12:34:57.583666 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db8da89c-b608-45f2-ab33-9017ff92989b-scripts" (OuterVolumeSpecName: "scripts") pod "db8da89c-b608-45f2-ab33-9017ff92989b" (UID: "db8da89c-b608-45f2-ab33-9017ff92989b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:34:57.586248 master-0 kubenswrapper[31830]: I0319 12:34:57.586184 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db8da89c-b608-45f2-ab33-9017ff92989b-kube-api-access-ghnr9" (OuterVolumeSpecName: "kube-api-access-ghnr9") pod "db8da89c-b608-45f2-ab33-9017ff92989b" (UID: "db8da89c-b608-45f2-ab33-9017ff92989b"). InnerVolumeSpecName "kube-api-access-ghnr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:34:57.684327 master-0 kubenswrapper[31830]: I0319 12:34:57.684183 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghnr9\" (UniqueName: \"kubernetes.io/projected/db8da89c-b608-45f2-ab33-9017ff92989b-kube-api-access-ghnr9\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:57.684327 master-0 kubenswrapper[31830]: I0319 12:34:57.684233 31830 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/db8da89c-b608-45f2-ab33-9017ff92989b-var-run\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:57.684327 master-0 kubenswrapper[31830]: I0319 12:34:57.684249 31830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db8da89c-b608-45f2-ab33-9017ff92989b-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:57.684327 master-0 kubenswrapper[31830]: I0319 12:34:57.684259 31830 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/db8da89c-b608-45f2-ab33-9017ff92989b-var-log-ovn\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:57.684327 master-0 kubenswrapper[31830]: I0319 12:34:57.684268 31830 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/db8da89c-b608-45f2-ab33-9017ff92989b-additional-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:57.684327 master-0 kubenswrapper[31830]: I0319 12:34:57.684277 31830 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/db8da89c-b608-45f2-ab33-9017ff92989b-var-run-ovn\") on node \"master-0\" DevicePath \"\"" Mar 19 12:34:58.102583 master-0 kubenswrapper[31830]: I0319 12:34:58.102434 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kmq6z-config-d57hb" event={"ID":"db8da89c-b608-45f2-ab33-9017ff92989b","Type":"ContainerDied","Data":"c66c2de7cf6bae88a5bd989b6ba7c60a29a8c837b65eca6c1b668b67ddda3d4a"} Mar 19 12:34:58.102583 master-0 kubenswrapper[31830]: I0319 12:34:58.102491 31830 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c66c2de7cf6bae88a5bd989b6ba7c60a29a8c837b65eca6c1b668b67ddda3d4a" Mar 19 12:34:58.102583 master-0 kubenswrapper[31830]: I0319 12:34:58.102461 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kmq6z-config-d57hb" Mar 19 12:34:58.248271 master-0 kubenswrapper[31830]: I0319 12:34:58.248211 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-kmq6z-config-d57hb"] Mar 19 12:34:58.261214 master-0 kubenswrapper[31830]: I0319 12:34:58.261158 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-kmq6z-config-d57hb"] Mar 19 12:34:59.562190 master-0 kubenswrapper[31830]: I0319 12:34:59.562126 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 19 12:34:59.699904 master-0 kubenswrapper[31830]: I0319 12:34:59.699841 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db8da89c-b608-45f2-ab33-9017ff92989b" path="/var/lib/kubelet/pods/db8da89c-b608-45f2-ab33-9017ff92989b/volumes" Mar 19 12:35:02.285649 master-0 kubenswrapper[31830]: I0319 12:35:02.285580 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 19 12:35:02.565926 master-0 kubenswrapper[31830]: I0319 12:35:02.565073 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-mhxb6"] Mar 19 12:35:02.565926 master-0 kubenswrapper[31830]: E0319 12:35:02.565779 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db8da89c-b608-45f2-ab33-9017ff92989b" containerName="ovn-config" Mar 19 12:35:02.565926 master-0 kubenswrapper[31830]: I0319 12:35:02.565813 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="db8da89c-b608-45f2-ab33-9017ff92989b" containerName="ovn-config" Mar 19 12:35:02.570832 master-0 kubenswrapper[31830]: I0319 12:35:02.569383 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="db8da89c-b608-45f2-ab33-9017ff92989b" containerName="ovn-config" Mar 19 12:35:02.570832 master-0 kubenswrapper[31830]: I0319 12:35:02.570403 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-mhxb6" Mar 19 12:35:02.625394 master-0 kubenswrapper[31830]: I0319 12:35:02.624496 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-mhxb6"] Mar 19 12:35:02.716819 master-0 kubenswrapper[31830]: I0319 12:35:02.716616 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjhpx\" (UniqueName: \"kubernetes.io/projected/48c9a901-d8c0-453d-8525-bf69f7710e6b-kube-api-access-xjhpx\") pod \"cinder-db-create-mhxb6\" (UID: \"48c9a901-d8c0-453d-8525-bf69f7710e6b\") " pod="openstack/cinder-db-create-mhxb6" Mar 19 12:35:02.716819 master-0 kubenswrapper[31830]: I0319 12:35:02.716716 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48c9a901-d8c0-453d-8525-bf69f7710e6b-operator-scripts\") pod \"cinder-db-create-mhxb6\" (UID: \"48c9a901-d8c0-453d-8525-bf69f7710e6b\") " pod="openstack/cinder-db-create-mhxb6" Mar 19 12:35:02.725827 master-0 kubenswrapper[31830]: I0319 12:35:02.722827 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ec8b-account-create-update-bs98g"] Mar 19 12:35:02.725827 master-0 kubenswrapper[31830]: I0319 12:35:02.724523 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ec8b-account-create-update-bs98g" Mar 19 12:35:02.733823 master-0 kubenswrapper[31830]: I0319 12:35:02.731392 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Mar 19 12:35:02.753504 master-0 kubenswrapper[31830]: I0319 12:35:02.753451 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ec8b-account-create-update-bs98g"] Mar 19 12:35:02.821326 master-0 kubenswrapper[31830]: I0319 12:35:02.821176 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjhpx\" (UniqueName: \"kubernetes.io/projected/48c9a901-d8c0-453d-8525-bf69f7710e6b-kube-api-access-xjhpx\") pod \"cinder-db-create-mhxb6\" (UID: \"48c9a901-d8c0-453d-8525-bf69f7710e6b\") " pod="openstack/cinder-db-create-mhxb6" Mar 19 12:35:02.821643 master-0 kubenswrapper[31830]: I0319 12:35:02.821621 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxm56\" (UniqueName: \"kubernetes.io/projected/373619bf-a142-44fd-b4b4-25d7cc74dda4-kube-api-access-mxm56\") pod \"cinder-ec8b-account-create-update-bs98g\" (UID: \"373619bf-a142-44fd-b4b4-25d7cc74dda4\") " pod="openstack/cinder-ec8b-account-create-update-bs98g" Mar 19 12:35:02.821821 master-0 kubenswrapper[31830]: I0319 12:35:02.821786 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48c9a901-d8c0-453d-8525-bf69f7710e6b-operator-scripts\") pod \"cinder-db-create-mhxb6\" (UID: \"48c9a901-d8c0-453d-8525-bf69f7710e6b\") " pod="openstack/cinder-db-create-mhxb6" Mar 19 12:35:02.821994 master-0 kubenswrapper[31830]: I0319 12:35:02.821976 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/373619bf-a142-44fd-b4b4-25d7cc74dda4-operator-scripts\") pod \"cinder-ec8b-account-create-update-bs98g\" (UID: \"373619bf-a142-44fd-b4b4-25d7cc74dda4\") " pod="openstack/cinder-ec8b-account-create-update-bs98g" Mar 19 
12:35:02.823590 master-0 kubenswrapper[31830]: I0319 12:35:02.823568 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48c9a901-d8c0-453d-8525-bf69f7710e6b-operator-scripts\") pod \"cinder-db-create-mhxb6\" (UID: \"48c9a901-d8c0-453d-8525-bf69f7710e6b\") " pod="openstack/cinder-db-create-mhxb6" Mar 19 12:35:02.870821 master-0 kubenswrapper[31830]: I0319 12:35:02.870085 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjhpx\" (UniqueName: \"kubernetes.io/projected/48c9a901-d8c0-453d-8525-bf69f7710e6b-kube-api-access-xjhpx\") pod \"cinder-db-create-mhxb6\" (UID: \"48c9a901-d8c0-453d-8525-bf69f7710e6b\") " pod="openstack/cinder-db-create-mhxb6" Mar 19 12:35:02.909419 master-0 kubenswrapper[31830]: I0319 12:35:02.909356 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-mhxb6" Mar 19 12:35:02.924187 master-0 kubenswrapper[31830]: I0319 12:35:02.924132 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxm56\" (UniqueName: \"kubernetes.io/projected/373619bf-a142-44fd-b4b4-25d7cc74dda4-kube-api-access-mxm56\") pod \"cinder-ec8b-account-create-update-bs98g\" (UID: \"373619bf-a142-44fd-b4b4-25d7cc74dda4\") " pod="openstack/cinder-ec8b-account-create-update-bs98g" Mar 19 12:35:02.924394 master-0 kubenswrapper[31830]: I0319 12:35:02.924276 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/373619bf-a142-44fd-b4b4-25d7cc74dda4-operator-scripts\") pod \"cinder-ec8b-account-create-update-bs98g\" (UID: \"373619bf-a142-44fd-b4b4-25d7cc74dda4\") " pod="openstack/cinder-ec8b-account-create-update-bs98g" Mar 19 12:35:02.925126 master-0 kubenswrapper[31830]: I0319 12:35:02.925087 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/373619bf-a142-44fd-b4b4-25d7cc74dda4-operator-scripts\") pod \"cinder-ec8b-account-create-update-bs98g\" (UID: \"373619bf-a142-44fd-b4b4-25d7cc74dda4\") " pod="openstack/cinder-ec8b-account-create-update-bs98g" Mar 19 12:35:02.966716 master-0 kubenswrapper[31830]: I0319 12:35:02.966656 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxm56\" (UniqueName: \"kubernetes.io/projected/373619bf-a142-44fd-b4b4-25d7cc74dda4-kube-api-access-mxm56\") pod \"cinder-ec8b-account-create-update-bs98g\" (UID: \"373619bf-a142-44fd-b4b4-25d7cc74dda4\") " pod="openstack/cinder-ec8b-account-create-update-bs98g" Mar 19 12:35:03.038921 master-0 kubenswrapper[31830]: I0319 12:35:03.038852 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-t4w4t"] Mar 19 12:35:03.043819 master-0 kubenswrapper[31830]: I0319 12:35:03.040988 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-t4w4t" Mar 19 12:35:03.047241 master-0 kubenswrapper[31830]: I0319 12:35:03.047200 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 19 12:35:03.053876 master-0 kubenswrapper[31830]: I0319 12:35:03.052399 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 19 12:35:03.053876 master-0 kubenswrapper[31830]: I0319 12:35:03.052643 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 19 12:35:03.058853 master-0 kubenswrapper[31830]: I0319 12:35:03.057226 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-t4w4t"] Mar 19 12:35:03.058853 master-0 kubenswrapper[31830]: I0319 12:35:03.057680 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ec8b-account-create-update-bs98g" Mar 19 12:35:03.091648 master-0 kubenswrapper[31830]: I0319 12:35:03.089627 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-m2k5f"] Mar 19 12:35:03.091648 master-0 kubenswrapper[31830]: I0319 12:35:03.091088 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-m2k5f" Mar 19 12:35:03.102051 master-0 kubenswrapper[31830]: I0319 12:35:03.101988 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-m2k5f"] Mar 19 12:35:03.131120 master-0 kubenswrapper[31830]: I0319 12:35:03.131062 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4s2j\" (UniqueName: \"kubernetes.io/projected/78caf503-3472-47a9-9107-4d260f898fb2-kube-api-access-s4s2j\") pod \"keystone-db-sync-t4w4t\" (UID: \"78caf503-3472-47a9-9107-4d260f898fb2\") " pod="openstack/keystone-db-sync-t4w4t" Mar 19 12:35:03.131245 master-0 kubenswrapper[31830]: I0319 12:35:03.131191 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78caf503-3472-47a9-9107-4d260f898fb2-combined-ca-bundle\") pod \"keystone-db-sync-t4w4t\" (UID: \"78caf503-3472-47a9-9107-4d260f898fb2\") " pod="openstack/keystone-db-sync-t4w4t" Mar 19 12:35:03.131689 master-0 kubenswrapper[31830]: I0319 12:35:03.131327 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78caf503-3472-47a9-9107-4d260f898fb2-config-data\") pod \"keystone-db-sync-t4w4t\" (UID: \"78caf503-3472-47a9-9107-4d260f898fb2\") " pod="openstack/keystone-db-sync-t4w4t" Mar 19 12:35:03.238320 master-0 kubenswrapper[31830]: I0319 12:35:03.238268 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78caf503-3472-47a9-9107-4d260f898fb2-config-data\") pod \"keystone-db-sync-t4w4t\" (UID: \"78caf503-3472-47a9-9107-4d260f898fb2\") " pod="openstack/keystone-db-sync-t4w4t" Mar 19 12:35:03.239705 master-0 kubenswrapper[31830]: I0319 12:35:03.239674 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pd8l\" (UniqueName: \"kubernetes.io/projected/9559d792-d79a-48bf-9ad0-b157b0e2684f-kube-api-access-9pd8l\") pod \"neutron-db-create-m2k5f\" (UID: \"9559d792-d79a-48bf-9ad0-b157b0e2684f\") " pod="openstack/neutron-db-create-m2k5f" Mar 
19 12:35:03.239969 master-0 kubenswrapper[31830]: I0319 12:35:03.239944 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4s2j\" (UniqueName: \"kubernetes.io/projected/78caf503-3472-47a9-9107-4d260f898fb2-kube-api-access-s4s2j\") pod \"keystone-db-sync-t4w4t\" (UID: \"78caf503-3472-47a9-9107-4d260f898fb2\") " pod="openstack/keystone-db-sync-t4w4t" Mar 19 12:35:03.240386 master-0 kubenswrapper[31830]: I0319 12:35:03.240360 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9559d792-d79a-48bf-9ad0-b157b0e2684f-operator-scripts\") pod \"neutron-db-create-m2k5f\" (UID: \"9559d792-d79a-48bf-9ad0-b157b0e2684f\") " pod="openstack/neutron-db-create-m2k5f" Mar 19 12:35:03.240563 master-0 kubenswrapper[31830]: I0319 12:35:03.240545 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78caf503-3472-47a9-9107-4d260f898fb2-combined-ca-bundle\") pod \"keystone-db-sync-t4w4t\" (UID: \"78caf503-3472-47a9-9107-4d260f898fb2\") " pod="openstack/keystone-db-sync-t4w4t" Mar 19 12:35:03.260186 master-0 kubenswrapper[31830]: I0319 12:35:03.260137 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6934-account-create-update-q844m"] Mar 19 12:35:03.261402 master-0 kubenswrapper[31830]: I0319 12:35:03.261337 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78caf503-3472-47a9-9107-4d260f898fb2-config-data\") pod \"keystone-db-sync-t4w4t\" (UID: \"78caf503-3472-47a9-9107-4d260f898fb2\") " pod="openstack/keystone-db-sync-t4w4t" Mar 19 12:35:03.261728 master-0 kubenswrapper[31830]: I0319 12:35:03.261690 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6934-account-create-update-q844m" Mar 19 12:35:03.263091 master-0 kubenswrapper[31830]: I0319 12:35:03.263053 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78caf503-3472-47a9-9107-4d260f898fb2-combined-ca-bundle\") pod \"keystone-db-sync-t4w4t\" (UID: \"78caf503-3472-47a9-9107-4d260f898fb2\") " pod="openstack/keystone-db-sync-t4w4t" Mar 19 12:35:03.266278 master-0 kubenswrapper[31830]: I0319 12:35:03.266245 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Mar 19 12:35:03.274121 master-0 kubenswrapper[31830]: I0319 12:35:03.274074 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4s2j\" (UniqueName: \"kubernetes.io/projected/78caf503-3472-47a9-9107-4d260f898fb2-kube-api-access-s4s2j\") pod \"keystone-db-sync-t4w4t\" (UID: \"78caf503-3472-47a9-9107-4d260f898fb2\") " pod="openstack/keystone-db-sync-t4w4t" Mar 19 12:35:03.283265 master-0 kubenswrapper[31830]: I0319 12:35:03.283218 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6934-account-create-update-q844m"] Mar 19 12:35:03.354080 master-0 kubenswrapper[31830]: I0319 12:35:03.342464 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pd8l\" (UniqueName: \"kubernetes.io/projected/9559d792-d79a-48bf-9ad0-b157b0e2684f-kube-api-access-9pd8l\") pod \"neutron-db-create-m2k5f\" (UID: \"9559d792-d79a-48bf-9ad0-b157b0e2684f\") " pod="openstack/neutron-db-create-m2k5f" Mar 19 12:35:03.354080 master-0 kubenswrapper[31830]: I0319 12:35:03.342514 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk5gd\" (UniqueName: \"kubernetes.io/projected/b5409fe3-cc0f-4ba4-a1f3-93f2ae986204-kube-api-access-bk5gd\") pod \"neutron-6934-account-create-update-q844m\" (UID: \"b5409fe3-cc0f-4ba4-a1f3-93f2ae986204\") " pod="openstack/neutron-6934-account-create-update-q844m" Mar 19 12:35:03.354080 master-0 kubenswrapper[31830]: I0319 12:35:03.342606 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5409fe3-cc0f-4ba4-a1f3-93f2ae986204-operator-scripts\") pod \"neutron-6934-account-create-update-q844m\" (UID: \"b5409fe3-cc0f-4ba4-a1f3-93f2ae986204\") " pod="openstack/neutron-6934-account-create-update-q844m" Mar 19 12:35:03.354080 master-0 kubenswrapper[31830]: I0319 12:35:03.342634 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9559d792-d79a-48bf-9ad0-b157b0e2684f-operator-scripts\") pod \"neutron-db-create-m2k5f\" (UID: \"9559d792-d79a-48bf-9ad0-b157b0e2684f\") " pod="openstack/neutron-db-create-m2k5f" Mar 19 12:35:03.354080 master-0 kubenswrapper[31830]: I0319 12:35:03.343580 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9559d792-d79a-48bf-9ad0-b157b0e2684f-operator-scripts\") pod \"neutron-db-create-m2k5f\" (UID: \"9559d792-d79a-48bf-9ad0-b157b0e2684f\") " pod="openstack/neutron-db-create-m2k5f" Mar 19 12:35:03.364482 master-0 kubenswrapper[31830]: I0319 12:35:03.364406 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pd8l\" (UniqueName: 
\"kubernetes.io/projected/9559d792-d79a-48bf-9ad0-b157b0e2684f-kube-api-access-9pd8l\") pod \"neutron-db-create-m2k5f\" (UID: \"9559d792-d79a-48bf-9ad0-b157b0e2684f\") " pod="openstack/neutron-db-create-m2k5f" Mar 19 12:35:03.444264 master-0 kubenswrapper[31830]: I0319 12:35:03.444200 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk5gd\" (UniqueName: \"kubernetes.io/projected/b5409fe3-cc0f-4ba4-a1f3-93f2ae986204-kube-api-access-bk5gd\") pod \"neutron-6934-account-create-update-q844m\" (UID: \"b5409fe3-cc0f-4ba4-a1f3-93f2ae986204\") " pod="openstack/neutron-6934-account-create-update-q844m" Mar 19 12:35:03.444514 master-0 kubenswrapper[31830]: I0319 12:35:03.444313 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5409fe3-cc0f-4ba4-a1f3-93f2ae986204-operator-scripts\") pod \"neutron-6934-account-create-update-q844m\" (UID: \"b5409fe3-cc0f-4ba4-a1f3-93f2ae986204\") " pod="openstack/neutron-6934-account-create-update-q844m" Mar 19 12:35:03.445259 master-0 kubenswrapper[31830]: I0319 12:35:03.445217 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5409fe3-cc0f-4ba4-a1f3-93f2ae986204-operator-scripts\") pod \"neutron-6934-account-create-update-q844m\" (UID: \"b5409fe3-cc0f-4ba4-a1f3-93f2ae986204\") " pod="openstack/neutron-6934-account-create-update-q844m" Mar 19 12:35:03.461963 master-0 kubenswrapper[31830]: I0319 12:35:03.461912 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk5gd\" (UniqueName: \"kubernetes.io/projected/b5409fe3-cc0f-4ba4-a1f3-93f2ae986204-kube-api-access-bk5gd\") pod \"neutron-6934-account-create-update-q844m\" (UID: \"b5409fe3-cc0f-4ba4-a1f3-93f2ae986204\") " pod="openstack/neutron-6934-account-create-update-q844m" Mar 19 12:35:03.486502 master-0 kubenswrapper[31830]: I0319 12:35:03.486453 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-t4w4t" Mar 19 12:35:03.551925 master-0 kubenswrapper[31830]: I0319 12:35:03.551790 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-m2k5f" Mar 19 12:35:03.582927 master-0 kubenswrapper[31830]: I0319 12:35:03.579443 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6934-account-create-update-q844m" Mar 19 12:35:03.608121 master-0 kubenswrapper[31830]: I0319 12:35:03.606595 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-mhxb6"] Mar 19 12:35:03.726266 master-0 kubenswrapper[31830]: I0319 12:35:03.726199 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76986c7db5-mtxrk" Mar 19 12:35:03.768042 master-0 kubenswrapper[31830]: I0319 12:35:03.766602 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ec8b-account-create-update-bs98g"] Mar 19 12:35:03.877089 master-0 kubenswrapper[31830]: I0319 12:35:03.877028 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf8b865dc-djbrh"] Mar 19 12:35:03.878339 master-0 kubenswrapper[31830]: I0319 12:35:03.877438 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf8b865dc-djbrh" podUID="3a1d3222-9623-4753-9b0a-8d8da0fb3f1e" containerName="dnsmasq-dns" containerID="cri-o://1bc4c0b6899dd54cdca3fe24d0eed96201228c3fc591b3ae37d2ea67cd328dd0" gracePeriod=10 Mar 19 12:35:04.008855 master-0 kubenswrapper[31830]: I0319 12:35:04.006366 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-t4w4t"] Mar 19 12:35:04.035884 master-0 kubenswrapper[31830]: W0319 12:35:04.035827 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78caf503_3472_47a9_9107_4d260f898fb2.slice/crio-681f12f51acc829ca22f6aec349abe93d28636bca85b150aa6e04ee3b31c770e WatchSource:0}: Error finding container 681f12f51acc829ca22f6aec349abe93d28636bca85b150aa6e04ee3b31c770e: Status 404 returned error can't find the container with id 681f12f51acc829ca22f6aec349abe93d28636bca85b150aa6e04ee3b31c770e Mar 19 12:35:04.127912 master-0 kubenswrapper[31830]: I0319 12:35:04.122061 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cf7977b8c-m7nzm"] Mar 19 12:35:04.133566 master-0 kubenswrapper[31830]: I0319 12:35:04.132015 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cf7977b8c-m7nzm" Mar 19 12:35:04.140945 master-0 kubenswrapper[31830]: I0319 12:35:04.140901 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"edpm-a" Mar 19 12:35:04.177577 master-0 kubenswrapper[31830]: I0319 12:35:04.175962 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cf7977b8c-m7nzm"] Mar 19 12:35:04.231293 master-0 kubenswrapper[31830]: I0319 12:35:04.231235 31830 generic.go:334] "Generic (PLEG): container finished" podID="3a1d3222-9623-4753-9b0a-8d8da0fb3f1e" containerID="1bc4c0b6899dd54cdca3fe24d0eed96201228c3fc591b3ae37d2ea67cd328dd0" exitCode=0 Mar 19 12:35:04.232712 master-0 kubenswrapper[31830]: I0319 12:35:04.231334 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf8b865dc-djbrh" event={"ID":"3a1d3222-9623-4753-9b0a-8d8da0fb3f1e","Type":"ContainerDied","Data":"1bc4c0b6899dd54cdca3fe24d0eed96201228c3fc591b3ae37d2ea67cd328dd0"} Mar 19 12:35:04.234589 master-0 kubenswrapper[31830]: I0319 12:35:04.234501 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ec8b-account-create-update-bs98g" event={"ID":"373619bf-a142-44fd-b4b4-25d7cc74dda4","Type":"ContainerStarted","Data":"eb404220364558ab3b84d9c3b70a418ba6789e2d99001850f16b2f8221ebfabc"} Mar 19 12:35:04.238669 master-0 kubenswrapper[31830]: I0319 12:35:04.237363 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mhxb6" event={"ID":"48c9a901-d8c0-453d-8525-bf69f7710e6b","Type":"ContainerStarted","Data":"a4ec721b7a729caf0e9c5cc83e7ede0e06a43afa55e6181e4b7e78f645a4f25e"} Mar 19 12:35:04.238669 master-0 kubenswrapper[31830]: I0319 12:35:04.237396 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mhxb6" event={"ID":"48c9a901-d8c0-453d-8525-bf69f7710e6b","Type":"ContainerStarted","Data":"1bc54325f7ba72267710773f39f108b36fe3efb2064bc5f5711b6b99bf95106a"} Mar 19 12:35:04.240338 master-0 kubenswrapper[31830]: I0319 12:35:04.240315 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-t4w4t" event={"ID":"78caf503-3472-47a9-9107-4d260f898fb2","Type":"ContainerStarted","Data":"681f12f51acc829ca22f6aec349abe93d28636bca85b150aa6e04ee3b31c770e"} Mar 19 12:35:04.314609 master-0 kubenswrapper[31830]: I0319 12:35:04.314554 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-dns-svc\") pod \"dnsmasq-dns-7cf7977b8c-m7nzm\" (UID: \"592ce33e-3c82-43e7-b736-ac4e5aea1250\") " pod="openstack/dnsmasq-dns-7cf7977b8c-m7nzm" Mar 19 12:35:04.314892 master-0 kubenswrapper[31830]: I0319 12:35:04.314727 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-edpm-a\") pod \"dnsmasq-dns-7cf7977b8c-m7nzm\" (UID: \"592ce33e-3c82-43e7-b736-ac4e5aea1250\") " pod="openstack/dnsmasq-dns-7cf7977b8c-m7nzm" Mar 19 12:35:04.314970 master-0 kubenswrapper[31830]: I0319 12:35:04.314938 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-config\") pod \"dnsmasq-dns-7cf7977b8c-m7nzm\" (UID: \"592ce33e-3c82-43e7-b736-ac4e5aea1250\") " pod="openstack/dnsmasq-dns-7cf7977b8c-m7nzm" Mar 19 
12:35:04.315108 master-0 kubenswrapper[31830]: I0319 12:35:04.315085 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-ovsdbserver-sb\") pod \"dnsmasq-dns-7cf7977b8c-m7nzm\" (UID: \"592ce33e-3c82-43e7-b736-ac4e5aea1250\") " pod="openstack/dnsmasq-dns-7cf7977b8c-m7nzm" Mar 19 12:35:04.315280 master-0 kubenswrapper[31830]: I0319 12:35:04.315204 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-dns-swift-storage-0\") pod \"dnsmasq-dns-7cf7977b8c-m7nzm\" (UID: \"592ce33e-3c82-43e7-b736-ac4e5aea1250\") " pod="openstack/dnsmasq-dns-7cf7977b8c-m7nzm" Mar 19 12:35:04.315477 master-0 kubenswrapper[31830]: I0319 12:35:04.315419 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thj8b\" (UniqueName: \"kubernetes.io/projected/592ce33e-3c82-43e7-b736-ac4e5aea1250-kube-api-access-thj8b\") pod \"dnsmasq-dns-7cf7977b8c-m7nzm\" (UID: \"592ce33e-3c82-43e7-b736-ac4e5aea1250\") " pod="openstack/dnsmasq-dns-7cf7977b8c-m7nzm" Mar 19 12:35:04.315477 master-0 kubenswrapper[31830]: I0319 12:35:04.315488 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-ovsdbserver-nb\") pod \"dnsmasq-dns-7cf7977b8c-m7nzm\" (UID: \"592ce33e-3c82-43e7-b736-ac4e5aea1250\") " pod="openstack/dnsmasq-dns-7cf7977b8c-m7nzm" Mar 19 12:35:04.315964 master-0 kubenswrapper[31830]: I0319 12:35:04.315810 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cf7977b8c-m7nzm"] Mar 19 12:35:04.317617 master-0 kubenswrapper[31830]: E0319 12:35:04.317465 31830 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 edpm-a kube-api-access-thj8b ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-7cf7977b8c-m7nzm" podUID="592ce33e-3c82-43e7-b736-ac4e5aea1250" Mar 19 12:35:04.345929 master-0 kubenswrapper[31830]: I0319 12:35:04.345850 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-mhxb6" podStartSLOduration=2.345827572 podStartE2EDuration="2.345827572s" podCreationTimestamp="2026-03-19 12:35:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:35:04.276059887 +0000 UTC m=+1242.825020591" watchObservedRunningTime="2026-03-19 12:35:04.345827572 +0000 UTC m=+1242.894788276" Mar 19 12:35:04.424831 master-0 kubenswrapper[31830]: I0319 12:35:04.424300 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thj8b\" (UniqueName: \"kubernetes.io/projected/592ce33e-3c82-43e7-b736-ac4e5aea1250-kube-api-access-thj8b\") pod \"dnsmasq-dns-7cf7977b8c-m7nzm\" (UID: \"592ce33e-3c82-43e7-b736-ac4e5aea1250\") " pod="openstack/dnsmasq-dns-7cf7977b8c-m7nzm" Mar 19 12:35:04.424831 master-0 kubenswrapper[31830]: I0319 12:35:04.424551 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-ovsdbserver-nb\") pod \"dnsmasq-dns-7cf7977b8c-m7nzm\" (UID: \"592ce33e-3c82-43e7-b736-ac4e5aea1250\") " pod="openstack/dnsmasq-dns-7cf7977b8c-m7nzm" Mar 19 12:35:04.424831 master-0 kubenswrapper[31830]: I0319 12:35:04.424664 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-dns-svc\") pod \"dnsmasq-dns-7cf7977b8c-m7nzm\" (UID: \"592ce33e-3c82-43e7-b736-ac4e5aea1250\") " pod="openstack/dnsmasq-dns-7cf7977b8c-m7nzm" Mar 19 12:35:04.424831 master-0 kubenswrapper[31830]: I0319 12:35:04.424761 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-edpm-a\") pod \"dnsmasq-dns-7cf7977b8c-m7nzm\" (UID: \"592ce33e-3c82-43e7-b736-ac4e5aea1250\") " pod="openstack/dnsmasq-dns-7cf7977b8c-m7nzm" Mar 19 12:35:04.425523 master-0 kubenswrapper[31830]: I0319 12:35:04.424988 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-config\") pod \"dnsmasq-dns-7cf7977b8c-m7nzm\" (UID: \"592ce33e-3c82-43e7-b736-ac4e5aea1250\") " pod="openstack/dnsmasq-dns-7cf7977b8c-m7nzm" Mar 19 12:35:04.425523 master-0 kubenswrapper[31830]: I0319 12:35:04.425087 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-ovsdbserver-sb\") pod \"dnsmasq-dns-7cf7977b8c-m7nzm\" (UID: \"592ce33e-3c82-43e7-b736-ac4e5aea1250\") " pod="openstack/dnsmasq-dns-7cf7977b8c-m7nzm" Mar 19 12:35:04.425523 master-0 kubenswrapper[31830]: I0319 12:35:04.425195 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-dns-swift-storage-0\") pod \"dnsmasq-dns-7cf7977b8c-m7nzm\" (UID: \"592ce33e-3c82-43e7-b736-ac4e5aea1250\") " pod="openstack/dnsmasq-dns-7cf7977b8c-m7nzm" Mar 19 12:35:04.431793 master-0 kubenswrapper[31830]: I0319 12:35:04.425909 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-ovsdbserver-nb\") pod \"dnsmasq-dns-7cf7977b8c-m7nzm\" (UID: \"592ce33e-3c82-43e7-b736-ac4e5aea1250\") " pod="openstack/dnsmasq-dns-7cf7977b8c-m7nzm" Mar 19 12:35:04.431793 master-0 kubenswrapper[31830]: I0319 12:35:04.426180 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-edpm-a\") pod \"dnsmasq-dns-7cf7977b8c-m7nzm\" (UID: \"592ce33e-3c82-43e7-b736-ac4e5aea1250\") " pod="openstack/dnsmasq-dns-7cf7977b8c-m7nzm" Mar 19 12:35:04.431793 master-0 kubenswrapper[31830]: I0319 12:35:04.426331 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-dns-svc\") pod \"dnsmasq-dns-7cf7977b8c-m7nzm\" (UID: \"592ce33e-3c82-43e7-b736-ac4e5aea1250\") " pod="openstack/dnsmasq-dns-7cf7977b8c-m7nzm" Mar 19 12:35:04.431793 master-0 kubenswrapper[31830]: I0319 12:35:04.426479 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-config\") pod \"dnsmasq-dns-7cf7977b8c-m7nzm\" (UID: \"592ce33e-3c82-43e7-b736-ac4e5aea1250\") " pod="openstack/dnsmasq-dns-7cf7977b8c-m7nzm" Mar 19 12:35:04.431793 master-0 kubenswrapper[31830]: I0319 12:35:04.426816 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-ovsdbserver-sb\") pod \"dnsmasq-dns-7cf7977b8c-m7nzm\" (UID: \"592ce33e-3c82-43e7-b736-ac4e5aea1250\") " pod="openstack/dnsmasq-dns-7cf7977b8c-m7nzm" Mar 19 12:35:04.431793 master-0 kubenswrapper[31830]: I0319 12:35:04.427464 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-dns-swift-storage-0\") pod \"dnsmasq-dns-7cf7977b8c-m7nzm\" (UID: \"592ce33e-3c82-43e7-b736-ac4e5aea1250\") " pod="openstack/dnsmasq-dns-7cf7977b8c-m7nzm" Mar 19 12:35:04.467063 master-0 kubenswrapper[31830]: I0319 12:35:04.465641 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6465768b8c-fp4jc"] Mar 19 12:35:04.523009 master-0 kubenswrapper[31830]: I0319 12:35:04.514278 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6465768b8c-fp4jc"] Mar 19 12:35:04.523009 master-0 kubenswrapper[31830]: I0319 12:35:04.514444 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" Mar 19 12:35:04.523009 master-0 kubenswrapper[31830]: I0319 12:35:04.515665 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thj8b\" (UniqueName: \"kubernetes.io/projected/592ce33e-3c82-43e7-b736-ac4e5aea1250-kube-api-access-thj8b\") pod \"dnsmasq-dns-7cf7977b8c-m7nzm\" (UID: \"592ce33e-3c82-43e7-b736-ac4e5aea1250\") " pod="openstack/dnsmasq-dns-7cf7977b8c-m7nzm" Mar 19 12:35:04.523009 master-0 kubenswrapper[31830]: I0319 12:35:04.515740 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6934-account-create-update-q844m"] Mar 19 12:35:04.523009 master-0 kubenswrapper[31830]: I0319 12:35:04.517824 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"edpm-b" Mar 19 12:35:04.539184 master-0 kubenswrapper[31830]: I0319 12:35:04.535466 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-m2k5f"] Mar 19 12:35:04.639450 master-0 kubenswrapper[31830]: I0319 12:35:04.639107 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-ovsdbserver-nb\") pod \"dnsmasq-dns-6465768b8c-fp4jc\" (UID: \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\") " pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" Mar 19 12:35:04.639450 master-0 kubenswrapper[31830]: I0319 12:35:04.639256 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-config\") pod \"dnsmasq-dns-6465768b8c-fp4jc\" (UID: \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\") " pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" Mar 19 12:35:04.639450 master-0 kubenswrapper[31830]: I0319 12:35:04.639316 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edpm-a\" (UniqueName: 
\"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-edpm-a\") pod \"dnsmasq-dns-6465768b8c-fp4jc\" (UID: \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\") " pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" Mar 19 12:35:04.639450 master-0 kubenswrapper[31830]: I0319 12:35:04.639370 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-dns-swift-storage-0\") pod \"dnsmasq-dns-6465768b8c-fp4jc\" (UID: \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\") " pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" Mar 19 12:35:04.639693 master-0 kubenswrapper[31830]: I0319 12:35:04.639497 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-dns-svc\") pod \"dnsmasq-dns-6465768b8c-fp4jc\" (UID: \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\") " pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" Mar 19 12:35:04.639693 master-0 kubenswrapper[31830]: I0319 12:35:04.639656 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-edpm-b\") pod \"dnsmasq-dns-6465768b8c-fp4jc\" (UID: \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\") " pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" Mar 19 12:35:04.639810 master-0 kubenswrapper[31830]: I0319 12:35:04.639768 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-ovsdbserver-sb\") pod \"dnsmasq-dns-6465768b8c-fp4jc\" (UID: \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\") " pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" Mar 19 12:35:04.639990 master-0 kubenswrapper[31830]: I0319 12:35:04.639967 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lrpj\" (UniqueName: \"kubernetes.io/projected/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-kube-api-access-2lrpj\") pod \"dnsmasq-dns-6465768b8c-fp4jc\" (UID: \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\") " pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" Mar 19 12:35:04.736004 master-0 kubenswrapper[31830]: I0319 12:35:04.735927 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf8b865dc-djbrh" Mar 19 12:35:04.743255 master-0 kubenswrapper[31830]: I0319 12:35:04.743207 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-edpm-a\") pod \"dnsmasq-dns-6465768b8c-fp4jc\" (UID: \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\") " pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" Mar 19 12:35:04.743377 master-0 kubenswrapper[31830]: I0319 12:35:04.743300 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-dns-swift-storage-0\") pod \"dnsmasq-dns-6465768b8c-fp4jc\" (UID: \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\") " pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" Mar 19 12:35:04.743377 master-0 kubenswrapper[31830]: I0319 12:35:04.743341 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-dns-svc\") pod \"dnsmasq-dns-6465768b8c-fp4jc\" (UID: \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\") " pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" Mar 19 12:35:04.743451 master-0 kubenswrapper[31830]: I0319 12:35:04.743400 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-edpm-b\") pod \"dnsmasq-dns-6465768b8c-fp4jc\" (UID: \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\") " pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" Mar 19 12:35:04.743487 master-0 kubenswrapper[31830]: I0319 12:35:04.743455 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-ovsdbserver-sb\") pod \"dnsmasq-dns-6465768b8c-fp4jc\" (UID: \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\") " pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" Mar 19 12:35:04.743543 master-0 kubenswrapper[31830]: I0319 12:35:04.743524 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lrpj\" (UniqueName: \"kubernetes.io/projected/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-kube-api-access-2lrpj\") pod \"dnsmasq-dns-6465768b8c-fp4jc\" (UID: \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\") " pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" Mar 19 12:35:04.743608 master-0 kubenswrapper[31830]: I0319 12:35:04.743582 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-ovsdbserver-nb\") pod \"dnsmasq-dns-6465768b8c-fp4jc\" (UID: \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\") " pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" Mar 19 12:35:04.743695 master-0 kubenswrapper[31830]: I0319 12:35:04.743676 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-config\") pod \"dnsmasq-dns-6465768b8c-fp4jc\" (UID: \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\") " pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" Mar 19 12:35:04.746592 master-0 kubenswrapper[31830]: I0319 12:35:04.745484 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-config\") pod \"dnsmasq-dns-6465768b8c-fp4jc\" (UID: 
\"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\") " pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" Mar 19 12:35:04.746592 master-0 kubenswrapper[31830]: I0319 12:35:04.746323 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-edpm-a\") pod \"dnsmasq-dns-6465768b8c-fp4jc\" (UID: \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\") " pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" Mar 19 12:35:04.749003 master-0 kubenswrapper[31830]: I0319 12:35:04.746759 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-dns-svc\") pod \"dnsmasq-dns-6465768b8c-fp4jc\" (UID: \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\") " pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" Mar 19 12:35:04.749003 master-0 kubenswrapper[31830]: I0319 12:35:04.747063 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-ovsdbserver-sb\") pod \"dnsmasq-dns-6465768b8c-fp4jc\" (UID: \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\") " pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" Mar 19 12:35:04.749003 master-0 kubenswrapper[31830]: I0319 12:35:04.747200 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-ovsdbserver-nb\") pod \"dnsmasq-dns-6465768b8c-fp4jc\" (UID: \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\") " pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" Mar 19 12:35:04.749003 master-0 kubenswrapper[31830]: I0319 12:35:04.747466 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-edpm-b\") pod \"dnsmasq-dns-6465768b8c-fp4jc\" (UID: \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\") " pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" Mar 19 12:35:04.749003 master-0 kubenswrapper[31830]: I0319 12:35:04.747753 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-dns-swift-storage-0\") pod \"dnsmasq-dns-6465768b8c-fp4jc\" (UID: \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\") " pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" Mar 19 12:35:04.795746 master-0 kubenswrapper[31830]: I0319 12:35:04.787064 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lrpj\" (UniqueName: \"kubernetes.io/projected/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-kube-api-access-2lrpj\") pod \"dnsmasq-dns-6465768b8c-fp4jc\" (UID: \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\") " pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" Mar 19 12:35:04.848429 master-0 kubenswrapper[31830]: I0319 12:35:04.845253 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7bmj\" (UniqueName: \"kubernetes.io/projected/3a1d3222-9623-4753-9b0a-8d8da0fb3f1e-kube-api-access-p7bmj\") pod \"3a1d3222-9623-4753-9b0a-8d8da0fb3f1e\" (UID: \"3a1d3222-9623-4753-9b0a-8d8da0fb3f1e\") " Mar 19 12:35:04.848429 master-0 kubenswrapper[31830]: I0319 12:35:04.845335 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a1d3222-9623-4753-9b0a-8d8da0fb3f1e-ovsdbserver-sb\") pod \"3a1d3222-9623-4753-9b0a-8d8da0fb3f1e\" (UID: 
\"3a1d3222-9623-4753-9b0a-8d8da0fb3f1e\") " Mar 19 12:35:04.848429 master-0 kubenswrapper[31830]: I0319 12:35:04.845367 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a1d3222-9623-4753-9b0a-8d8da0fb3f1e-ovsdbserver-nb\") pod \"3a1d3222-9623-4753-9b0a-8d8da0fb3f1e\" (UID: \"3a1d3222-9623-4753-9b0a-8d8da0fb3f1e\") " Mar 19 12:35:04.848429 master-0 kubenswrapper[31830]: I0319 12:35:04.845525 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a1d3222-9623-4753-9b0a-8d8da0fb3f1e-dns-svc\") pod \"3a1d3222-9623-4753-9b0a-8d8da0fb3f1e\" (UID: \"3a1d3222-9623-4753-9b0a-8d8da0fb3f1e\") " Mar 19 12:35:04.848429 master-0 kubenswrapper[31830]: I0319 12:35:04.845661 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a1d3222-9623-4753-9b0a-8d8da0fb3f1e-config\") pod \"3a1d3222-9623-4753-9b0a-8d8da0fb3f1e\" (UID: \"3a1d3222-9623-4753-9b0a-8d8da0fb3f1e\") " Mar 19 12:35:04.852627 master-0 kubenswrapper[31830]: I0319 12:35:04.850763 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/edpm-a-provisionserver-openstackprovisionserver-7444d659762wbcb"] Mar 19 12:35:04.852627 master-0 kubenswrapper[31830]: E0319 12:35:04.851747 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a1d3222-9623-4753-9b0a-8d8da0fb3f1e" containerName="dnsmasq-dns" Mar 19 12:35:04.852627 master-0 kubenswrapper[31830]: I0319 12:35:04.851770 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a1d3222-9623-4753-9b0a-8d8da0fb3f1e" containerName="dnsmasq-dns" Mar 19 12:35:04.852627 master-0 kubenswrapper[31830]: E0319 12:35:04.851786 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a1d3222-9623-4753-9b0a-8d8da0fb3f1e" containerName="init" Mar 19 12:35:04.852627 master-0 kubenswrapper[31830]: I0319 12:35:04.851807 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a1d3222-9623-4753-9b0a-8d8da0fb3f1e" containerName="init" Mar 19 12:35:04.852627 master-0 kubenswrapper[31830]: I0319 12:35:04.852100 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a1d3222-9623-4753-9b0a-8d8da0fb3f1e" containerName="dnsmasq-dns" Mar 19 12:35:04.858918 master-0 kubenswrapper[31830]: I0319 12:35:04.853643 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/edpm-a-provisionserver-openstackprovisionserver-7444d659762wbcb" Mar 19 12:35:04.859222 master-0 kubenswrapper[31830]: I0319 12:35:04.859124 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a1d3222-9623-4753-9b0a-8d8da0fb3f1e-kube-api-access-p7bmj" (OuterVolumeSpecName: "kube-api-access-p7bmj") pod "3a1d3222-9623-4753-9b0a-8d8da0fb3f1e" (UID: "3a1d3222-9623-4753-9b0a-8d8da0fb3f1e"). InnerVolumeSpecName "kube-api-access-p7bmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:35:04.869079 master-0 kubenswrapper[31830]: I0319 12:35:04.869022 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"edpm-a-provisionserver-httpd-config" Mar 19 12:35:04.909912 master-0 kubenswrapper[31830]: I0319 12:35:04.907753 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" Mar 19 12:35:04.956503 master-0 kubenswrapper[31830]: I0319 12:35:04.956401 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/configmap/b7a848b2-11a9-47c9-881c-6ed12d3e3d1b-httpd-config\") pod \"edpm-a-provisionserver-openstackprovisionserver-7444d659762wbcb\" (UID: \"b7a848b2-11a9-47c9-881c-6ed12d3e3d1b\") " pod="openstack/edpm-a-provisionserver-openstackprovisionserver-7444d659762wbcb" Mar 19 12:35:04.956739 master-0 kubenswrapper[31830]: I0319 12:35:04.956456 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-data\" (UniqueName: \"kubernetes.io/empty-dir/b7a848b2-11a9-47c9-881c-6ed12d3e3d1b-image-data\") pod \"edpm-a-provisionserver-openstackprovisionserver-7444d659762wbcb\" (UID: \"b7a848b2-11a9-47c9-881c-6ed12d3e3d1b\") " pod="openstack/edpm-a-provisionserver-openstackprovisionserver-7444d659762wbcb" Mar 19 12:35:04.956912 master-0 kubenswrapper[31830]: I0319 12:35:04.956867 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scb56\" (UniqueName: \"kubernetes.io/projected/b7a848b2-11a9-47c9-881c-6ed12d3e3d1b-kube-api-access-scb56\") pod \"edpm-a-provisionserver-openstackprovisionserver-7444d659762wbcb\" (UID: \"b7a848b2-11a9-47c9-881c-6ed12d3e3d1b\") " pod="openstack/edpm-a-provisionserver-openstackprovisionserver-7444d659762wbcb" Mar 19 12:35:04.957186 master-0 kubenswrapper[31830]: I0319 12:35:04.957140 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7bmj\" (UniqueName: \"kubernetes.io/projected/3a1d3222-9623-4753-9b0a-8d8da0fb3f1e-kube-api-access-p7bmj\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:04.966659 master-0 kubenswrapper[31830]: I0319 12:35:04.966432 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a1d3222-9623-4753-9b0a-8d8da0fb3f1e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3a1d3222-9623-4753-9b0a-8d8da0fb3f1e" (UID: "3a1d3222-9623-4753-9b0a-8d8da0fb3f1e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:04.990730 master-0 kubenswrapper[31830]: I0319 12:35:04.990397 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a1d3222-9623-4753-9b0a-8d8da0fb3f1e-config" (OuterVolumeSpecName: "config") pod "3a1d3222-9623-4753-9b0a-8d8da0fb3f1e" (UID: "3a1d3222-9623-4753-9b0a-8d8da0fb3f1e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:04.991187 master-0 kubenswrapper[31830]: I0319 12:35:04.991117 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a1d3222-9623-4753-9b0a-8d8da0fb3f1e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3a1d3222-9623-4753-9b0a-8d8da0fb3f1e" (UID: "3a1d3222-9623-4753-9b0a-8d8da0fb3f1e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:05.007830 master-0 kubenswrapper[31830]: I0319 12:35:05.007715 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a1d3222-9623-4753-9b0a-8d8da0fb3f1e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3a1d3222-9623-4753-9b0a-8d8da0fb3f1e" (UID: "3a1d3222-9623-4753-9b0a-8d8da0fb3f1e"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:05.066500 master-0 kubenswrapper[31830]: I0319 12:35:05.059385 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/configmap/b7a848b2-11a9-47c9-881c-6ed12d3e3d1b-httpd-config\") pod \"edpm-a-provisionserver-openstackprovisionserver-7444d659762wbcb\" (UID: \"b7a848b2-11a9-47c9-881c-6ed12d3e3d1b\") " pod="openstack/edpm-a-provisionserver-openstackprovisionserver-7444d659762wbcb" Mar 19 12:35:05.066500 master-0 kubenswrapper[31830]: I0319 12:35:05.059445 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-data\" (UniqueName: \"kubernetes.io/empty-dir/b7a848b2-11a9-47c9-881c-6ed12d3e3d1b-image-data\") pod \"edpm-a-provisionserver-openstackprovisionserver-7444d659762wbcb\" (UID: \"b7a848b2-11a9-47c9-881c-6ed12d3e3d1b\") " pod="openstack/edpm-a-provisionserver-openstackprovisionserver-7444d659762wbcb" Mar 19 12:35:05.066500 master-0 kubenswrapper[31830]: I0319 12:35:05.059544 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scb56\" (UniqueName: \"kubernetes.io/projected/b7a848b2-11a9-47c9-881c-6ed12d3e3d1b-kube-api-access-scb56\") pod \"edpm-a-provisionserver-openstackprovisionserver-7444d659762wbcb\" (UID: \"b7a848b2-11a9-47c9-881c-6ed12d3e3d1b\") " pod="openstack/edpm-a-provisionserver-openstackprovisionserver-7444d659762wbcb" Mar 19 12:35:05.066500 master-0 kubenswrapper[31830]: I0319 12:35:05.059674 31830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a1d3222-9623-4753-9b0a-8d8da0fb3f1e-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:05.066500 master-0 kubenswrapper[31830]: I0319 12:35:05.059688 31830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a1d3222-9623-4753-9b0a-8d8da0fb3f1e-config\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:05.066500 master-0 kubenswrapper[31830]: I0319 12:35:05.059697 31830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a1d3222-9623-4753-9b0a-8d8da0fb3f1e-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:05.066500 master-0 kubenswrapper[31830]: I0319 12:35:05.059707 31830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a1d3222-9623-4753-9b0a-8d8da0fb3f1e-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:05.066500 master-0 kubenswrapper[31830]: I0319 12:35:05.066099 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-data\" (UniqueName: \"kubernetes.io/empty-dir/b7a848b2-11a9-47c9-881c-6ed12d3e3d1b-image-data\") pod \"edpm-a-provisionserver-openstackprovisionserver-7444d659762wbcb\" (UID: \"b7a848b2-11a9-47c9-881c-6ed12d3e3d1b\") " pod="openstack/edpm-a-provisionserver-openstackprovisionserver-7444d659762wbcb" Mar 19 12:35:05.069186 master-0 kubenswrapper[31830]: I0319 12:35:05.069153 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/configmap/b7a848b2-11a9-47c9-881c-6ed12d3e3d1b-httpd-config\") pod \"edpm-a-provisionserver-openstackprovisionserver-7444d659762wbcb\" (UID: \"b7a848b2-11a9-47c9-881c-6ed12d3e3d1b\") " pod="openstack/edpm-a-provisionserver-openstackprovisionserver-7444d659762wbcb" Mar 
19 12:35:05.085404 master-0 kubenswrapper[31830]: I0319 12:35:05.083869 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scb56\" (UniqueName: \"kubernetes.io/projected/b7a848b2-11a9-47c9-881c-6ed12d3e3d1b-kube-api-access-scb56\") pod \"edpm-a-provisionserver-openstackprovisionserver-7444d659762wbcb\" (UID: \"b7a848b2-11a9-47c9-881c-6ed12d3e3d1b\") " pod="openstack/edpm-a-provisionserver-openstackprovisionserver-7444d659762wbcb" Mar 19 12:35:05.211504 master-0 kubenswrapper[31830]: I0319 12:35:05.197888 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/edpm-a-provisionserver-openstackprovisionserver-7444d659762wbcb" Mar 19 12:35:05.211504 master-0 kubenswrapper[31830]: I0319 12:35:05.199050 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/edpm-b-provisionserver-openstackprovisionserver-85bcff5d5fq8d8h"] Mar 19 12:35:05.211504 master-0 kubenswrapper[31830]: I0319 12:35:05.203656 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/edpm-b-provisionserver-openstackprovisionserver-85bcff5d5fq8d8h" Mar 19 12:35:05.211504 master-0 kubenswrapper[31830]: I0319 12:35:05.207730 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"edpm-b-provisionserver-httpd-config" Mar 19 12:35:05.280174 master-0 kubenswrapper[31830]: W0319 12:35:05.280093 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7a848b2_11a9_47c9_881c_6ed12d3e3d1b.slice/crio-98b6b3fa1162abe4f78235e4d212d9f474ffed67e0c1382133eee793e4f90c57 WatchSource:0}: Error finding container 98b6b3fa1162abe4f78235e4d212d9f474ffed67e0c1382133eee793e4f90c57: Status 404 returned error can't find the container with id 98b6b3fa1162abe4f78235e4d212d9f474ffed67e0c1382133eee793e4f90c57 Mar 19 12:35:05.294430 master-0 kubenswrapper[31830]: I0319 12:35:05.293960 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf8b865dc-djbrh" event={"ID":"3a1d3222-9623-4753-9b0a-8d8da0fb3f1e","Type":"ContainerDied","Data":"95227b0d2a22e3ef7329b56298d0f0cc684d5ff3fe2d84c033813e6752be83a9"} Mar 19 12:35:05.294430 master-0 kubenswrapper[31830]: I0319 12:35:05.294136 31830 scope.go:117] "RemoveContainer" containerID="1bc4c0b6899dd54cdca3fe24d0eed96201228c3fc591b3ae37d2ea67cd328dd0" Mar 19 12:35:05.294430 master-0 kubenswrapper[31830]: I0319 12:35:05.294300 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf8b865dc-djbrh" Mar 19 12:35:05.313536 master-0 kubenswrapper[31830]: I0319 12:35:05.312977 31830 generic.go:334] "Generic (PLEG): container finished" podID="373619bf-a142-44fd-b4b4-25d7cc74dda4" containerID="1124965895bf1b35cf84997dac5a09c36254fdfa7302efcdbbb34dba7c6419a2" exitCode=0 Mar 19 12:35:05.313536 master-0 kubenswrapper[31830]: I0319 12:35:05.313471 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ec8b-account-create-update-bs98g" event={"ID":"373619bf-a142-44fd-b4b4-25d7cc74dda4","Type":"ContainerDied","Data":"1124965895bf1b35cf84997dac5a09c36254fdfa7302efcdbbb34dba7c6419a2"} Mar 19 12:35:05.322900 master-0 kubenswrapper[31830]: I0319 12:35:05.322852 31830 generic.go:334] "Generic (PLEG): container finished" podID="48c9a901-d8c0-453d-8525-bf69f7710e6b" containerID="a4ec721b7a729caf0e9c5cc83e7ede0e06a43afa55e6181e4b7e78f645a4f25e" exitCode=0 Mar 19 12:35:05.323026 master-0 kubenswrapper[31830]: I0319 12:35:05.322955 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mhxb6" event={"ID":"48c9a901-d8c0-453d-8525-bf69f7710e6b","Type":"ContainerDied","Data":"a4ec721b7a729caf0e9c5cc83e7ede0e06a43afa55e6181e4b7e78f645a4f25e"} Mar 19 12:35:05.327246 master-0 kubenswrapper[31830]: I0319 12:35:05.327195 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6934-account-create-update-q844m" event={"ID":"b5409fe3-cc0f-4ba4-a1f3-93f2ae986204","Type":"ContainerStarted","Data":"390461eb1a45005396151b9e9ee89c0e9643d193cd2f51a5a9bfac9fc52f4a20"} Mar 19 12:35:05.327246 master-0 kubenswrapper[31830]: I0319 12:35:05.327232 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6934-account-create-update-q844m" event={"ID":"b5409fe3-cc0f-4ba4-a1f3-93f2ae986204","Type":"ContainerStarted","Data":"dcba308d14e938ad38f61ef72d1cc99b6d0ebc1bff987442d35b9b6d8d45322d"} Mar 19 12:35:05.332166 master-0 kubenswrapper[31830]: I0319 12:35:05.332119 31830 scope.go:117] "RemoveContainer" containerID="e2fa967f9394d2ce53ecc113fe6318ff52b0eb640b62d416057bdd07053453bb" Mar 19 12:35:05.333442 master-0 kubenswrapper[31830]: I0319 12:35:05.333401 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-m2k5f" event={"ID":"9559d792-d79a-48bf-9ad0-b157b0e2684f","Type":"ContainerStarted","Data":"c8541bac8f6f1c48bacf942e26bbee206e33e3a2dd966b97b11dc0a4e13012a3"} Mar 19 12:35:05.333442 master-0 kubenswrapper[31830]: I0319 12:35:05.333430 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cf7977b8c-m7nzm" Mar 19 12:35:05.333442 master-0 kubenswrapper[31830]: I0319 12:35:05.333438 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-m2k5f" event={"ID":"9559d792-d79a-48bf-9ad0-b157b0e2684f","Type":"ContainerStarted","Data":"3a365fa18246a97b06b4658bcf8d0e31b450f2f7fee9678477287400574f11c7"} Mar 19 12:35:05.355693 master-0 kubenswrapper[31830]: I0319 12:35:05.355315 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cf7977b8c-m7nzm" Mar 19 12:35:05.379546 master-0 kubenswrapper[31830]: I0319 12:35:05.377542 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/configmap/3bb563fb-d536-4cb0-9614-d331baa95e1b-httpd-config\") pod \"edpm-b-provisionserver-openstackprovisionserver-85bcff5d5fq8d8h\" (UID: \"3bb563fb-d536-4cb0-9614-d331baa95e1b\") " pod="openstack/edpm-b-provisionserver-openstackprovisionserver-85bcff5d5fq8d8h" Mar 19 12:35:05.379546 master-0 kubenswrapper[31830]: I0319 12:35:05.377647 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shxxm\" (UniqueName: \"kubernetes.io/projected/3bb563fb-d536-4cb0-9614-d331baa95e1b-kube-api-access-shxxm\") pod \"edpm-b-provisionserver-openstackprovisionserver-85bcff5d5fq8d8h\" (UID: \"3bb563fb-d536-4cb0-9614-d331baa95e1b\") " pod="openstack/edpm-b-provisionserver-openstackprovisionserver-85bcff5d5fq8d8h" Mar 19 12:35:05.379546 master-0 kubenswrapper[31830]: I0319 12:35:05.377690 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-data\" (UniqueName: \"kubernetes.io/empty-dir/3bb563fb-d536-4cb0-9614-d331baa95e1b-image-data\") pod \"edpm-b-provisionserver-openstackprovisionserver-85bcff5d5fq8d8h\" (UID: \"3bb563fb-d536-4cb0-9614-d331baa95e1b\") " pod="openstack/edpm-b-provisionserver-openstackprovisionserver-85bcff5d5fq8d8h" Mar 19 12:35:05.387109 master-0 kubenswrapper[31830]: I0319 12:35:05.384289 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6934-account-create-update-q844m" podStartSLOduration=2.384271057 podStartE2EDuration="2.384271057s" podCreationTimestamp="2026-03-19 12:35:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:35:05.369167228 +0000 UTC m=+1243.918127922" watchObservedRunningTime="2026-03-19 12:35:05.384271057 +0000 UTC m=+1243.933231761" Mar 19 12:35:05.410024 master-0 kubenswrapper[31830]: I0319 12:35:05.409930 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-m2k5f" podStartSLOduration=3.409908711 podStartE2EDuration="3.409908711s" podCreationTimestamp="2026-03-19 12:35:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:35:05.39117094 +0000 UTC m=+1243.940131654" watchObservedRunningTime="2026-03-19 12:35:05.409908711 +0000 UTC m=+1243.958869415" Mar 19 12:35:05.464992 master-0 kubenswrapper[31830]: I0319 12:35:05.464922 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf8b865dc-djbrh"] Mar 19 12:35:05.479362 master-0 kubenswrapper[31830]: I0319 12:35:05.479312 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf8b865dc-djbrh"] Mar 19 12:35:05.479458 master-0 kubenswrapper[31830]: I0319 12:35:05.479390 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-ovsdbserver-nb\") pod \"592ce33e-3c82-43e7-b736-ac4e5aea1250\" (UID: \"592ce33e-3c82-43e7-b736-ac4e5aea1250\") " Mar 19 12:35:05.479532 master-0 kubenswrapper[31830]: I0319 12:35:05.479504 31830 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-edpm-a\") pod \"592ce33e-3c82-43e7-b736-ac4e5aea1250\" (UID: \"592ce33e-3c82-43e7-b736-ac4e5aea1250\") " Mar 19 12:35:05.479614 master-0 kubenswrapper[31830]: I0319 12:35:05.479588 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thj8b\" (UniqueName: \"kubernetes.io/projected/592ce33e-3c82-43e7-b736-ac4e5aea1250-kube-api-access-thj8b\") pod \"592ce33e-3c82-43e7-b736-ac4e5aea1250\" (UID: \"592ce33e-3c82-43e7-b736-ac4e5aea1250\") " Mar 19 12:35:05.479656 master-0 kubenswrapper[31830]: I0319 12:35:05.479633 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-ovsdbserver-sb\") pod \"592ce33e-3c82-43e7-b736-ac4e5aea1250\" (UID: \"592ce33e-3c82-43e7-b736-ac4e5aea1250\") " Mar 19 12:35:05.479691 master-0 kubenswrapper[31830]: I0319 12:35:05.479658 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-dns-swift-storage-0\") pod \"592ce33e-3c82-43e7-b736-ac4e5aea1250\" (UID: \"592ce33e-3c82-43e7-b736-ac4e5aea1250\") " Mar 19 12:35:05.479691 master-0 kubenswrapper[31830]: I0319 12:35:05.479682 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-config\") pod \"592ce33e-3c82-43e7-b736-ac4e5aea1250\" (UID: \"592ce33e-3c82-43e7-b736-ac4e5aea1250\") " Mar 19 12:35:05.479748 master-0 kubenswrapper[31830]: I0319 12:35:05.479734 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-dns-svc\") pod \"592ce33e-3c82-43e7-b736-ac4e5aea1250\" (UID: \"592ce33e-3c82-43e7-b736-ac4e5aea1250\") " Mar 19 12:35:05.480146 master-0 kubenswrapper[31830]: I0319 12:35:05.480112 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/configmap/3bb563fb-d536-4cb0-9614-d331baa95e1b-httpd-config\") pod \"edpm-b-provisionserver-openstackprovisionserver-85bcff5d5fq8d8h\" (UID: \"3bb563fb-d536-4cb0-9614-d331baa95e1b\") " pod="openstack/edpm-b-provisionserver-openstackprovisionserver-85bcff5d5fq8d8h" Mar 19 12:35:05.480252 master-0 kubenswrapper[31830]: I0319 12:35:05.480218 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shxxm\" (UniqueName: \"kubernetes.io/projected/3bb563fb-d536-4cb0-9614-d331baa95e1b-kube-api-access-shxxm\") pod \"edpm-b-provisionserver-openstackprovisionserver-85bcff5d5fq8d8h\" (UID: \"3bb563fb-d536-4cb0-9614-d331baa95e1b\") " pod="openstack/edpm-b-provisionserver-openstackprovisionserver-85bcff5d5fq8d8h" Mar 19 12:35:05.480302 master-0 kubenswrapper[31830]: I0319 12:35:05.480276 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-data\" (UniqueName: \"kubernetes.io/empty-dir/3bb563fb-d536-4cb0-9614-d331baa95e1b-image-data\") pod \"edpm-b-provisionserver-openstackprovisionserver-85bcff5d5fq8d8h\" (UID: \"3bb563fb-d536-4cb0-9614-d331baa95e1b\") " pod="openstack/edpm-b-provisionserver-openstackprovisionserver-85bcff5d5fq8d8h" Mar 19 12:35:05.480891 master-0 kubenswrapper[31830]: 
I0319 12:35:05.480863 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-data\" (UniqueName: \"kubernetes.io/empty-dir/3bb563fb-d536-4cb0-9614-d331baa95e1b-image-data\") pod \"edpm-b-provisionserver-openstackprovisionserver-85bcff5d5fq8d8h\" (UID: \"3bb563fb-d536-4cb0-9614-d331baa95e1b\") " pod="openstack/edpm-b-provisionserver-openstackprovisionserver-85bcff5d5fq8d8h" Mar 19 12:35:05.481283 master-0 kubenswrapper[31830]: I0319 12:35:05.481248 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "592ce33e-3c82-43e7-b736-ac4e5aea1250" (UID: "592ce33e-3c82-43e7-b736-ac4e5aea1250"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:05.481627 master-0 kubenswrapper[31830]: I0319 12:35:05.481596 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-edpm-a" (OuterVolumeSpecName: "edpm-a") pod "592ce33e-3c82-43e7-b736-ac4e5aea1250" (UID: "592ce33e-3c82-43e7-b736-ac4e5aea1250"). InnerVolumeSpecName "edpm-a". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:05.482479 master-0 kubenswrapper[31830]: I0319 12:35:05.482369 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "592ce33e-3c82-43e7-b736-ac4e5aea1250" (UID: "592ce33e-3c82-43e7-b736-ac4e5aea1250"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:05.482872 master-0 kubenswrapper[31830]: I0319 12:35:05.482556 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-config" (OuterVolumeSpecName: "config") pod "592ce33e-3c82-43e7-b736-ac4e5aea1250" (UID: "592ce33e-3c82-43e7-b736-ac4e5aea1250"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:05.482872 master-0 kubenswrapper[31830]: I0319 12:35:05.482782 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "592ce33e-3c82-43e7-b736-ac4e5aea1250" (UID: "592ce33e-3c82-43e7-b736-ac4e5aea1250"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:05.483448 master-0 kubenswrapper[31830]: I0319 12:35:05.483414 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "592ce33e-3c82-43e7-b736-ac4e5aea1250" (UID: "592ce33e-3c82-43e7-b736-ac4e5aea1250"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:05.483655 master-0 kubenswrapper[31830]: I0319 12:35:05.483620 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/configmap/3bb563fb-d536-4cb0-9614-d331baa95e1b-httpd-config\") pod \"edpm-b-provisionserver-openstackprovisionserver-85bcff5d5fq8d8h\" (UID: \"3bb563fb-d536-4cb0-9614-d331baa95e1b\") " pod="openstack/edpm-b-provisionserver-openstackprovisionserver-85bcff5d5fq8d8h" Mar 19 12:35:05.533761 master-0 kubenswrapper[31830]: I0319 12:35:05.533656 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/592ce33e-3c82-43e7-b736-ac4e5aea1250-kube-api-access-thj8b" (OuterVolumeSpecName: "kube-api-access-thj8b") pod "592ce33e-3c82-43e7-b736-ac4e5aea1250" (UID: "592ce33e-3c82-43e7-b736-ac4e5aea1250"). InnerVolumeSpecName "kube-api-access-thj8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:35:05.559867 master-0 kubenswrapper[31830]: I0319 12:35:05.559069 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shxxm\" (UniqueName: \"kubernetes.io/projected/3bb563fb-d536-4cb0-9614-d331baa95e1b-kube-api-access-shxxm\") pod \"edpm-b-provisionserver-openstackprovisionserver-85bcff5d5fq8d8h\" (UID: \"3bb563fb-d536-4cb0-9614-d331baa95e1b\") " pod="openstack/edpm-b-provisionserver-openstackprovisionserver-85bcff5d5fq8d8h" Mar 19 12:35:05.581949 master-0 kubenswrapper[31830]: I0319 12:35:05.581901 31830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:05.581949 master-0 kubenswrapper[31830]: I0319 12:35:05.581944 31830 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:05.581949 master-0 kubenswrapper[31830]: I0319 12:35:05.581958 31830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-config\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:05.581949 master-0 kubenswrapper[31830]: I0319 12:35:05.581965 31830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:05.581949 master-0 kubenswrapper[31830]: I0319 12:35:05.581975 31830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:05.582298 master-0 kubenswrapper[31830]: I0319 12:35:05.581984 31830 reconciler_common.go:293] "Volume detached for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/592ce33e-3c82-43e7-b736-ac4e5aea1250-edpm-a\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:05.582298 master-0 kubenswrapper[31830]: I0319 12:35:05.581993 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thj8b\" (UniqueName: \"kubernetes.io/projected/592ce33e-3c82-43e7-b736-ac4e5aea1250-kube-api-access-thj8b\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:05.593916 master-0 kubenswrapper[31830]: I0319 12:35:05.593858 31830 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6465768b8c-fp4jc"] Mar 19 12:35:05.594055 master-0 kubenswrapper[31830]: W0319 12:35:05.594014 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f825bf1_6d44_4e78_85db_bc6c7371a9d9.slice/crio-992d445430916840aa57622c37656719f29eb9a8aae519a77a22704ed0cf5a41 WatchSource:0}: Error finding container 992d445430916840aa57622c37656719f29eb9a8aae519a77a22704ed0cf5a41: Status 404 returned error can't find the container with id 992d445430916840aa57622c37656719f29eb9a8aae519a77a22704ed0cf5a41 Mar 19 12:35:05.692121 master-0 kubenswrapper[31830]: I0319 12:35:05.691809 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a1d3222-9623-4753-9b0a-8d8da0fb3f1e" path="/var/lib/kubelet/pods/3a1d3222-9623-4753-9b0a-8d8da0fb3f1e/volumes" Mar 19 12:35:05.856541 master-0 kubenswrapper[31830]: I0319 12:35:05.856479 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/edpm-b-provisionserver-openstackprovisionserver-85bcff5d5fq8d8h" Mar 19 12:35:06.344688 master-0 kubenswrapper[31830]: I0319 12:35:06.344607 31830 generic.go:334] "Generic (PLEG): container finished" podID="b5409fe3-cc0f-4ba4-a1f3-93f2ae986204" containerID="390461eb1a45005396151b9e9ee89c0e9643d193cd2f51a5a9bfac9fc52f4a20" exitCode=0 Mar 19 12:35:06.344987 master-0 kubenswrapper[31830]: I0319 12:35:06.344733 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6934-account-create-update-q844m" event={"ID":"b5409fe3-cc0f-4ba4-a1f3-93f2ae986204","Type":"ContainerDied","Data":"390461eb1a45005396151b9e9ee89c0e9643d193cd2f51a5a9bfac9fc52f4a20"} Mar 19 12:35:06.347251 master-0 kubenswrapper[31830]: I0319 12:35:06.347107 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/edpm-b-provisionserver-openstackprovisionserver-85bcff5d5fq8d8h" event={"ID":"3bb563fb-d536-4cb0-9614-d331baa95e1b","Type":"ContainerStarted","Data":"693757f681f977f90478a8559209f847147a7cda33490289df70ce4810e128f0"} Mar 19 12:35:06.349659 master-0 kubenswrapper[31830]: I0319 12:35:06.349633 31830 generic.go:334] "Generic (PLEG): container finished" podID="9559d792-d79a-48bf-9ad0-b157b0e2684f" containerID="c8541bac8f6f1c48bacf942e26bbee206e33e3a2dd966b97b11dc0a4e13012a3" exitCode=0 Mar 19 12:35:06.349753 master-0 kubenswrapper[31830]: I0319 12:35:06.349675 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-m2k5f" event={"ID":"9559d792-d79a-48bf-9ad0-b157b0e2684f","Type":"ContainerDied","Data":"c8541bac8f6f1c48bacf942e26bbee206e33e3a2dd966b97b11dc0a4e13012a3"} Mar 19 12:35:06.353898 master-0 kubenswrapper[31830]: I0319 12:35:06.352862 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/edpm-a-provisionserver-openstackprovisionserver-7444d659762wbcb" event={"ID":"b7a848b2-11a9-47c9-881c-6ed12d3e3d1b","Type":"ContainerStarted","Data":"98b6b3fa1162abe4f78235e4d212d9f474ffed67e0c1382133eee793e4f90c57"} Mar 19 12:35:06.354133 master-0 kubenswrapper[31830]: I0319 12:35:06.353954 31830 generic.go:334] "Generic (PLEG): container finished" podID="5f825bf1-6d44-4e78-85db-bc6c7371a9d9" containerID="9885ad45a1ac75058d6dbc090d642d8e3d13e6a511b1a39effa22b69a519ddf4" exitCode=0 Mar 19 12:35:06.354824 master-0 kubenswrapper[31830]: I0319 12:35:06.354774 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" 
event={"ID":"5f825bf1-6d44-4e78-85db-bc6c7371a9d9","Type":"ContainerDied","Data":"9885ad45a1ac75058d6dbc090d642d8e3d13e6a511b1a39effa22b69a519ddf4"} Mar 19 12:35:06.354824 master-0 kubenswrapper[31830]: I0319 12:35:06.354820 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" event={"ID":"5f825bf1-6d44-4e78-85db-bc6c7371a9d9","Type":"ContainerStarted","Data":"992d445430916840aa57622c37656719f29eb9a8aae519a77a22704ed0cf5a41"} Mar 19 12:35:06.354930 master-0 kubenswrapper[31830]: I0319 12:35:06.354852 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cf7977b8c-m7nzm" Mar 19 12:35:06.586317 master-0 kubenswrapper[31830]: I0319 12:35:06.586190 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cf7977b8c-m7nzm"] Mar 19 12:35:06.596839 master-0 kubenswrapper[31830]: I0319 12:35:06.596701 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cf7977b8c-m7nzm"] Mar 19 12:35:06.734460 master-0 kubenswrapper[31830]: I0319 12:35:06.734411 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ec8b-account-create-update-bs98g" Mar 19 12:35:06.883658 master-0 kubenswrapper[31830]: I0319 12:35:06.808654 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/373619bf-a142-44fd-b4b4-25d7cc74dda4-operator-scripts\") pod \"373619bf-a142-44fd-b4b4-25d7cc74dda4\" (UID: \"373619bf-a142-44fd-b4b4-25d7cc74dda4\") " Mar 19 12:35:06.883658 master-0 kubenswrapper[31830]: I0319 12:35:06.808777 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxm56\" (UniqueName: \"kubernetes.io/projected/373619bf-a142-44fd-b4b4-25d7cc74dda4-kube-api-access-mxm56\") pod \"373619bf-a142-44fd-b4b4-25d7cc74dda4\" (UID: \"373619bf-a142-44fd-b4b4-25d7cc74dda4\") " Mar 19 12:35:06.883658 master-0 kubenswrapper[31830]: I0319 12:35:06.810248 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/373619bf-a142-44fd-b4b4-25d7cc74dda4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "373619bf-a142-44fd-b4b4-25d7cc74dda4" (UID: "373619bf-a142-44fd-b4b4-25d7cc74dda4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:06.883851 master-0 kubenswrapper[31830]: I0319 12:35:06.883678 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/373619bf-a142-44fd-b4b4-25d7cc74dda4-kube-api-access-mxm56" (OuterVolumeSpecName: "kube-api-access-mxm56") pod "373619bf-a142-44fd-b4b4-25d7cc74dda4" (UID: "373619bf-a142-44fd-b4b4-25d7cc74dda4"). InnerVolumeSpecName "kube-api-access-mxm56". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:35:06.911367 master-0 kubenswrapper[31830]: I0319 12:35:06.911281 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxm56\" (UniqueName: \"kubernetes.io/projected/373619bf-a142-44fd-b4b4-25d7cc74dda4-kube-api-access-mxm56\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:06.911619 master-0 kubenswrapper[31830]: I0319 12:35:06.911521 31830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/373619bf-a142-44fd-b4b4-25d7cc74dda4-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:07.018899 master-0 kubenswrapper[31830]: I0319 12:35:07.018781 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-mhxb6" Mar 19 12:35:07.115784 master-0 kubenswrapper[31830]: I0319 12:35:07.115698 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjhpx\" (UniqueName: \"kubernetes.io/projected/48c9a901-d8c0-453d-8525-bf69f7710e6b-kube-api-access-xjhpx\") pod \"48c9a901-d8c0-453d-8525-bf69f7710e6b\" (UID: \"48c9a901-d8c0-453d-8525-bf69f7710e6b\") " Mar 19 12:35:07.116016 master-0 kubenswrapper[31830]: I0319 12:35:07.115915 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48c9a901-d8c0-453d-8525-bf69f7710e6b-operator-scripts\") pod \"48c9a901-d8c0-453d-8525-bf69f7710e6b\" (UID: \"48c9a901-d8c0-453d-8525-bf69f7710e6b\") " Mar 19 12:35:07.117124 master-0 kubenswrapper[31830]: I0319 12:35:07.117085 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48c9a901-d8c0-453d-8525-bf69f7710e6b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "48c9a901-d8c0-453d-8525-bf69f7710e6b" (UID: "48c9a901-d8c0-453d-8525-bf69f7710e6b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:07.121270 master-0 kubenswrapper[31830]: I0319 12:35:07.121239 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48c9a901-d8c0-453d-8525-bf69f7710e6b-kube-api-access-xjhpx" (OuterVolumeSpecName: "kube-api-access-xjhpx") pod "48c9a901-d8c0-453d-8525-bf69f7710e6b" (UID: "48c9a901-d8c0-453d-8525-bf69f7710e6b"). InnerVolumeSpecName "kube-api-access-xjhpx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:35:07.221904 master-0 kubenswrapper[31830]: I0319 12:35:07.220926 31830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48c9a901-d8c0-453d-8525-bf69f7710e6b-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:07.221904 master-0 kubenswrapper[31830]: I0319 12:35:07.220970 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjhpx\" (UniqueName: \"kubernetes.io/projected/48c9a901-d8c0-453d-8525-bf69f7710e6b-kube-api-access-xjhpx\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:07.372777 master-0 kubenswrapper[31830]: I0319 12:35:07.372333 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ec8b-account-create-update-bs98g" event={"ID":"373619bf-a142-44fd-b4b4-25d7cc74dda4","Type":"ContainerDied","Data":"eb404220364558ab3b84d9c3b70a418ba6789e2d99001850f16b2f8221ebfabc"} Mar 19 12:35:07.372777 master-0 kubenswrapper[31830]: I0319 12:35:07.372378 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb404220364558ab3b84d9c3b70a418ba6789e2d99001850f16b2f8221ebfabc" Mar 19 12:35:07.372777 master-0 kubenswrapper[31830]: I0319 12:35:07.372430 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ec8b-account-create-update-bs98g" Mar 19 12:35:07.376099 master-0 kubenswrapper[31830]: I0319 12:35:07.376048 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mhxb6" event={"ID":"48c9a901-d8c0-453d-8525-bf69f7710e6b","Type":"ContainerDied","Data":"1bc54325f7ba72267710773f39f108b36fe3efb2064bc5f5711b6b99bf95106a"} Mar 19 12:35:07.376623 master-0 kubenswrapper[31830]: I0319 12:35:07.376545 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bc54325f7ba72267710773f39f108b36fe3efb2064bc5f5711b6b99bf95106a" Mar 19 12:35:07.376623 master-0 kubenswrapper[31830]: I0319 12:35:07.376061 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-mhxb6" Mar 19 12:35:07.378559 master-0 kubenswrapper[31830]: I0319 12:35:07.378510 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" event={"ID":"5f825bf1-6d44-4e78-85db-bc6c7371a9d9","Type":"ContainerStarted","Data":"f2d895ffc56acb0ca2fa97c4253d51276da8c0d2302ee6b07d917a8a003cffa7"} Mar 19 12:35:07.378877 master-0 kubenswrapper[31830]: I0319 12:35:07.378853 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" Mar 19 12:35:07.439969 master-0 kubenswrapper[31830]: I0319 12:35:07.435811 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" podStartSLOduration=3.435773819 podStartE2EDuration="3.435773819s" podCreationTimestamp="2026-03-19 12:35:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:35:07.429152554 +0000 UTC m=+1245.978113268" watchObservedRunningTime="2026-03-19 12:35:07.435773819 +0000 UTC m=+1245.984734523" Mar 19 12:35:07.694880 master-0 kubenswrapper[31830]: I0319 12:35:07.694744 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="592ce33e-3c82-43e7-b736-ac4e5aea1250" path="/var/lib/kubelet/pods/592ce33e-3c82-43e7-b736-ac4e5aea1250/volumes" Mar 19 12:35:07.958510 master-0 kubenswrapper[31830]: I0319 12:35:07.958408 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-m2k5f" Mar 19 12:35:07.967953 master-0 kubenswrapper[31830]: I0319 12:35:07.967002 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6934-account-create-update-q844m" Mar 19 12:35:08.056043 master-0 kubenswrapper[31830]: I0319 12:35:08.055980 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pd8l\" (UniqueName: \"kubernetes.io/projected/9559d792-d79a-48bf-9ad0-b157b0e2684f-kube-api-access-9pd8l\") pod \"9559d792-d79a-48bf-9ad0-b157b0e2684f\" (UID: \"9559d792-d79a-48bf-9ad0-b157b0e2684f\") " Mar 19 12:35:08.061582 master-0 kubenswrapper[31830]: I0319 12:35:08.061527 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9559d792-d79a-48bf-9ad0-b157b0e2684f-operator-scripts\") pod \"9559d792-d79a-48bf-9ad0-b157b0e2684f\" (UID: \"9559d792-d79a-48bf-9ad0-b157b0e2684f\") " Mar 19 12:35:08.061689 master-0 kubenswrapper[31830]: I0319 12:35:08.061634 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk5gd\" (UniqueName: \"kubernetes.io/projected/b5409fe3-cc0f-4ba4-a1f3-93f2ae986204-kube-api-access-bk5gd\") pod \"b5409fe3-cc0f-4ba4-a1f3-93f2ae986204\" (UID: \"b5409fe3-cc0f-4ba4-a1f3-93f2ae986204\") " Mar 19 12:35:08.061898 master-0 kubenswrapper[31830]: I0319 12:35:08.061848 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5409fe3-cc0f-4ba4-a1f3-93f2ae986204-operator-scripts\") pod \"b5409fe3-cc0f-4ba4-a1f3-93f2ae986204\" (UID: \"b5409fe3-cc0f-4ba4-a1f3-93f2ae986204\") " Mar 19 12:35:08.063326 master-0 kubenswrapper[31830]: I0319 12:35:08.063284 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/b5409fe3-cc0f-4ba4-a1f3-93f2ae986204-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b5409fe3-cc0f-4ba4-a1f3-93f2ae986204" (UID: "b5409fe3-cc0f-4ba4-a1f3-93f2ae986204"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:08.063745 master-0 kubenswrapper[31830]: I0319 12:35:08.063706 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9559d792-d79a-48bf-9ad0-b157b0e2684f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9559d792-d79a-48bf-9ad0-b157b0e2684f" (UID: "9559d792-d79a-48bf-9ad0-b157b0e2684f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:08.082013 master-0 kubenswrapper[31830]: I0319 12:35:08.081962 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9559d792-d79a-48bf-9ad0-b157b0e2684f-kube-api-access-9pd8l" (OuterVolumeSpecName: "kube-api-access-9pd8l") pod "9559d792-d79a-48bf-9ad0-b157b0e2684f" (UID: "9559d792-d79a-48bf-9ad0-b157b0e2684f"). InnerVolumeSpecName "kube-api-access-9pd8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:35:08.083154 master-0 kubenswrapper[31830]: I0319 12:35:08.083025 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5409fe3-cc0f-4ba4-a1f3-93f2ae986204-kube-api-access-bk5gd" (OuterVolumeSpecName: "kube-api-access-bk5gd") pod "b5409fe3-cc0f-4ba4-a1f3-93f2ae986204" (UID: "b5409fe3-cc0f-4ba4-a1f3-93f2ae986204"). InnerVolumeSpecName "kube-api-access-bk5gd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:35:08.164777 master-0 kubenswrapper[31830]: I0319 12:35:08.164730 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pd8l\" (UniqueName: \"kubernetes.io/projected/9559d792-d79a-48bf-9ad0-b157b0e2684f-kube-api-access-9pd8l\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:08.165110 master-0 kubenswrapper[31830]: I0319 12:35:08.165098 31830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9559d792-d79a-48bf-9ad0-b157b0e2684f-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:08.165331 master-0 kubenswrapper[31830]: I0319 12:35:08.165319 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk5gd\" (UniqueName: \"kubernetes.io/projected/b5409fe3-cc0f-4ba4-a1f3-93f2ae986204-kube-api-access-bk5gd\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:08.165690 master-0 kubenswrapper[31830]: I0319 12:35:08.165677 31830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5409fe3-cc0f-4ba4-a1f3-93f2ae986204-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:08.389440 master-0 kubenswrapper[31830]: I0319 12:35:08.389385 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-m2k5f" event={"ID":"9559d792-d79a-48bf-9ad0-b157b0e2684f","Type":"ContainerDied","Data":"3a365fa18246a97b06b4658bcf8d0e31b450f2f7fee9678477287400574f11c7"} Mar 19 12:35:08.389440 master-0 kubenswrapper[31830]: I0319 12:35:08.389438 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a365fa18246a97b06b4658bcf8d0e31b450f2f7fee9678477287400574f11c7" Mar 19 12:35:08.389732 master-0 kubenswrapper[31830]: I0319 12:35:08.389407 31830 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-m2k5f" Mar 19 12:35:08.391686 master-0 kubenswrapper[31830]: I0319 12:35:08.391643 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6934-account-create-update-q844m" event={"ID":"b5409fe3-cc0f-4ba4-a1f3-93f2ae986204","Type":"ContainerDied","Data":"dcba308d14e938ad38f61ef72d1cc99b6d0ebc1bff987442d35b9b6d8d45322d"} Mar 19 12:35:08.391833 master-0 kubenswrapper[31830]: I0319 12:35:08.391693 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcba308d14e938ad38f61ef72d1cc99b6d0ebc1bff987442d35b9b6d8d45322d" Mar 19 12:35:08.392530 master-0 kubenswrapper[31830]: I0319 12:35:08.392504 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6934-account-create-update-q844m" Mar 19 12:35:11.733046 master-0 kubenswrapper[31830]: I0319 12:35:11.732842 31830 generic.go:334] "Generic (PLEG): container finished" podID="82a35ae5-08db-4571-977b-95d26158480e" containerID="496d2f74441c0012111b3d65a363d46cea5ee91d2808eb80d63004f1fafc2520" exitCode=0 Mar 19 12:35:11.733046 master-0 kubenswrapper[31830]: I0319 12:35:11.732936 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dfsd7" event={"ID":"82a35ae5-08db-4571-977b-95d26158480e","Type":"ContainerDied","Data":"496d2f74441c0012111b3d65a363d46cea5ee91d2808eb80d63004f1fafc2520"} Mar 19 12:35:13.765738 master-0 kubenswrapper[31830]: I0319 12:35:13.765134 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dfsd7" event={"ID":"82a35ae5-08db-4571-977b-95d26158480e","Type":"ContainerDied","Data":"4269b85e69ef9bbc84212f443dc5ee97935002b61dd3a501bc87fa94346328d0"} Mar 19 12:35:13.765738 master-0 kubenswrapper[31830]: I0319 12:35:13.765183 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4269b85e69ef9bbc84212f443dc5ee97935002b61dd3a501bc87fa94346328d0" Mar 19 12:35:13.783730 master-0 kubenswrapper[31830]: I0319 12:35:13.783680 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-dfsd7" Mar 19 12:35:13.869168 master-0 kubenswrapper[31830]: I0319 12:35:13.869099 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82a35ae5-08db-4571-977b-95d26158480e-combined-ca-bundle\") pod \"82a35ae5-08db-4571-977b-95d26158480e\" (UID: \"82a35ae5-08db-4571-977b-95d26158480e\") " Mar 19 12:35:13.869439 master-0 kubenswrapper[31830]: I0319 12:35:13.869236 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl2j4\" (UniqueName: \"kubernetes.io/projected/82a35ae5-08db-4571-977b-95d26158480e-kube-api-access-cl2j4\") pod \"82a35ae5-08db-4571-977b-95d26158480e\" (UID: \"82a35ae5-08db-4571-977b-95d26158480e\") " Mar 19 12:35:13.869439 master-0 kubenswrapper[31830]: I0319 12:35:13.869382 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/82a35ae5-08db-4571-977b-95d26158480e-db-sync-config-data\") pod \"82a35ae5-08db-4571-977b-95d26158480e\" (UID: \"82a35ae5-08db-4571-977b-95d26158480e\") " Mar 19 12:35:13.869439 master-0 kubenswrapper[31830]: I0319 12:35:13.869430 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82a35ae5-08db-4571-977b-95d26158480e-config-data\") pod \"82a35ae5-08db-4571-977b-95d26158480e\" (UID: \"82a35ae5-08db-4571-977b-95d26158480e\") " Mar 19 12:35:13.895955 master-0 kubenswrapper[31830]: I0319 12:35:13.895734 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82a35ae5-08db-4571-977b-95d26158480e-kube-api-access-cl2j4" (OuterVolumeSpecName: "kube-api-access-cl2j4") pod "82a35ae5-08db-4571-977b-95d26158480e" (UID: "82a35ae5-08db-4571-977b-95d26158480e"). InnerVolumeSpecName "kube-api-access-cl2j4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:35:13.896666 master-0 kubenswrapper[31830]: I0319 12:35:13.896244 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82a35ae5-08db-4571-977b-95d26158480e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "82a35ae5-08db-4571-977b-95d26158480e" (UID: "82a35ae5-08db-4571-977b-95d26158480e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:35:13.910414 master-0 kubenswrapper[31830]: I0319 12:35:13.909984 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82a35ae5-08db-4571-977b-95d26158480e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82a35ae5-08db-4571-977b-95d26158480e" (UID: "82a35ae5-08db-4571-977b-95d26158480e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:35:13.946456 master-0 kubenswrapper[31830]: I0319 12:35:13.946321 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82a35ae5-08db-4571-977b-95d26158480e-config-data" (OuterVolumeSpecName: "config-data") pod "82a35ae5-08db-4571-977b-95d26158480e" (UID: "82a35ae5-08db-4571-977b-95d26158480e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:35:13.974837 master-0 kubenswrapper[31830]: I0319 12:35:13.973513 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl2j4\" (UniqueName: \"kubernetes.io/projected/82a35ae5-08db-4571-977b-95d26158480e-kube-api-access-cl2j4\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:13.974837 master-0 kubenswrapper[31830]: I0319 12:35:13.973597 31830 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/82a35ae5-08db-4571-977b-95d26158480e-db-sync-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:13.974837 master-0 kubenswrapper[31830]: I0319 12:35:13.973613 31830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82a35ae5-08db-4571-977b-95d26158480e-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:13.974837 master-0 kubenswrapper[31830]: I0319 12:35:13.973626 31830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82a35ae5-08db-4571-977b-95d26158480e-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:14.784751 master-0 kubenswrapper[31830]: I0319 12:35:14.784688 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-dfsd7" Mar 19 12:35:14.785956 master-0 kubenswrapper[31830]: I0319 12:35:14.784703 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-t4w4t" event={"ID":"78caf503-3472-47a9-9107-4d260f898fb2","Type":"ContainerStarted","Data":"7436c639eb9f8491ca4d4f335c8422f72e856a969b84b5eef85431950e8c53ad"} Mar 19 12:35:14.914558 master-0 kubenswrapper[31830]: I0319 12:35:14.914479 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" Mar 19 12:35:14.976012 master-0 kubenswrapper[31830]: I0319 12:35:14.975889 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-t4w4t" podStartSLOduration=3.4060703820000002 podStartE2EDuration="12.975850017s" podCreationTimestamp="2026-03-19 12:35:02 +0000 UTC" firstStartedPulling="2026-03-19 12:35:04.03817997 +0000 UTC m=+1242.587140674" lastFinishedPulling="2026-03-19 12:35:13.607959605 +0000 UTC m=+1252.156920309" observedRunningTime="2026-03-19 12:35:14.952204604 +0000 UTC m=+1253.501165328" watchObservedRunningTime="2026-03-19 12:35:14.975850017 +0000 UTC m=+1253.524810721" Mar 19 12:35:15.423755 master-0 kubenswrapper[31830]: I0319 12:35:15.423567 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76986c7db5-mtxrk"] Mar 19 12:35:15.424009 master-0 kubenswrapper[31830]: I0319 12:35:15.423915 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76986c7db5-mtxrk" podUID="2b5c61f6-d6f5-4401-9b43-4817d229c0fe" containerName="dnsmasq-dns" containerID="cri-o://6a18f36d89e386a2d734febf3787bce1e02fa8ec48b42c67279c1f0c7fc794f5" gracePeriod=10 Mar 19 12:35:15.804077 master-0 kubenswrapper[31830]: I0319 12:35:15.804011 31830 generic.go:334] "Generic (PLEG): container finished" podID="2b5c61f6-d6f5-4401-9b43-4817d229c0fe" containerID="6a18f36d89e386a2d734febf3787bce1e02fa8ec48b42c67279c1f0c7fc794f5" exitCode=0 Mar 19 12:35:15.805034 master-0 kubenswrapper[31830]: I0319 12:35:15.804850 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-76986c7db5-mtxrk" event={"ID":"2b5c61f6-d6f5-4401-9b43-4817d229c0fe","Type":"ContainerDied","Data":"6a18f36d89e386a2d734febf3787bce1e02fa8ec48b42c67279c1f0c7fc794f5"} Mar 19 12:35:16.055177 master-0 kubenswrapper[31830]: I0319 12:35:16.055129 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76986c7db5-mtxrk" Mar 19 12:35:16.177143 master-0 kubenswrapper[31830]: I0319 12:35:16.177086 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-config\") pod \"2b5c61f6-d6f5-4401-9b43-4817d229c0fe\" (UID: \"2b5c61f6-d6f5-4401-9b43-4817d229c0fe\") " Mar 19 12:35:16.177382 master-0 kubenswrapper[31830]: I0319 12:35:16.177205 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjwj6\" (UniqueName: \"kubernetes.io/projected/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-kube-api-access-bjwj6\") pod \"2b5c61f6-d6f5-4401-9b43-4817d229c0fe\" (UID: \"2b5c61f6-d6f5-4401-9b43-4817d229c0fe\") " Mar 19 12:35:16.177678 master-0 kubenswrapper[31830]: I0319 12:35:16.177462 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-dns-swift-storage-0\") pod \"2b5c61f6-d6f5-4401-9b43-4817d229c0fe\" (UID: \"2b5c61f6-d6f5-4401-9b43-4817d229c0fe\") " Mar 19 12:35:16.177811 master-0 kubenswrapper[31830]: I0319 12:35:16.177782 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-ovsdbserver-sb\") pod \"2b5c61f6-d6f5-4401-9b43-4817d229c0fe\" (UID: \"2b5c61f6-d6f5-4401-9b43-4817d229c0fe\") " Mar 19 12:35:16.177920 master-0 kubenswrapper[31830]: I0319 12:35:16.177857 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-dns-svc\") pod \"2b5c61f6-d6f5-4401-9b43-4817d229c0fe\" (UID: \"2b5c61f6-d6f5-4401-9b43-4817d229c0fe\") " Mar 19 12:35:16.177920 master-0 kubenswrapper[31830]: I0319 12:35:16.177884 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-ovsdbserver-nb\") pod \"2b5c61f6-d6f5-4401-9b43-4817d229c0fe\" (UID: \"2b5c61f6-d6f5-4401-9b43-4817d229c0fe\") " Mar 19 12:35:16.183496 master-0 kubenswrapper[31830]: I0319 12:35:16.183431 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-kube-api-access-bjwj6" (OuterVolumeSpecName: "kube-api-access-bjwj6") pod "2b5c61f6-d6f5-4401-9b43-4817d229c0fe" (UID: "2b5c61f6-d6f5-4401-9b43-4817d229c0fe"). InnerVolumeSpecName "kube-api-access-bjwj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:35:16.236748 master-0 kubenswrapper[31830]: I0319 12:35:16.236695 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2b5c61f6-d6f5-4401-9b43-4817d229c0fe" (UID: "2b5c61f6-d6f5-4401-9b43-4817d229c0fe"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:16.237035 master-0 kubenswrapper[31830]: I0319 12:35:16.236832 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-config" (OuterVolumeSpecName: "config") pod "2b5c61f6-d6f5-4401-9b43-4817d229c0fe" (UID: "2b5c61f6-d6f5-4401-9b43-4817d229c0fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:16.249665 master-0 kubenswrapper[31830]: I0319 12:35:16.249106 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2b5c61f6-d6f5-4401-9b43-4817d229c0fe" (UID: "2b5c61f6-d6f5-4401-9b43-4817d229c0fe"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:16.250205 master-0 kubenswrapper[31830]: I0319 12:35:16.250126 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2b5c61f6-d6f5-4401-9b43-4817d229c0fe" (UID: "2b5c61f6-d6f5-4401-9b43-4817d229c0fe"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:16.251616 master-0 kubenswrapper[31830]: I0319 12:35:16.251567 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2b5c61f6-d6f5-4401-9b43-4817d229c0fe" (UID: "2b5c61f6-d6f5-4401-9b43-4817d229c0fe"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:16.281709 master-0 kubenswrapper[31830]: I0319 12:35:16.281637 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjwj6\" (UniqueName: \"kubernetes.io/projected/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-kube-api-access-bjwj6\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:16.282026 master-0 kubenswrapper[31830]: I0319 12:35:16.281984 31830 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:16.282026 master-0 kubenswrapper[31830]: I0319 12:35:16.282013 31830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:16.282026 master-0 kubenswrapper[31830]: I0319 12:35:16.282027 31830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:16.282173 master-0 kubenswrapper[31830]: I0319 12:35:16.282040 31830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:16.282173 master-0 kubenswrapper[31830]: I0319 12:35:16.282052 31830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b5c61f6-d6f5-4401-9b43-4817d229c0fe-config\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:16.821319 master-0 kubenswrapper[31830]: I0319 12:35:16.821203 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76986c7db5-mtxrk" event={"ID":"2b5c61f6-d6f5-4401-9b43-4817d229c0fe","Type":"ContainerDied","Data":"048a9c2eb39f3975da8bef1e95e60f81e9261d4d3b04820c7570b17ea06aea8b"} Mar 19 12:35:16.821319 master-0 kubenswrapper[31830]: I0319 12:35:16.821285 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76986c7db5-mtxrk" Mar 19 12:35:16.821911 master-0 kubenswrapper[31830]: I0319 12:35:16.821297 31830 scope.go:117] "RemoveContainer" containerID="6a18f36d89e386a2d734febf3787bce1e02fa8ec48b42c67279c1f0c7fc794f5" Mar 19 12:35:16.864475 master-0 kubenswrapper[31830]: I0319 12:35:16.863007 31830 scope.go:117] "RemoveContainer" containerID="be14902d5a14c9bfeefced21bd8a53c67fe1951028b16938574e435937c7c55f" Mar 19 12:35:17.083232 master-0 kubenswrapper[31830]: I0319 12:35:17.083092 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76986c7db5-mtxrk"] Mar 19 12:35:17.285088 master-0 kubenswrapper[31830]: I0319 12:35:17.284269 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76986c7db5-mtxrk"] Mar 19 12:35:17.705878 master-0 kubenswrapper[31830]: I0319 12:35:17.705761 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b5c61f6-d6f5-4401-9b43-4817d229c0fe" path="/var/lib/kubelet/pods/2b5c61f6-d6f5-4401-9b43-4817d229c0fe/volumes" Mar 19 12:35:18.912319 master-0 kubenswrapper[31830]: I0319 12:35:18.912191 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-577795c79c-8r4lh"] Mar 19 12:35:18.913348 master-0 kubenswrapper[31830]: E0319 12:35:18.913118 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b5c61f6-d6f5-4401-9b43-4817d229c0fe" containerName="init" Mar 19 12:35:18.913348 master-0 kubenswrapper[31830]: I0319 12:35:18.913144 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b5c61f6-d6f5-4401-9b43-4817d229c0fe" containerName="init" Mar 19 12:35:18.913348 master-0 kubenswrapper[31830]: E0319 12:35:18.913205 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82a35ae5-08db-4571-977b-95d26158480e" containerName="glance-db-sync" Mar 19 12:35:18.913348 master-0 kubenswrapper[31830]: I0319 12:35:18.913215 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="82a35ae5-08db-4571-977b-95d26158480e" containerName="glance-db-sync" Mar 19 12:35:18.913348 master-0 kubenswrapper[31830]: E0319 12:35:18.913228 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9559d792-d79a-48bf-9ad0-b157b0e2684f" containerName="mariadb-database-create" Mar 19 12:35:18.913348 master-0 kubenswrapper[31830]: I0319 12:35:18.913237 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="9559d792-d79a-48bf-9ad0-b157b0e2684f" containerName="mariadb-database-create" Mar 19 12:35:18.913348 master-0 kubenswrapper[31830]: E0319 12:35:18.913260 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48c9a901-d8c0-453d-8525-bf69f7710e6b" containerName="mariadb-database-create" Mar 19 12:35:18.913348 master-0 kubenswrapper[31830]: I0319 12:35:18.913268 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="48c9a901-d8c0-453d-8525-bf69f7710e6b" containerName="mariadb-database-create" Mar 19 12:35:18.913348 master-0 kubenswrapper[31830]: E0319 12:35:18.913285 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b5c61f6-d6f5-4401-9b43-4817d229c0fe" containerName="dnsmasq-dns" Mar 19 12:35:18.913348 master-0 kubenswrapper[31830]: I0319 12:35:18.913293 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b5c61f6-d6f5-4401-9b43-4817d229c0fe" containerName="dnsmasq-dns" Mar 19 12:35:18.913348 master-0 kubenswrapper[31830]: E0319 12:35:18.913313 31830 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b5409fe3-cc0f-4ba4-a1f3-93f2ae986204" containerName="mariadb-account-create-update" Mar 19 12:35:18.913348 master-0 kubenswrapper[31830]: I0319 12:35:18.913324 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5409fe3-cc0f-4ba4-a1f3-93f2ae986204" containerName="mariadb-account-create-update" Mar 19 12:35:18.913348 master-0 kubenswrapper[31830]: E0319 12:35:18.913363 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="373619bf-a142-44fd-b4b4-25d7cc74dda4" containerName="mariadb-account-create-update" Mar 19 12:35:18.913348 master-0 kubenswrapper[31830]: I0319 12:35:18.913375 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="373619bf-a142-44fd-b4b4-25d7cc74dda4" containerName="mariadb-account-create-update" Mar 19 12:35:18.914094 master-0 kubenswrapper[31830]: I0319 12:35:18.913715 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="373619bf-a142-44fd-b4b4-25d7cc74dda4" containerName="mariadb-account-create-update" Mar 19 12:35:18.914094 master-0 kubenswrapper[31830]: I0319 12:35:18.913763 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5409fe3-cc0f-4ba4-a1f3-93f2ae986204" containerName="mariadb-account-create-update" Mar 19 12:35:18.914094 master-0 kubenswrapper[31830]: I0319 12:35:18.913788 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="48c9a901-d8c0-453d-8525-bf69f7710e6b" containerName="mariadb-database-create" Mar 19 12:35:18.914094 master-0 kubenswrapper[31830]: I0319 12:35:18.913830 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b5c61f6-d6f5-4401-9b43-4817d229c0fe" containerName="dnsmasq-dns" Mar 19 12:35:18.914094 master-0 kubenswrapper[31830]: I0319 12:35:18.913845 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="82a35ae5-08db-4571-977b-95d26158480e" containerName="glance-db-sync" Mar 19 12:35:18.914094 master-0 kubenswrapper[31830]: I0319 12:35:18.913858 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="9559d792-d79a-48bf-9ad0-b157b0e2684f" containerName="mariadb-database-create" Mar 19 12:35:18.915933 master-0 kubenswrapper[31830]: I0319 12:35:18.915875 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-577795c79c-8r4lh" Mar 19 12:35:19.065038 master-0 kubenswrapper[31830]: I0319 12:35:19.062715 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-config\") pod \"dnsmasq-dns-577795c79c-8r4lh\" (UID: \"28ea1273-426e-46b7-b57f-7d6bbadddd08\") " pod="openstack/dnsmasq-dns-577795c79c-8r4lh" Mar 19 12:35:19.065038 master-0 kubenswrapper[31830]: I0319 12:35:19.062781 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-ovsdbserver-sb\") pod \"dnsmasq-dns-577795c79c-8r4lh\" (UID: \"28ea1273-426e-46b7-b57f-7d6bbadddd08\") " pod="openstack/dnsmasq-dns-577795c79c-8r4lh" Mar 19 12:35:19.065038 master-0 kubenswrapper[31830]: I0319 12:35:19.062992 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-ovsdbserver-nb\") pod \"dnsmasq-dns-577795c79c-8r4lh\" (UID: \"28ea1273-426e-46b7-b57f-7d6bbadddd08\") " pod="openstack/dnsmasq-dns-577795c79c-8r4lh" Mar 19 12:35:19.065038 master-0 kubenswrapper[31830]: I0319 12:35:19.063022 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-edpm-a\") pod \"dnsmasq-dns-577795c79c-8r4lh\" (UID: \"28ea1273-426e-46b7-b57f-7d6bbadddd08\") " pod="openstack/dnsmasq-dns-577795c79c-8r4lh" Mar 19 12:35:19.065038 master-0 kubenswrapper[31830]: I0319 12:35:19.063143 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-dns-swift-storage-0\") pod \"dnsmasq-dns-577795c79c-8r4lh\" (UID: \"28ea1273-426e-46b7-b57f-7d6bbadddd08\") " pod="openstack/dnsmasq-dns-577795c79c-8r4lh" Mar 19 12:35:19.065038 master-0 kubenswrapper[31830]: I0319 12:35:19.063317 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-dns-svc\") pod \"dnsmasq-dns-577795c79c-8r4lh\" (UID: \"28ea1273-426e-46b7-b57f-7d6bbadddd08\") " pod="openstack/dnsmasq-dns-577795c79c-8r4lh" Mar 19 12:35:19.065038 master-0 kubenswrapper[31830]: I0319 12:35:19.063427 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfd8p\" (UniqueName: \"kubernetes.io/projected/28ea1273-426e-46b7-b57f-7d6bbadddd08-kube-api-access-bfd8p\") pod \"dnsmasq-dns-577795c79c-8r4lh\" (UID: \"28ea1273-426e-46b7-b57f-7d6bbadddd08\") " pod="openstack/dnsmasq-dns-577795c79c-8r4lh" Mar 19 12:35:19.065038 master-0 kubenswrapper[31830]: I0319 12:35:19.063479 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-edpm-b\") pod \"dnsmasq-dns-577795c79c-8r4lh\" (UID: \"28ea1273-426e-46b7-b57f-7d6bbadddd08\") " pod="openstack/dnsmasq-dns-577795c79c-8r4lh" Mar 19 12:35:19.167681 master-0 kubenswrapper[31830]: I0319 12:35:19.167519 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-577795c79c-8r4lh"] Mar 19 12:35:19.168248 master-0 kubenswrapper[31830]: I0319 12:35:19.168189 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-ovsdbserver-nb\") pod \"dnsmasq-dns-577795c79c-8r4lh\" (UID: \"28ea1273-426e-46b7-b57f-7d6bbadddd08\") " pod="openstack/dnsmasq-dns-577795c79c-8r4lh" Mar 19 12:35:19.169478 master-0 kubenswrapper[31830]: I0319 12:35:19.169439 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-ovsdbserver-nb\") pod \"dnsmasq-dns-577795c79c-8r4lh\" (UID: \"28ea1273-426e-46b7-b57f-7d6bbadddd08\") " pod="openstack/dnsmasq-dns-577795c79c-8r4lh" Mar 19 12:35:19.169562 master-0 kubenswrapper[31830]: I0319 12:35:19.169488 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-edpm-a\") pod \"dnsmasq-dns-577795c79c-8r4lh\" (UID: \"28ea1273-426e-46b7-b57f-7d6bbadddd08\") " pod="openstack/dnsmasq-dns-577795c79c-8r4lh" Mar 19 12:35:19.169562 master-0 kubenswrapper[31830]: I0319 12:35:19.169515 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-dns-swift-storage-0\") pod \"dnsmasq-dns-577795c79c-8r4lh\" (UID: \"28ea1273-426e-46b7-b57f-7d6bbadddd08\") " pod="openstack/dnsmasq-dns-577795c79c-8r4lh" Mar 19 12:35:19.169562 master-0 kubenswrapper[31830]: I0319 12:35:19.169547 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-dns-svc\") pod \"dnsmasq-dns-577795c79c-8r4lh\" (UID: \"28ea1273-426e-46b7-b57f-7d6bbadddd08\") " pod="openstack/dnsmasq-dns-577795c79c-8r4lh" Mar 19 12:35:19.169664 master-0 kubenswrapper[31830]: I0319 12:35:19.169623 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfd8p\" (UniqueName: \"kubernetes.io/projected/28ea1273-426e-46b7-b57f-7d6bbadddd08-kube-api-access-bfd8p\") pod \"dnsmasq-dns-577795c79c-8r4lh\" (UID: \"28ea1273-426e-46b7-b57f-7d6bbadddd08\") " pod="openstack/dnsmasq-dns-577795c79c-8r4lh" Mar 19 12:35:19.169773 master-0 kubenswrapper[31830]: I0319 12:35:19.169693 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-edpm-b\") pod \"dnsmasq-dns-577795c79c-8r4lh\" (UID: \"28ea1273-426e-46b7-b57f-7d6bbadddd08\") " pod="openstack/dnsmasq-dns-577795c79c-8r4lh" Mar 19 12:35:19.169837 master-0 kubenswrapper[31830]: I0319 12:35:19.169777 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-config\") pod \"dnsmasq-dns-577795c79c-8r4lh\" (UID: \"28ea1273-426e-46b7-b57f-7d6bbadddd08\") " pod="openstack/dnsmasq-dns-577795c79c-8r4lh" Mar 19 12:35:19.169837 master-0 kubenswrapper[31830]: I0319 12:35:19.169821 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-ovsdbserver-sb\") pod \"dnsmasq-dns-577795c79c-8r4lh\" (UID: 
\"28ea1273-426e-46b7-b57f-7d6bbadddd08\") " pod="openstack/dnsmasq-dns-577795c79c-8r4lh" Mar 19 12:35:19.170990 master-0 kubenswrapper[31830]: I0319 12:35:19.170947 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-ovsdbserver-sb\") pod \"dnsmasq-dns-577795c79c-8r4lh\" (UID: \"28ea1273-426e-46b7-b57f-7d6bbadddd08\") " pod="openstack/dnsmasq-dns-577795c79c-8r4lh" Mar 19 12:35:19.171585 master-0 kubenswrapper[31830]: I0319 12:35:19.171549 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-edpm-a\") pod \"dnsmasq-dns-577795c79c-8r4lh\" (UID: \"28ea1273-426e-46b7-b57f-7d6bbadddd08\") " pod="openstack/dnsmasq-dns-577795c79c-8r4lh" Mar 19 12:35:19.172563 master-0 kubenswrapper[31830]: I0319 12:35:19.172534 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-config\") pod \"dnsmasq-dns-577795c79c-8r4lh\" (UID: \"28ea1273-426e-46b7-b57f-7d6bbadddd08\") " pod="openstack/dnsmasq-dns-577795c79c-8r4lh" Mar 19 12:35:19.172638 master-0 kubenswrapper[31830]: I0319 12:35:19.172603 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-edpm-b\") pod \"dnsmasq-dns-577795c79c-8r4lh\" (UID: \"28ea1273-426e-46b7-b57f-7d6bbadddd08\") " pod="openstack/dnsmasq-dns-577795c79c-8r4lh" Mar 19 12:35:19.173270 master-0 kubenswrapper[31830]: I0319 12:35:19.173237 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-dns-svc\") pod \"dnsmasq-dns-577795c79c-8r4lh\" (UID: \"28ea1273-426e-46b7-b57f-7d6bbadddd08\") " pod="openstack/dnsmasq-dns-577795c79c-8r4lh" Mar 19 12:35:19.173669 master-0 kubenswrapper[31830]: I0319 12:35:19.173628 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-dns-swift-storage-0\") pod \"dnsmasq-dns-577795c79c-8r4lh\" (UID: \"28ea1273-426e-46b7-b57f-7d6bbadddd08\") " pod="openstack/dnsmasq-dns-577795c79c-8r4lh" Mar 19 12:35:19.627232 master-0 kubenswrapper[31830]: I0319 12:35:19.627120 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfd8p\" (UniqueName: \"kubernetes.io/projected/28ea1273-426e-46b7-b57f-7d6bbadddd08-kube-api-access-bfd8p\") pod \"dnsmasq-dns-577795c79c-8r4lh\" (UID: \"28ea1273-426e-46b7-b57f-7d6bbadddd08\") " pod="openstack/dnsmasq-dns-577795c79c-8r4lh" Mar 19 12:35:19.844823 master-0 kubenswrapper[31830]: I0319 12:35:19.841185 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-577795c79c-8r4lh" Mar 19 12:35:20.378814 master-0 kubenswrapper[31830]: I0319 12:35:20.376359 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-577795c79c-8r4lh"] Mar 19 12:35:20.416065 master-0 kubenswrapper[31830]: W0319 12:35:20.415503 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28ea1273_426e_46b7_b57f_7d6bbadddd08.slice/crio-a997ce4ba7f16c17e8c8b37397fb5a87126d060f3ff874b60920cf5ad5156b18 WatchSource:0}: Error finding container a997ce4ba7f16c17e8c8b37397fb5a87126d060f3ff874b60920cf5ad5156b18: Status 404 returned error can't find the container with id a997ce4ba7f16c17e8c8b37397fb5a87126d060f3ff874b60920cf5ad5156b18 Mar 19 12:35:20.902561 master-0 kubenswrapper[31830]: I0319 12:35:20.902340 31830 generic.go:334] "Generic (PLEG): container finished" podID="28ea1273-426e-46b7-b57f-7d6bbadddd08" containerID="8d1ac8ee1f8360dc06e0adf1aeb52ec9f0d7891f07e9f1014d748c748b70b638" exitCode=0 Mar 19 12:35:20.902561 master-0 kubenswrapper[31830]: I0319 12:35:20.902406 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-577795c79c-8r4lh" event={"ID":"28ea1273-426e-46b7-b57f-7d6bbadddd08","Type":"ContainerDied","Data":"8d1ac8ee1f8360dc06e0adf1aeb52ec9f0d7891f07e9f1014d748c748b70b638"} Mar 19 12:35:20.902561 master-0 kubenswrapper[31830]: I0319 12:35:20.902484 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-577795c79c-8r4lh" event={"ID":"28ea1273-426e-46b7-b57f-7d6bbadddd08","Type":"ContainerStarted","Data":"a997ce4ba7f16c17e8c8b37397fb5a87126d060f3ff874b60920cf5ad5156b18"} Mar 19 12:35:20.906778 master-0 kubenswrapper[31830]: I0319 12:35:20.906741 31830 generic.go:334] "Generic (PLEG): container finished" podID="78caf503-3472-47a9-9107-4d260f898fb2" containerID="7436c639eb9f8491ca4d4f335c8422f72e856a969b84b5eef85431950e8c53ad" exitCode=0 Mar 19 12:35:20.906877 master-0 kubenswrapper[31830]: I0319 12:35:20.906777 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-t4w4t" event={"ID":"78caf503-3472-47a9-9107-4d260f898fb2","Type":"ContainerDied","Data":"7436c639eb9f8491ca4d4f335c8422f72e856a969b84b5eef85431950e8c53ad"} Mar 19 12:35:21.925881 master-0 kubenswrapper[31830]: I0319 12:35:21.925501 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-577795c79c-8r4lh" event={"ID":"28ea1273-426e-46b7-b57f-7d6bbadddd08","Type":"ContainerStarted","Data":"c6b6c2dfd53f8ce71a8ac8df5e32bb29b6a0598341925ba5374d674dbcfd0c09"} Mar 19 12:35:21.951516 master-0 kubenswrapper[31830]: I0319 12:35:21.951441 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-577795c79c-8r4lh" podStartSLOduration=3.9514174779999998 podStartE2EDuration="3.951417478s" podCreationTimestamp="2026-03-19 12:35:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:35:21.945884747 +0000 UTC m=+1260.494845461" watchObservedRunningTime="2026-03-19 12:35:21.951417478 +0000 UTC m=+1260.500378182" Mar 19 12:35:22.489143 master-0 kubenswrapper[31830]: I0319 12:35:22.489092 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-t4w4t" Mar 19 12:35:22.616491 master-0 kubenswrapper[31830]: I0319 12:35:22.616442 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4s2j\" (UniqueName: \"kubernetes.io/projected/78caf503-3472-47a9-9107-4d260f898fb2-kube-api-access-s4s2j\") pod \"78caf503-3472-47a9-9107-4d260f898fb2\" (UID: \"78caf503-3472-47a9-9107-4d260f898fb2\") " Mar 19 12:35:22.616905 master-0 kubenswrapper[31830]: I0319 12:35:22.616880 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78caf503-3472-47a9-9107-4d260f898fb2-combined-ca-bundle\") pod \"78caf503-3472-47a9-9107-4d260f898fb2\" (UID: \"78caf503-3472-47a9-9107-4d260f898fb2\") " Mar 19 12:35:22.617063 master-0 kubenswrapper[31830]: I0319 12:35:22.617050 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78caf503-3472-47a9-9107-4d260f898fb2-config-data\") pod \"78caf503-3472-47a9-9107-4d260f898fb2\" (UID: \"78caf503-3472-47a9-9107-4d260f898fb2\") " Mar 19 12:35:22.630744 master-0 kubenswrapper[31830]: I0319 12:35:22.630553 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78caf503-3472-47a9-9107-4d260f898fb2-kube-api-access-s4s2j" (OuterVolumeSpecName: "kube-api-access-s4s2j") pod "78caf503-3472-47a9-9107-4d260f898fb2" (UID: "78caf503-3472-47a9-9107-4d260f898fb2"). InnerVolumeSpecName "kube-api-access-s4s2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:35:22.647049 master-0 kubenswrapper[31830]: I0319 12:35:22.646830 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78caf503-3472-47a9-9107-4d260f898fb2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78caf503-3472-47a9-9107-4d260f898fb2" (UID: "78caf503-3472-47a9-9107-4d260f898fb2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:35:22.686936 master-0 kubenswrapper[31830]: I0319 12:35:22.686868 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78caf503-3472-47a9-9107-4d260f898fb2-config-data" (OuterVolumeSpecName: "config-data") pod "78caf503-3472-47a9-9107-4d260f898fb2" (UID: "78caf503-3472-47a9-9107-4d260f898fb2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:35:22.733218 master-0 kubenswrapper[31830]: I0319 12:35:22.733165 31830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78caf503-3472-47a9-9107-4d260f898fb2-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:22.733218 master-0 kubenswrapper[31830]: I0319 12:35:22.733214 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4s2j\" (UniqueName: \"kubernetes.io/projected/78caf503-3472-47a9-9107-4d260f898fb2-kube-api-access-s4s2j\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:22.733218 master-0 kubenswrapper[31830]: I0319 12:35:22.733227 31830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78caf503-3472-47a9-9107-4d260f898fb2-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:22.942224 master-0 kubenswrapper[31830]: I0319 12:35:22.941986 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-t4w4t" Mar 19 12:35:22.943033 master-0 kubenswrapper[31830]: I0319 12:35:22.942719 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-t4w4t" event={"ID":"78caf503-3472-47a9-9107-4d260f898fb2","Type":"ContainerDied","Data":"681f12f51acc829ca22f6aec349abe93d28636bca85b150aa6e04ee3b31c770e"} Mar 19 12:35:22.943033 master-0 kubenswrapper[31830]: I0319 12:35:22.942780 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="681f12f51acc829ca22f6aec349abe93d28636bca85b150aa6e04ee3b31c770e" Mar 19 12:35:22.943033 master-0 kubenswrapper[31830]: I0319 12:35:22.942968 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-577795c79c-8r4lh" Mar 19 12:35:24.204605 master-0 kubenswrapper[31830]: I0319 12:35:24.204537 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-pg69z"] Mar 19 12:35:24.209578 master-0 kubenswrapper[31830]: E0319 12:35:24.205296 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78caf503-3472-47a9-9107-4d260f898fb2" containerName="keystone-db-sync" Mar 19 12:35:24.209578 master-0 kubenswrapper[31830]: I0319 12:35:24.205328 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="78caf503-3472-47a9-9107-4d260f898fb2" containerName="keystone-db-sync" Mar 19 12:35:24.209578 master-0 kubenswrapper[31830]: I0319 12:35:24.205807 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="78caf503-3472-47a9-9107-4d260f898fb2" containerName="keystone-db-sync" Mar 19 12:35:24.209578 master-0 kubenswrapper[31830]: I0319 12:35:24.207512 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pg69z" Mar 19 12:35:24.217334 master-0 kubenswrapper[31830]: I0319 12:35:24.216437 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 19 12:35:24.217334 master-0 kubenswrapper[31830]: I0319 12:35:24.216504 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 19 12:35:24.217994 master-0 kubenswrapper[31830]: I0319 12:35:24.217763 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 19 12:35:24.247851 master-0 kubenswrapper[31830]: I0319 12:35:24.243477 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 19 12:35:24.261654 master-0 kubenswrapper[31830]: I0319 12:35:24.257905 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pg69z"] Mar 19 12:35:24.293271 master-0 kubenswrapper[31830]: I0319 12:35:24.292726 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-577795c79c-8r4lh"] Mar 19 12:35:24.303871 master-0 kubenswrapper[31830]: I0319 12:35:24.303623 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-fernet-keys\") pod \"keystone-bootstrap-pg69z\" (UID: \"4952f965-eb25-4397-bdd3-b8a75e9eb4ed\") " pod="openstack/keystone-bootstrap-pg69z" Mar 19 12:35:24.303871 master-0 kubenswrapper[31830]: I0319 12:35:24.303725 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr2gx\" (UniqueName: \"kubernetes.io/projected/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-kube-api-access-qr2gx\") pod \"keystone-bootstrap-pg69z\" (UID: \"4952f965-eb25-4397-bdd3-b8a75e9eb4ed\") " pod="openstack/keystone-bootstrap-pg69z" Mar 19 12:35:24.303871 master-0 kubenswrapper[31830]: I0319 12:35:24.303747 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-config-data\") pod \"keystone-bootstrap-pg69z\" (UID: \"4952f965-eb25-4397-bdd3-b8a75e9eb4ed\") " pod="openstack/keystone-bootstrap-pg69z" Mar 19 12:35:24.303871 master-0 kubenswrapper[31830]: I0319 12:35:24.303772 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-combined-ca-bundle\") pod \"keystone-bootstrap-pg69z\" (UID: \"4952f965-eb25-4397-bdd3-b8a75e9eb4ed\") " pod="openstack/keystone-bootstrap-pg69z" Mar 19 12:35:24.303871 master-0 kubenswrapper[31830]: I0319 12:35:24.303789 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-credential-keys\") pod \"keystone-bootstrap-pg69z\" (UID: \"4952f965-eb25-4397-bdd3-b8a75e9eb4ed\") " pod="openstack/keystone-bootstrap-pg69z" Mar 19 12:35:24.303871 master-0 kubenswrapper[31830]: I0319 12:35:24.303831 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-scripts\") pod \"keystone-bootstrap-pg69z\" (UID: \"4952f965-eb25-4397-bdd3-b8a75e9eb4ed\") " 
pod="openstack/keystone-bootstrap-pg69z" Mar 19 12:35:24.400695 master-0 kubenswrapper[31830]: I0319 12:35:24.400635 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5644f8c597-hdr4z"] Mar 19 12:35:24.406231 master-0 kubenswrapper[31830]: I0319 12:35:24.405651 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-fernet-keys\") pod \"keystone-bootstrap-pg69z\" (UID: \"4952f965-eb25-4397-bdd3-b8a75e9eb4ed\") " pod="openstack/keystone-bootstrap-pg69z" Mar 19 12:35:24.406231 master-0 kubenswrapper[31830]: I0319 12:35:24.405816 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr2gx\" (UniqueName: \"kubernetes.io/projected/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-kube-api-access-qr2gx\") pod \"keystone-bootstrap-pg69z\" (UID: \"4952f965-eb25-4397-bdd3-b8a75e9eb4ed\") " pod="openstack/keystone-bootstrap-pg69z" Mar 19 12:35:24.406231 master-0 kubenswrapper[31830]: I0319 12:35:24.405846 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-config-data\") pod \"keystone-bootstrap-pg69z\" (UID: \"4952f965-eb25-4397-bdd3-b8a75e9eb4ed\") " pod="openstack/keystone-bootstrap-pg69z" Mar 19 12:35:24.406231 master-0 kubenswrapper[31830]: I0319 12:35:24.405878 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-combined-ca-bundle\") pod \"keystone-bootstrap-pg69z\" (UID: \"4952f965-eb25-4397-bdd3-b8a75e9eb4ed\") " pod="openstack/keystone-bootstrap-pg69z" Mar 19 12:35:24.406231 master-0 kubenswrapper[31830]: I0319 12:35:24.405900 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-credential-keys\") pod \"keystone-bootstrap-pg69z\" (UID: \"4952f965-eb25-4397-bdd3-b8a75e9eb4ed\") " pod="openstack/keystone-bootstrap-pg69z" Mar 19 12:35:24.406231 master-0 kubenswrapper[31830]: I0319 12:35:24.405934 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-scripts\") pod \"keystone-bootstrap-pg69z\" (UID: \"4952f965-eb25-4397-bdd3-b8a75e9eb4ed\") " pod="openstack/keystone-bootstrap-pg69z" Mar 19 12:35:24.417824 master-0 kubenswrapper[31830]: I0319 12:35:24.410391 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-scripts\") pod \"keystone-bootstrap-pg69z\" (UID: \"4952f965-eb25-4397-bdd3-b8a75e9eb4ed\") " pod="openstack/keystone-bootstrap-pg69z" Mar 19 12:35:24.417824 master-0 kubenswrapper[31830]: I0319 12:35:24.411623 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-fernet-keys\") pod \"keystone-bootstrap-pg69z\" (UID: \"4952f965-eb25-4397-bdd3-b8a75e9eb4ed\") " pod="openstack/keystone-bootstrap-pg69z" Mar 19 12:35:24.417824 master-0 kubenswrapper[31830]: I0319 12:35:24.415007 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5644f8c597-hdr4z" Mar 19 12:35:24.425828 master-0 kubenswrapper[31830]: I0319 12:35:24.423125 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-config-data\") pod \"keystone-bootstrap-pg69z\" (UID: \"4952f965-eb25-4397-bdd3-b8a75e9eb4ed\") " pod="openstack/keystone-bootstrap-pg69z" Mar 19 12:35:24.434715 master-0 kubenswrapper[31830]: I0319 12:35:24.431213 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5644f8c597-hdr4z"] Mar 19 12:35:24.438819 master-0 kubenswrapper[31830]: I0319 12:35:24.435412 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-credential-keys\") pod \"keystone-bootstrap-pg69z\" (UID: \"4952f965-eb25-4397-bdd3-b8a75e9eb4ed\") " pod="openstack/keystone-bootstrap-pg69z" Mar 19 12:35:24.445824 master-0 kubenswrapper[31830]: I0319 12:35:24.440973 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-combined-ca-bundle\") pod \"keystone-bootstrap-pg69z\" (UID: \"4952f965-eb25-4397-bdd3-b8a75e9eb4ed\") " pod="openstack/keystone-bootstrap-pg69z" Mar 19 12:35:24.446604 master-0 kubenswrapper[31830]: I0319 12:35:24.446552 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-zl2rr"] Mar 19 12:35:24.447964 master-0 kubenswrapper[31830]: I0319 12:35:24.447936 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-zl2rr" Mar 19 12:35:24.448562 master-0 kubenswrapper[31830]: I0319 12:35:24.448521 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr2gx\" (UniqueName: \"kubernetes.io/projected/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-kube-api-access-qr2gx\") pod \"keystone-bootstrap-pg69z\" (UID: \"4952f965-eb25-4397-bdd3-b8a75e9eb4ed\") " pod="openstack/keystone-bootstrap-pg69z" Mar 19 12:35:24.455408 master-0 kubenswrapper[31830]: I0319 12:35:24.452755 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 19 12:35:24.455408 master-0 kubenswrapper[31830]: I0319 12:35:24.453155 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 19 12:35:24.473772 master-0 kubenswrapper[31830]: I0319 12:35:24.473643 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-zl2rr"] Mar 19 12:35:24.582826 master-0 kubenswrapper[31830]: I0319 12:35:24.582349 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8csr8\" (UniqueName: \"kubernetes.io/projected/b3fdc7f4-19ec-41b8-8eea-9f735f221281-kube-api-access-8csr8\") pod \"dnsmasq-dns-5644f8c597-hdr4z\" (UID: \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\") " pod="openstack/dnsmasq-dns-5644f8c597-hdr4z" Mar 19 12:35:24.582826 master-0 kubenswrapper[31830]: I0319 12:35:24.582460 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-config\") pod \"dnsmasq-dns-5644f8c597-hdr4z\" (UID: \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\") " pod="openstack/dnsmasq-dns-5644f8c597-hdr4z" Mar 19 12:35:24.582826 
master-0 kubenswrapper[31830]: I0319 12:35:24.582513 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-edpm-b\") pod \"dnsmasq-dns-5644f8c597-hdr4z\" (UID: \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\") " pod="openstack/dnsmasq-dns-5644f8c597-hdr4z" Mar 19 12:35:24.582826 master-0 kubenswrapper[31830]: I0319 12:35:24.582671 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-ovsdbserver-nb\") pod \"dnsmasq-dns-5644f8c597-hdr4z\" (UID: \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\") " pod="openstack/dnsmasq-dns-5644f8c597-hdr4z" Mar 19 12:35:24.582826 master-0 kubenswrapper[31830]: I0319 12:35:24.582704 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14936556-fa0b-48fb-91e5-0ca806871a6c-combined-ca-bundle\") pod \"neutron-db-sync-zl2rr\" (UID: \"14936556-fa0b-48fb-91e5-0ca806871a6c\") " pod="openstack/neutron-db-sync-zl2rr" Mar 19 12:35:24.582826 master-0 kubenswrapper[31830]: I0319 12:35:24.582831 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-ovsdbserver-sb\") pod \"dnsmasq-dns-5644f8c597-hdr4z\" (UID: \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\") " pod="openstack/dnsmasq-dns-5644f8c597-hdr4z" Mar 19 12:35:24.583283 master-0 kubenswrapper[31830]: I0319 12:35:24.582948 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-dns-swift-storage-0\") pod \"dnsmasq-dns-5644f8c597-hdr4z\" (UID: \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\") " pod="openstack/dnsmasq-dns-5644f8c597-hdr4z" Mar 19 12:35:24.584938 master-0 kubenswrapper[31830]: I0319 12:35:24.583151 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd6hc\" (UniqueName: \"kubernetes.io/projected/14936556-fa0b-48fb-91e5-0ca806871a6c-kube-api-access-vd6hc\") pod \"neutron-db-sync-zl2rr\" (UID: \"14936556-fa0b-48fb-91e5-0ca806871a6c\") " pod="openstack/neutron-db-sync-zl2rr" Mar 19 12:35:24.589606 master-0 kubenswrapper[31830]: I0319 12:35:24.589363 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/14936556-fa0b-48fb-91e5-0ca806871a6c-config\") pod \"neutron-db-sync-zl2rr\" (UID: \"14936556-fa0b-48fb-91e5-0ca806871a6c\") " pod="openstack/neutron-db-sync-zl2rr" Mar 19 12:35:24.602129 master-0 kubenswrapper[31830]: I0319 12:35:24.599042 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-edpm-a\") pod \"dnsmasq-dns-5644f8c597-hdr4z\" (UID: \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\") " pod="openstack/dnsmasq-dns-5644f8c597-hdr4z" Mar 19 12:35:24.602129 master-0 kubenswrapper[31830]: I0319 12:35:24.599208 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-dns-svc\") pod \"dnsmasq-dns-5644f8c597-hdr4z\" (UID: \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\") " pod="openstack/dnsmasq-dns-5644f8c597-hdr4z" Mar 19 12:35:24.703831 master-0 kubenswrapper[31830]: I0319 12:35:24.698498 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pg69z" Mar 19 12:35:24.703831 master-0 kubenswrapper[31830]: I0319 12:35:24.701042 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8csr8\" (UniqueName: \"kubernetes.io/projected/b3fdc7f4-19ec-41b8-8eea-9f735f221281-kube-api-access-8csr8\") pod \"dnsmasq-dns-5644f8c597-hdr4z\" (UID: \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\") " pod="openstack/dnsmasq-dns-5644f8c597-hdr4z" Mar 19 12:35:24.703831 master-0 kubenswrapper[31830]: I0319 12:35:24.701118 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-config\") pod \"dnsmasq-dns-5644f8c597-hdr4z\" (UID: \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\") " pod="openstack/dnsmasq-dns-5644f8c597-hdr4z" Mar 19 12:35:24.703831 master-0 kubenswrapper[31830]: I0319 12:35:24.701148 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-edpm-b\") pod \"dnsmasq-dns-5644f8c597-hdr4z\" (UID: \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\") " pod="openstack/dnsmasq-dns-5644f8c597-hdr4z" Mar 19 12:35:24.703831 master-0 kubenswrapper[31830]: I0319 12:35:24.701230 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-ovsdbserver-nb\") pod \"dnsmasq-dns-5644f8c597-hdr4z\" (UID: \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\") " pod="openstack/dnsmasq-dns-5644f8c597-hdr4z" Mar 19 12:35:24.703831 master-0 kubenswrapper[31830]: I0319 12:35:24.701265 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14936556-fa0b-48fb-91e5-0ca806871a6c-combined-ca-bundle\") pod \"neutron-db-sync-zl2rr\" (UID: \"14936556-fa0b-48fb-91e5-0ca806871a6c\") " pod="openstack/neutron-db-sync-zl2rr" Mar 19 12:35:24.703831 master-0 kubenswrapper[31830]: I0319 12:35:24.701308 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-ovsdbserver-sb\") pod \"dnsmasq-dns-5644f8c597-hdr4z\" (UID: \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\") " pod="openstack/dnsmasq-dns-5644f8c597-hdr4z" Mar 19 12:35:24.703831 master-0 kubenswrapper[31830]: I0319 12:35:24.701364 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-dns-swift-storage-0\") pod \"dnsmasq-dns-5644f8c597-hdr4z\" (UID: \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\") " pod="openstack/dnsmasq-dns-5644f8c597-hdr4z" Mar 19 12:35:24.703831 master-0 kubenswrapper[31830]: I0319 12:35:24.701424 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd6hc\" (UniqueName: \"kubernetes.io/projected/14936556-fa0b-48fb-91e5-0ca806871a6c-kube-api-access-vd6hc\") pod \"neutron-db-sync-zl2rr\" (UID: 
\"14936556-fa0b-48fb-91e5-0ca806871a6c\") " pod="openstack/neutron-db-sync-zl2rr" Mar 19 12:35:24.703831 master-0 kubenswrapper[31830]: I0319 12:35:24.701519 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/14936556-fa0b-48fb-91e5-0ca806871a6c-config\") pod \"neutron-db-sync-zl2rr\" (UID: \"14936556-fa0b-48fb-91e5-0ca806871a6c\") " pod="openstack/neutron-db-sync-zl2rr" Mar 19 12:35:24.703831 master-0 kubenswrapper[31830]: I0319 12:35:24.701537 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-edpm-a\") pod \"dnsmasq-dns-5644f8c597-hdr4z\" (UID: \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\") " pod="openstack/dnsmasq-dns-5644f8c597-hdr4z" Mar 19 12:35:24.703831 master-0 kubenswrapper[31830]: I0319 12:35:24.701606 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-dns-svc\") pod \"dnsmasq-dns-5644f8c597-hdr4z\" (UID: \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\") " pod="openstack/dnsmasq-dns-5644f8c597-hdr4z" Mar 19 12:35:24.712597 master-0 kubenswrapper[31830]: I0319 12:35:24.705055 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-ovsdbserver-sb\") pod \"dnsmasq-dns-5644f8c597-hdr4z\" (UID: \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\") " pod="openstack/dnsmasq-dns-5644f8c597-hdr4z" Mar 19 12:35:24.712597 master-0 kubenswrapper[31830]: I0319 12:35:24.706013 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-config\") pod \"dnsmasq-dns-5644f8c597-hdr4z\" (UID: \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\") " pod="openstack/dnsmasq-dns-5644f8c597-hdr4z" Mar 19 12:35:24.712597 master-0 kubenswrapper[31830]: I0319 12:35:24.710737 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-ovsdbserver-nb\") pod \"dnsmasq-dns-5644f8c597-hdr4z\" (UID: \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\") " pod="openstack/dnsmasq-dns-5644f8c597-hdr4z" Mar 19 12:35:24.712597 master-0 kubenswrapper[31830]: I0319 12:35:24.712550 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-edpm-b\") pod \"dnsmasq-dns-5644f8c597-hdr4z\" (UID: \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\") " pod="openstack/dnsmasq-dns-5644f8c597-hdr4z" Mar 19 12:35:24.714607 master-0 kubenswrapper[31830]: I0319 12:35:24.714551 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-dns-swift-storage-0\") pod \"dnsmasq-dns-5644f8c597-hdr4z\" (UID: \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\") " pod="openstack/dnsmasq-dns-5644f8c597-hdr4z" Mar 19 12:35:24.721468 master-0 kubenswrapper[31830]: I0319 12:35:24.720386 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/14936556-fa0b-48fb-91e5-0ca806871a6c-config\") pod \"neutron-db-sync-zl2rr\" (UID: \"14936556-fa0b-48fb-91e5-0ca806871a6c\") " pod="openstack/neutron-db-sync-zl2rr" Mar 19 
12:35:24.721714 master-0 kubenswrapper[31830]: I0319 12:35:24.721573 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-dns-svc\") pod \"dnsmasq-dns-5644f8c597-hdr4z\" (UID: \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\") " pod="openstack/dnsmasq-dns-5644f8c597-hdr4z" Mar 19 12:35:24.737847 master-0 kubenswrapper[31830]: I0319 12:35:24.726258 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14936556-fa0b-48fb-91e5-0ca806871a6c-combined-ca-bundle\") pod \"neutron-db-sync-zl2rr\" (UID: \"14936556-fa0b-48fb-91e5-0ca806871a6c\") " pod="openstack/neutron-db-sync-zl2rr" Mar 19 12:35:24.737847 master-0 kubenswrapper[31830]: I0319 12:35:24.733458 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-cce1e-db-sync-n5228"] Mar 19 12:35:24.737847 master-0 kubenswrapper[31830]: I0319 12:35:24.734021 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-edpm-a\") pod \"dnsmasq-dns-5644f8c597-hdr4z\" (UID: \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\") " pod="openstack/dnsmasq-dns-5644f8c597-hdr4z" Mar 19 12:35:24.737847 master-0 kubenswrapper[31830]: I0319 12:35:24.735120 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cce1e-db-sync-n5228" Mar 19 12:35:24.737847 master-0 kubenswrapper[31830]: I0319 12:35:24.737512 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cce1e-config-data" Mar 19 12:35:24.752232 master-0 kubenswrapper[31830]: I0319 12:35:24.745374 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cce1e-scripts" Mar 19 12:35:24.752232 master-0 kubenswrapper[31830]: I0319 12:35:24.746776 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cce1e-db-sync-n5228"] Mar 19 12:35:24.784917 master-0 kubenswrapper[31830]: I0319 12:35:24.777601 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8csr8\" (UniqueName: \"kubernetes.io/projected/b3fdc7f4-19ec-41b8-8eea-9f735f221281-kube-api-access-8csr8\") pod \"dnsmasq-dns-5644f8c597-hdr4z\" (UID: \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\") " pod="openstack/dnsmasq-dns-5644f8c597-hdr4z" Mar 19 12:35:24.784917 master-0 kubenswrapper[31830]: I0319 12:35:24.782352 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd6hc\" (UniqueName: \"kubernetes.io/projected/14936556-fa0b-48fb-91e5-0ca806871a6c-kube-api-access-vd6hc\") pod \"neutron-db-sync-zl2rr\" (UID: \"14936556-fa0b-48fb-91e5-0ca806871a6c\") " pod="openstack/neutron-db-sync-zl2rr" Mar 19 12:35:24.807863 master-0 kubenswrapper[31830]: I0319 12:35:24.804394 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-etc-machine-id\") pod \"cinder-cce1e-db-sync-n5228\" (UID: \"538593b3-ec2b-4d6e-9f10-3e7add4f7b41\") " pod="openstack/cinder-cce1e-db-sync-n5228" Mar 19 12:35:24.807863 master-0 kubenswrapper[31830]: I0319 12:35:24.804501 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gmpq\" (UniqueName: 
\"kubernetes.io/projected/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-kube-api-access-4gmpq\") pod \"cinder-cce1e-db-sync-n5228\" (UID: \"538593b3-ec2b-4d6e-9f10-3e7add4f7b41\") " pod="openstack/cinder-cce1e-db-sync-n5228" Mar 19 12:35:24.807863 master-0 kubenswrapper[31830]: I0319 12:35:24.804534 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-scripts\") pod \"cinder-cce1e-db-sync-n5228\" (UID: \"538593b3-ec2b-4d6e-9f10-3e7add4f7b41\") " pod="openstack/cinder-cce1e-db-sync-n5228" Mar 19 12:35:24.807863 master-0 kubenswrapper[31830]: I0319 12:35:24.804575 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-db-sync-config-data\") pod \"cinder-cce1e-db-sync-n5228\" (UID: \"538593b3-ec2b-4d6e-9f10-3e7add4f7b41\") " pod="openstack/cinder-cce1e-db-sync-n5228" Mar 19 12:35:24.807863 master-0 kubenswrapper[31830]: I0319 12:35:24.804649 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-config-data\") pod \"cinder-cce1e-db-sync-n5228\" (UID: \"538593b3-ec2b-4d6e-9f10-3e7add4f7b41\") " pod="openstack/cinder-cce1e-db-sync-n5228" Mar 19 12:35:24.807863 master-0 kubenswrapper[31830]: I0319 12:35:24.804695 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-combined-ca-bundle\") pod \"cinder-cce1e-db-sync-n5228\" (UID: \"538593b3-ec2b-4d6e-9f10-3e7add4f7b41\") " pod="openstack/cinder-cce1e-db-sync-n5228" Mar 19 12:35:24.807863 master-0 kubenswrapper[31830]: I0319 12:35:24.806244 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5644f8c597-hdr4z"] Mar 19 12:35:24.825343 master-0 kubenswrapper[31830]: I0319 12:35:24.808593 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5644f8c597-hdr4z" Mar 19 12:35:24.867688 master-0 kubenswrapper[31830]: I0319 12:35:24.867492 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-lpz7t"] Mar 19 12:35:24.870816 master-0 kubenswrapper[31830]: I0319 12:35:24.870740 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-lpz7t" Mar 19 12:35:24.885360 master-0 kubenswrapper[31830]: I0319 12:35:24.876129 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 19 12:35:24.885360 master-0 kubenswrapper[31830]: I0319 12:35:24.877157 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 19 12:35:24.908595 master-0 kubenswrapper[31830]: I0319 12:35:24.904134 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77777b4857-hrt6t"] Mar 19 12:35:24.908595 master-0 kubenswrapper[31830]: I0319 12:35:24.906648 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77777b4857-hrt6t" Mar 19 12:35:24.908595 master-0 kubenswrapper[31830]: I0319 12:35:24.907023 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48e729f7-b182-49a0-8d92-174b44693dad-scripts\") pod \"placement-db-sync-lpz7t\" (UID: \"48e729f7-b182-49a0-8d92-174b44693dad\") " pod="openstack/placement-db-sync-lpz7t" Mar 19 12:35:24.908595 master-0 kubenswrapper[31830]: I0319 12:35:24.907112 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gmpq\" (UniqueName: \"kubernetes.io/projected/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-kube-api-access-4gmpq\") pod \"cinder-cce1e-db-sync-n5228\" (UID: \"538593b3-ec2b-4d6e-9f10-3e7add4f7b41\") " pod="openstack/cinder-cce1e-db-sync-n5228" Mar 19 12:35:24.908595 master-0 kubenswrapper[31830]: I0319 12:35:24.907164 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jccrz\" (UniqueName: \"kubernetes.io/projected/48e729f7-b182-49a0-8d92-174b44693dad-kube-api-access-jccrz\") pod \"placement-db-sync-lpz7t\" (UID: \"48e729f7-b182-49a0-8d92-174b44693dad\") " pod="openstack/placement-db-sync-lpz7t" Mar 19 12:35:24.908595 master-0 kubenswrapper[31830]: I0319 12:35:24.907315 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-scripts\") pod \"cinder-cce1e-db-sync-n5228\" (UID: \"538593b3-ec2b-4d6e-9f10-3e7add4f7b41\") " pod="openstack/cinder-cce1e-db-sync-n5228" Mar 19 12:35:24.908595 master-0 kubenswrapper[31830]: I0319 12:35:24.907385 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48e729f7-b182-49a0-8d92-174b44693dad-combined-ca-bundle\") pod \"placement-db-sync-lpz7t\" (UID: \"48e729f7-b182-49a0-8d92-174b44693dad\") " pod="openstack/placement-db-sync-lpz7t" Mar 19 12:35:24.908595 master-0 kubenswrapper[31830]: I0319 12:35:24.907470 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-db-sync-config-data\") pod \"cinder-cce1e-db-sync-n5228\" (UID: \"538593b3-ec2b-4d6e-9f10-3e7add4f7b41\") " pod="openstack/cinder-cce1e-db-sync-n5228" Mar 19 12:35:24.908595 master-0 kubenswrapper[31830]: I0319 12:35:24.907533 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48e729f7-b182-49a0-8d92-174b44693dad-logs\") pod \"placement-db-sync-lpz7t\" (UID: \"48e729f7-b182-49a0-8d92-174b44693dad\") " pod="openstack/placement-db-sync-lpz7t" Mar 19 12:35:24.908595 master-0 kubenswrapper[31830]: I0319 12:35:24.907652 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-config-data\") pod \"cinder-cce1e-db-sync-n5228\" (UID: \"538593b3-ec2b-4d6e-9f10-3e7add4f7b41\") " pod="openstack/cinder-cce1e-db-sync-n5228" Mar 19 12:35:24.908595 master-0 kubenswrapper[31830]: I0319 12:35:24.907738 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-combined-ca-bundle\") pod \"cinder-cce1e-db-sync-n5228\" (UID: \"538593b3-ec2b-4d6e-9f10-3e7add4f7b41\") " pod="openstack/cinder-cce1e-db-sync-n5228" Mar 19 12:35:24.908595 master-0 kubenswrapper[31830]: I0319 12:35:24.908004 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48e729f7-b182-49a0-8d92-174b44693dad-config-data\") pod \"placement-db-sync-lpz7t\" (UID: \"48e729f7-b182-49a0-8d92-174b44693dad\") " pod="openstack/placement-db-sync-lpz7t" Mar 19 12:35:24.908595 master-0 kubenswrapper[31830]: I0319 12:35:24.908089 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-etc-machine-id\") pod \"cinder-cce1e-db-sync-n5228\" (UID: \"538593b3-ec2b-4d6e-9f10-3e7add4f7b41\") " pod="openstack/cinder-cce1e-db-sync-n5228" Mar 19 12:35:24.908595 master-0 kubenswrapper[31830]: I0319 12:35:24.908228 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-etc-machine-id\") pod \"cinder-cce1e-db-sync-n5228\" (UID: \"538593b3-ec2b-4d6e-9f10-3e7add4f7b41\") " pod="openstack/cinder-cce1e-db-sync-n5228" Mar 19 12:35:24.921900 master-0 kubenswrapper[31830]: I0319 12:35:24.912851 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-db-sync-config-data\") pod \"cinder-cce1e-db-sync-n5228\" (UID: \"538593b3-ec2b-4d6e-9f10-3e7add4f7b41\") " pod="openstack/cinder-cce1e-db-sync-n5228" Mar 19 12:35:24.921900 master-0 kubenswrapper[31830]: I0319 12:35:24.915772 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-combined-ca-bundle\") pod \"cinder-cce1e-db-sync-n5228\" (UID: \"538593b3-ec2b-4d6e-9f10-3e7add4f7b41\") " pod="openstack/cinder-cce1e-db-sync-n5228" Mar 19 12:35:24.921900 master-0 kubenswrapper[31830]: I0319 12:35:24.918527 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-scripts\") pod \"cinder-cce1e-db-sync-n5228\" (UID: \"538593b3-ec2b-4d6e-9f10-3e7add4f7b41\") " pod="openstack/cinder-cce1e-db-sync-n5228" Mar 19 12:35:24.936887 master-0 kubenswrapper[31830]: I0319 12:35:24.928311 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-zl2rr" Mar 19 12:35:24.936887 master-0 kubenswrapper[31830]: I0319 12:35:24.935676 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-config-data\") pod \"cinder-cce1e-db-sync-n5228\" (UID: \"538593b3-ec2b-4d6e-9f10-3e7add4f7b41\") " pod="openstack/cinder-cce1e-db-sync-n5228" Mar 19 12:35:24.936887 master-0 kubenswrapper[31830]: I0319 12:35:24.936341 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gmpq\" (UniqueName: \"kubernetes.io/projected/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-kube-api-access-4gmpq\") pod \"cinder-cce1e-db-sync-n5228\" (UID: \"538593b3-ec2b-4d6e-9f10-3e7add4f7b41\") " pod="openstack/cinder-cce1e-db-sync-n5228" Mar 19 12:35:24.958402 master-0 kubenswrapper[31830]: I0319 12:35:24.946042 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cce1e-db-sync-n5228" Mar 19 12:35:25.051820 master-0 kubenswrapper[31830]: I0319 12:35:25.039054 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-lpz7t"] Mar 19 12:35:25.116064 master-0 kubenswrapper[31830]: I0319 12:35:25.092551 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-577795c79c-8r4lh" podUID="28ea1273-426e-46b7-b57f-7d6bbadddd08" containerName="dnsmasq-dns" containerID="cri-o://c6b6c2dfd53f8ce71a8ac8df5e32bb29b6a0598341925ba5374d674dbcfd0c09" gracePeriod=10 Mar 19 12:35:25.126942 master-0 kubenswrapper[31830]: I0319 12:35:25.126774 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48e729f7-b182-49a0-8d92-174b44693dad-config-data\") pod \"placement-db-sync-lpz7t\" (UID: \"48e729f7-b182-49a0-8d92-174b44693dad\") " pod="openstack/placement-db-sync-lpz7t" Mar 19 12:35:25.127057 master-0 kubenswrapper[31830]: I0319 12:35:25.127008 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48e729f7-b182-49a0-8d92-174b44693dad-scripts\") pod \"placement-db-sync-lpz7t\" (UID: \"48e729f7-b182-49a0-8d92-174b44693dad\") " pod="openstack/placement-db-sync-lpz7t" Mar 19 12:35:25.127114 master-0 kubenswrapper[31830]: I0319 12:35:25.127070 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jccrz\" (UniqueName: \"kubernetes.io/projected/48e729f7-b182-49a0-8d92-174b44693dad-kube-api-access-jccrz\") pod \"placement-db-sync-lpz7t\" (UID: \"48e729f7-b182-49a0-8d92-174b44693dad\") " pod="openstack/placement-db-sync-lpz7t" Mar 19 12:35:25.127114 master-0 kubenswrapper[31830]: I0319 12:35:25.127108 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48e729f7-b182-49a0-8d92-174b44693dad-combined-ca-bundle\") pod \"placement-db-sync-lpz7t\" (UID: \"48e729f7-b182-49a0-8d92-174b44693dad\") " pod="openstack/placement-db-sync-lpz7t" Mar 19 12:35:25.127226 master-0 kubenswrapper[31830]: I0319 12:35:25.127181 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48e729f7-b182-49a0-8d92-174b44693dad-logs\") pod \"placement-db-sync-lpz7t\" (UID: \"48e729f7-b182-49a0-8d92-174b44693dad\") " pod="openstack/placement-db-sync-lpz7t" Mar 19 12:35:25.127913 
master-0 kubenswrapper[31830]: I0319 12:35:25.127863 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48e729f7-b182-49a0-8d92-174b44693dad-logs\") pod \"placement-db-sync-lpz7t\" (UID: \"48e729f7-b182-49a0-8d92-174b44693dad\") " pod="openstack/placement-db-sync-lpz7t" Mar 19 12:35:25.156165 master-0 kubenswrapper[31830]: I0319 12:35:25.155256 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48e729f7-b182-49a0-8d92-174b44693dad-combined-ca-bundle\") pod \"placement-db-sync-lpz7t\" (UID: \"48e729f7-b182-49a0-8d92-174b44693dad\") " pod="openstack/placement-db-sync-lpz7t" Mar 19 12:35:25.160949 master-0 kubenswrapper[31830]: I0319 12:35:25.160897 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48e729f7-b182-49a0-8d92-174b44693dad-config-data\") pod \"placement-db-sync-lpz7t\" (UID: \"48e729f7-b182-49a0-8d92-174b44693dad\") " pod="openstack/placement-db-sync-lpz7t" Mar 19 12:35:25.172987 master-0 kubenswrapper[31830]: I0319 12:35:25.172068 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jccrz\" (UniqueName: \"kubernetes.io/projected/48e729f7-b182-49a0-8d92-174b44693dad-kube-api-access-jccrz\") pod \"placement-db-sync-lpz7t\" (UID: \"48e729f7-b182-49a0-8d92-174b44693dad\") " pod="openstack/placement-db-sync-lpz7t" Mar 19 12:35:25.179510 master-0 kubenswrapper[31830]: I0319 12:35:25.179439 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48e729f7-b182-49a0-8d92-174b44693dad-scripts\") pod \"placement-db-sync-lpz7t\" (UID: \"48e729f7-b182-49a0-8d92-174b44693dad\") " pod="openstack/placement-db-sync-lpz7t" Mar 19 12:35:25.258935 master-0 kubenswrapper[31830]: I0319 12:35:25.258047 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77777b4857-hrt6t"] Mar 19 12:35:25.261763 master-0 kubenswrapper[31830]: I0319 12:35:25.261277 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnclb\" (UniqueName: \"kubernetes.io/projected/a3dcd24b-6811-4650-adb2-352c99e50b99-kube-api-access-fnclb\") pod \"dnsmasq-dns-77777b4857-hrt6t\" (UID: \"a3dcd24b-6811-4650-adb2-352c99e50b99\") " pod="openstack/dnsmasq-dns-77777b4857-hrt6t" Mar 19 12:35:25.261763 master-0 kubenswrapper[31830]: I0319 12:35:25.261405 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-ovsdbserver-sb\") pod \"dnsmasq-dns-77777b4857-hrt6t\" (UID: \"a3dcd24b-6811-4650-adb2-352c99e50b99\") " pod="openstack/dnsmasq-dns-77777b4857-hrt6t" Mar 19 12:35:25.261763 master-0 kubenswrapper[31830]: I0319 12:35:25.261483 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-config\") pod \"dnsmasq-dns-77777b4857-hrt6t\" (UID: \"a3dcd24b-6811-4650-adb2-352c99e50b99\") " pod="openstack/dnsmasq-dns-77777b4857-hrt6t" Mar 19 12:35:25.261763 master-0 kubenswrapper[31830]: I0319 12:35:25.261502 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-ovsdbserver-nb\") pod \"dnsmasq-dns-77777b4857-hrt6t\" (UID: \"a3dcd24b-6811-4650-adb2-352c99e50b99\") " pod="openstack/dnsmasq-dns-77777b4857-hrt6t" Mar 19 12:35:25.261763 master-0 kubenswrapper[31830]: I0319 12:35:25.261527 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-edpm-a\") pod \"dnsmasq-dns-77777b4857-hrt6t\" (UID: \"a3dcd24b-6811-4650-adb2-352c99e50b99\") " pod="openstack/dnsmasq-dns-77777b4857-hrt6t" Mar 19 12:35:25.261763 master-0 kubenswrapper[31830]: I0319 12:35:25.261582 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-dns-svc\") pod \"dnsmasq-dns-77777b4857-hrt6t\" (UID: \"a3dcd24b-6811-4650-adb2-352c99e50b99\") " pod="openstack/dnsmasq-dns-77777b4857-hrt6t" Mar 19 12:35:25.261763 master-0 kubenswrapper[31830]: I0319 12:35:25.261623 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-edpm-b\") pod \"dnsmasq-dns-77777b4857-hrt6t\" (UID: \"a3dcd24b-6811-4650-adb2-352c99e50b99\") " pod="openstack/dnsmasq-dns-77777b4857-hrt6t" Mar 19 12:35:25.261763 master-0 kubenswrapper[31830]: I0319 12:35:25.261647 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-dns-swift-storage-0\") pod \"dnsmasq-dns-77777b4857-hrt6t\" (UID: \"a3dcd24b-6811-4650-adb2-352c99e50b99\") " pod="openstack/dnsmasq-dns-77777b4857-hrt6t" Mar 19 12:35:25.334318 master-0 kubenswrapper[31830]: I0319 12:35:25.334203 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-lpz7t" Mar 19 12:35:25.362575 master-0 kubenswrapper[31830]: I0319 12:35:25.362491 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-ovsdbserver-sb\") pod \"dnsmasq-dns-77777b4857-hrt6t\" (UID: \"a3dcd24b-6811-4650-adb2-352c99e50b99\") " pod="openstack/dnsmasq-dns-77777b4857-hrt6t" Mar 19 12:35:25.362857 master-0 kubenswrapper[31830]: I0319 12:35:25.362610 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-config\") pod \"dnsmasq-dns-77777b4857-hrt6t\" (UID: \"a3dcd24b-6811-4650-adb2-352c99e50b99\") " pod="openstack/dnsmasq-dns-77777b4857-hrt6t" Mar 19 12:35:25.362857 master-0 kubenswrapper[31830]: I0319 12:35:25.362639 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-ovsdbserver-nb\") pod \"dnsmasq-dns-77777b4857-hrt6t\" (UID: \"a3dcd24b-6811-4650-adb2-352c99e50b99\") " pod="openstack/dnsmasq-dns-77777b4857-hrt6t" Mar 19 12:35:25.362857 master-0 kubenswrapper[31830]: I0319 12:35:25.362669 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-edpm-a\") pod \"dnsmasq-dns-77777b4857-hrt6t\" (UID: \"a3dcd24b-6811-4650-adb2-352c99e50b99\") " pod="openstack/dnsmasq-dns-77777b4857-hrt6t" Mar 19 12:35:25.362857 master-0 kubenswrapper[31830]: I0319 12:35:25.362715 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-dns-svc\") pod \"dnsmasq-dns-77777b4857-hrt6t\" (UID: \"a3dcd24b-6811-4650-adb2-352c99e50b99\") " pod="openstack/dnsmasq-dns-77777b4857-hrt6t" Mar 19 12:35:25.362857 master-0 kubenswrapper[31830]: I0319 12:35:25.362758 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-edpm-b\") pod \"dnsmasq-dns-77777b4857-hrt6t\" (UID: \"a3dcd24b-6811-4650-adb2-352c99e50b99\") " pod="openstack/dnsmasq-dns-77777b4857-hrt6t" Mar 19 12:35:25.362857 master-0 kubenswrapper[31830]: I0319 12:35:25.362785 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-dns-swift-storage-0\") pod \"dnsmasq-dns-77777b4857-hrt6t\" (UID: \"a3dcd24b-6811-4650-adb2-352c99e50b99\") " pod="openstack/dnsmasq-dns-77777b4857-hrt6t" Mar 19 12:35:25.364443 master-0 kubenswrapper[31830]: I0319 12:35:25.362861 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnclb\" (UniqueName: \"kubernetes.io/projected/a3dcd24b-6811-4650-adb2-352c99e50b99-kube-api-access-fnclb\") pod \"dnsmasq-dns-77777b4857-hrt6t\" (UID: \"a3dcd24b-6811-4650-adb2-352c99e50b99\") " pod="openstack/dnsmasq-dns-77777b4857-hrt6t" Mar 19 12:35:25.364443 master-0 kubenswrapper[31830]: I0319 12:35:25.364099 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-ovsdbserver-sb\") pod \"dnsmasq-dns-77777b4857-hrt6t\" 
(UID: \"a3dcd24b-6811-4650-adb2-352c99e50b99\") " pod="openstack/dnsmasq-dns-77777b4857-hrt6t" Mar 19 12:35:25.364946 master-0 kubenswrapper[31830]: I0319 12:35:25.364924 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-config\") pod \"dnsmasq-dns-77777b4857-hrt6t\" (UID: \"a3dcd24b-6811-4650-adb2-352c99e50b99\") " pod="openstack/dnsmasq-dns-77777b4857-hrt6t" Mar 19 12:35:25.365922 master-0 kubenswrapper[31830]: I0319 12:35:25.365883 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-dns-svc\") pod \"dnsmasq-dns-77777b4857-hrt6t\" (UID: \"a3dcd24b-6811-4650-adb2-352c99e50b99\") " pod="openstack/dnsmasq-dns-77777b4857-hrt6t" Mar 19 12:35:25.366689 master-0 kubenswrapper[31830]: I0319 12:35:25.366656 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-edpm-b\") pod \"dnsmasq-dns-77777b4857-hrt6t\" (UID: \"a3dcd24b-6811-4650-adb2-352c99e50b99\") " pod="openstack/dnsmasq-dns-77777b4857-hrt6t" Mar 19 12:35:25.371177 master-0 kubenswrapper[31830]: I0319 12:35:25.370924 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-ovsdbserver-nb\") pod \"dnsmasq-dns-77777b4857-hrt6t\" (UID: \"a3dcd24b-6811-4650-adb2-352c99e50b99\") " pod="openstack/dnsmasq-dns-77777b4857-hrt6t" Mar 19 12:35:25.371291 master-0 kubenswrapper[31830]: I0319 12:35:25.371215 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-dns-swift-storage-0\") pod \"dnsmasq-dns-77777b4857-hrt6t\" (UID: \"a3dcd24b-6811-4650-adb2-352c99e50b99\") " pod="openstack/dnsmasq-dns-77777b4857-hrt6t" Mar 19 12:35:25.371611 master-0 kubenswrapper[31830]: I0319 12:35:25.371577 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-edpm-a\") pod \"dnsmasq-dns-77777b4857-hrt6t\" (UID: \"a3dcd24b-6811-4650-adb2-352c99e50b99\") " pod="openstack/dnsmasq-dns-77777b4857-hrt6t" Mar 19 12:35:25.386936 master-0 kubenswrapper[31830]: I0319 12:35:25.386885 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnclb\" (UniqueName: \"kubernetes.io/projected/a3dcd24b-6811-4650-adb2-352c99e50b99-kube-api-access-fnclb\") pod \"dnsmasq-dns-77777b4857-hrt6t\" (UID: \"a3dcd24b-6811-4650-adb2-352c99e50b99\") " pod="openstack/dnsmasq-dns-77777b4857-hrt6t" Mar 19 12:35:25.577303 master-0 kubenswrapper[31830]: I0319 12:35:25.574938 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pg69z"] Mar 19 12:35:25.577303 master-0 kubenswrapper[31830]: W0319 12:35:25.575493 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4952f965_eb25_4397_bdd3_b8a75e9eb4ed.slice/crio-647c4f3ace511298076b3f544835bcf3aab86d60b90661b9b9ce9e843bf510a4 WatchSource:0}: Error finding container 647c4f3ace511298076b3f544835bcf3aab86d60b90661b9b9ce9e843bf510a4: Status 404 returned error can't find the container with id 647c4f3ace511298076b3f544835bcf3aab86d60b90661b9b9ce9e843bf510a4 Mar 
19 12:35:25.686154 master-0 kubenswrapper[31830]: I0319 12:35:25.685397 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77777b4857-hrt6t" Mar 19 12:35:25.995929 master-0 kubenswrapper[31830]: I0319 12:35:25.991115 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-zl2rr"] Mar 19 12:35:26.021879 master-0 kubenswrapper[31830]: I0319 12:35:26.013014 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5644f8c597-hdr4z"] Mar 19 12:35:26.040626 master-0 kubenswrapper[31830]: W0319 12:35:26.040024 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3fdc7f4_19ec_41b8_8eea_9f735f221281.slice/crio-9116c96df3ba255200aefa9a69478d1a31dee4cb817cf60928299023dcdb5f82 WatchSource:0}: Error finding container 9116c96df3ba255200aefa9a69478d1a31dee4cb817cf60928299023dcdb5f82: Status 404 returned error can't find the container with id 9116c96df3ba255200aefa9a69478d1a31dee4cb817cf60928299023dcdb5f82 Mar 19 12:35:26.087479 master-0 kubenswrapper[31830]: I0319 12:35:26.087341 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cce1e-db-sync-n5228"] Mar 19 12:35:26.125071 master-0 kubenswrapper[31830]: I0319 12:35:26.122123 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pg69z" event={"ID":"4952f965-eb25-4397-bdd3-b8a75e9eb4ed","Type":"ContainerStarted","Data":"94720d41e4fa7e96d5c44a8a22c4f5e6ec5e00bb25811056d8a600795c74540b"} Mar 19 12:35:26.125071 master-0 kubenswrapper[31830]: I0319 12:35:26.122169 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pg69z" event={"ID":"4952f965-eb25-4397-bdd3-b8a75e9eb4ed","Type":"ContainerStarted","Data":"647c4f3ace511298076b3f544835bcf3aab86d60b90661b9b9ce9e843bf510a4"} Mar 19 12:35:26.128520 master-0 kubenswrapper[31830]: I0319 12:35:26.125752 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5644f8c597-hdr4z" event={"ID":"b3fdc7f4-19ec-41b8-8eea-9f735f221281","Type":"ContainerStarted","Data":"9116c96df3ba255200aefa9a69478d1a31dee4cb817cf60928299023dcdb5f82"} Mar 19 12:35:26.170466 master-0 kubenswrapper[31830]: I0319 12:35:26.170364 31830 generic.go:334] "Generic (PLEG): container finished" podID="28ea1273-426e-46b7-b57f-7d6bbadddd08" containerID="c6b6c2dfd53f8ce71a8ac8df5e32bb29b6a0598341925ba5374d674dbcfd0c09" exitCode=0 Mar 19 12:35:26.170578 master-0 kubenswrapper[31830]: I0319 12:35:26.170528 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-577795c79c-8r4lh" event={"ID":"28ea1273-426e-46b7-b57f-7d6bbadddd08","Type":"ContainerDied","Data":"c6b6c2dfd53f8ce71a8ac8df5e32bb29b6a0598341925ba5374d674dbcfd0c09"} Mar 19 12:35:26.170648 master-0 kubenswrapper[31830]: I0319 12:35:26.170587 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-577795c79c-8r4lh" event={"ID":"28ea1273-426e-46b7-b57f-7d6bbadddd08","Type":"ContainerDied","Data":"a997ce4ba7f16c17e8c8b37397fb5a87126d060f3ff874b60920cf5ad5156b18"} Mar 19 12:35:26.170648 master-0 kubenswrapper[31830]: I0319 12:35:26.170599 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a997ce4ba7f16c17e8c8b37397fb5a87126d060f3ff874b60920cf5ad5156b18" Mar 19 12:35:26.208235 master-0 kubenswrapper[31830]: I0319 12:35:26.192107 31830 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/keystone-bootstrap-pg69z" podStartSLOduration=2.192088482 podStartE2EDuration="2.192088482s" podCreationTimestamp="2026-03-19 12:35:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:35:26.180646748 +0000 UTC m=+1264.729607472" watchObservedRunningTime="2026-03-19 12:35:26.192088482 +0000 UTC m=+1264.741049186" Mar 19 12:35:26.244969 master-0 kubenswrapper[31830]: I0319 12:35:26.241657 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-577795c79c-8r4lh" Mar 19 12:35:26.248912 master-0 kubenswrapper[31830]: I0319 12:35:26.245861 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zl2rr" event={"ID":"14936556-fa0b-48fb-91e5-0ca806871a6c","Type":"ContainerStarted","Data":"ce14fef57edc05c0be17f2e05287e048a72b687d1de36ea9b5b8c2cb1b3a2f80"} Mar 19 12:35:26.384270 master-0 kubenswrapper[31830]: I0319 12:35:26.379499 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-f4e38-default-external-api-0"] Mar 19 12:35:26.384270 master-0 kubenswrapper[31830]: E0319 12:35:26.380047 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ea1273-426e-46b7-b57f-7d6bbadddd08" containerName="init" Mar 19 12:35:26.384270 master-0 kubenswrapper[31830]: I0319 12:35:26.380063 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ea1273-426e-46b7-b57f-7d6bbadddd08" containerName="init" Mar 19 12:35:26.384270 master-0 kubenswrapper[31830]: E0319 12:35:26.380123 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ea1273-426e-46b7-b57f-7d6bbadddd08" containerName="dnsmasq-dns" Mar 19 12:35:26.384270 master-0 kubenswrapper[31830]: I0319 12:35:26.380132 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ea1273-426e-46b7-b57f-7d6bbadddd08" containerName="dnsmasq-dns" Mar 19 12:35:26.384270 master-0 kubenswrapper[31830]: I0319 12:35:26.380398 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="28ea1273-426e-46b7-b57f-7d6bbadddd08" containerName="dnsmasq-dns" Mar 19 12:35:26.384270 master-0 kubenswrapper[31830]: I0319 12:35:26.381890 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:26.393963 master-0 kubenswrapper[31830]: I0319 12:35:26.388401 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 19 12:35:26.393963 master-0 kubenswrapper[31830]: I0319 12:35:26.389437 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 19 12:35:26.393963 master-0 kubenswrapper[31830]: I0319 12:35:26.392351 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-f4e38-default-external-config-data" Mar 19 12:35:26.406254 master-0 kubenswrapper[31830]: I0319 12:35:26.403484 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-lpz7t"] Mar 19 12:35:26.427972 master-0 kubenswrapper[31830]: I0319 12:35:26.425230 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-ovsdbserver-sb\") pod \"28ea1273-426e-46b7-b57f-7d6bbadddd08\" (UID: \"28ea1273-426e-46b7-b57f-7d6bbadddd08\") " Mar 19 12:35:26.427972 master-0 kubenswrapper[31830]: I0319 12:35:26.425309 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-edpm-a\") pod \"28ea1273-426e-46b7-b57f-7d6bbadddd08\" (UID: \"28ea1273-426e-46b7-b57f-7d6bbadddd08\") " Mar 19 12:35:26.427972 master-0 kubenswrapper[31830]: I0319 12:35:26.425427 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfd8p\" (UniqueName: \"kubernetes.io/projected/28ea1273-426e-46b7-b57f-7d6bbadddd08-kube-api-access-bfd8p\") pod \"28ea1273-426e-46b7-b57f-7d6bbadddd08\" (UID: \"28ea1273-426e-46b7-b57f-7d6bbadddd08\") " Mar 19 12:35:26.427972 master-0 kubenswrapper[31830]: I0319 12:35:26.425446 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-edpm-b\") pod \"28ea1273-426e-46b7-b57f-7d6bbadddd08\" (UID: \"28ea1273-426e-46b7-b57f-7d6bbadddd08\") " Mar 19 12:35:26.427972 master-0 kubenswrapper[31830]: I0319 12:35:26.425482 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-config\") pod \"28ea1273-426e-46b7-b57f-7d6bbadddd08\" (UID: \"28ea1273-426e-46b7-b57f-7d6bbadddd08\") " Mar 19 12:35:26.427972 master-0 kubenswrapper[31830]: I0319 12:35:26.425509 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-dns-swift-storage-0\") pod \"28ea1273-426e-46b7-b57f-7d6bbadddd08\" (UID: \"28ea1273-426e-46b7-b57f-7d6bbadddd08\") " Mar 19 12:35:26.427972 master-0 kubenswrapper[31830]: I0319 12:35:26.425621 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-ovsdbserver-nb\") pod \"28ea1273-426e-46b7-b57f-7d6bbadddd08\" (UID: \"28ea1273-426e-46b7-b57f-7d6bbadddd08\") " Mar 19 12:35:26.427972 master-0 kubenswrapper[31830]: I0319 12:35:26.425672 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-dns-svc\") pod \"28ea1273-426e-46b7-b57f-7d6bbadddd08\" (UID: \"28ea1273-426e-46b7-b57f-7d6bbadddd08\") " Mar 19 12:35:26.433831 master-0 kubenswrapper[31830]: I0319 12:35:26.429427 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f4e38-default-external-api-0"] Mar 19 12:35:26.528352 master-0 kubenswrapper[31830]: I0319 12:35:26.528218 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28ea1273-426e-46b7-b57f-7d6bbadddd08-kube-api-access-bfd8p" (OuterVolumeSpecName: "kube-api-access-bfd8p") pod "28ea1273-426e-46b7-b57f-7d6bbadddd08" (UID: "28ea1273-426e-46b7-b57f-7d6bbadddd08"). InnerVolumeSpecName "kube-api-access-bfd8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:35:26.528563 master-0 kubenswrapper[31830]: I0319 12:35:26.528458 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfd8p\" (UniqueName: \"kubernetes.io/projected/28ea1273-426e-46b7-b57f-7d6bbadddd08-kube-api-access-bfd8p\") pod \"28ea1273-426e-46b7-b57f-7d6bbadddd08\" (UID: \"28ea1273-426e-46b7-b57f-7d6bbadddd08\") " Mar 19 12:35:26.528954 master-0 kubenswrapper[31830]: I0319 12:35:26.528921 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-httpd-run\") pod \"glance-f4e38-default-external-api-0\" (UID: \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:26.528954 master-0 kubenswrapper[31830]: W0319 12:35:26.528950 31830 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/28ea1273-426e-46b7-b57f-7d6bbadddd08/volumes/kubernetes.io~projected/kube-api-access-bfd8p Mar 19 12:35:26.529192 master-0 kubenswrapper[31830]: I0319 12:35:26.529037 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-combined-ca-bundle\") pod \"glance-f4e38-default-external-api-0\" (UID: \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:26.529192 master-0 kubenswrapper[31830]: I0319 12:35:26.529066 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-logs\") pod \"glance-f4e38-default-external-api-0\" (UID: \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:26.529192 master-0 kubenswrapper[31830]: I0319 12:35:26.529075 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28ea1273-426e-46b7-b57f-7d6bbadddd08-kube-api-access-bfd8p" (OuterVolumeSpecName: "kube-api-access-bfd8p") pod "28ea1273-426e-46b7-b57f-7d6bbadddd08" (UID: "28ea1273-426e-46b7-b57f-7d6bbadddd08"). InnerVolumeSpecName "kube-api-access-bfd8p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:35:26.529382 master-0 kubenswrapper[31830]: I0319 12:35:26.529252 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-config-data\") pod \"glance-f4e38-default-external-api-0\" (UID: \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:26.529435 master-0 kubenswrapper[31830]: I0319 12:35:26.529405 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbqmh\" (UniqueName: \"kubernetes.io/projected/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-kube-api-access-bbqmh\") pod \"glance-f4e38-default-external-api-0\" (UID: \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:26.529477 master-0 kubenswrapper[31830]: I0319 12:35:26.529463 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-38dd62d2-8408-4ac5-a7b7-e2aa55a152b1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^72f5d1c4-ff8c-497a-97da-4c12e82dcd33\") pod \"glance-f4e38-default-external-api-0\" (UID: \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:26.529514 master-0 kubenswrapper[31830]: I0319 12:35:26.529494 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-scripts\") pod \"glance-f4e38-default-external-api-0\" (UID: \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:26.529545 master-0 kubenswrapper[31830]: I0319 12:35:26.529527 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-public-tls-certs\") pod \"glance-f4e38-default-external-api-0\" (UID: \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:26.568223 master-0 kubenswrapper[31830]: I0319 12:35:26.529649 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfd8p\" (UniqueName: \"kubernetes.io/projected/28ea1273-426e-46b7-b57f-7d6bbadddd08-kube-api-access-bfd8p\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:26.576120 master-0 kubenswrapper[31830]: W0319 12:35:26.576036 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48e729f7_b182_49a0_8d92_174b44693dad.slice/crio-f5d3a30bc425dc190e31971ab04a82603e66a8b8d5061a644b625d6324b86801 WatchSource:0}: Error finding container f5d3a30bc425dc190e31971ab04a82603e66a8b8d5061a644b625d6324b86801: Status 404 returned error can't find the container with id f5d3a30bc425dc190e31971ab04a82603e66a8b8d5061a644b625d6324b86801 Mar 19 12:35:26.604422 master-0 kubenswrapper[31830]: I0319 12:35:26.602091 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77777b4857-hrt6t"] Mar 19 12:35:26.633100 master-0 kubenswrapper[31830]: I0319 12:35:26.633041 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbqmh\" (UniqueName: 
\"kubernetes.io/projected/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-kube-api-access-bbqmh\") pod \"glance-f4e38-default-external-api-0\" (UID: \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:26.633233 master-0 kubenswrapper[31830]: I0319 12:35:26.633109 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-38dd62d2-8408-4ac5-a7b7-e2aa55a152b1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^72f5d1c4-ff8c-497a-97da-4c12e82dcd33\") pod \"glance-f4e38-default-external-api-0\" (UID: \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:26.633233 master-0 kubenswrapper[31830]: I0319 12:35:26.633130 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-scripts\") pod \"glance-f4e38-default-external-api-0\" (UID: \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:26.633233 master-0 kubenswrapper[31830]: I0319 12:35:26.633153 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-public-tls-certs\") pod \"glance-f4e38-default-external-api-0\" (UID: \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:26.633746 master-0 kubenswrapper[31830]: I0319 12:35:26.633720 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-httpd-run\") pod \"glance-f4e38-default-external-api-0\" (UID: \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:26.633821 master-0 kubenswrapper[31830]: I0319 12:35:26.633807 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-combined-ca-bundle\") pod \"glance-f4e38-default-external-api-0\" (UID: \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:26.633857 master-0 kubenswrapper[31830]: I0319 12:35:26.633828 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-logs\") pod \"glance-f4e38-default-external-api-0\" (UID: \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:26.633944 master-0 kubenswrapper[31830]: I0319 12:35:26.633897 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-config-data\") pod \"glance-f4e38-default-external-api-0\" (UID: \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:26.635263 master-0 kubenswrapper[31830]: I0319 12:35:26.635221 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-edpm-a" (OuterVolumeSpecName: "edpm-a") pod "28ea1273-426e-46b7-b57f-7d6bbadddd08" (UID: "28ea1273-426e-46b7-b57f-7d6bbadddd08"). InnerVolumeSpecName "edpm-a". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:26.636697 master-0 kubenswrapper[31830]: I0319 12:35:26.636670 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-httpd-run\") pod \"glance-f4e38-default-external-api-0\" (UID: \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:26.636777 master-0 kubenswrapper[31830]: I0319 12:35:26.636701 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-logs\") pod \"glance-f4e38-default-external-api-0\" (UID: \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:26.637894 master-0 kubenswrapper[31830]: I0319 12:35:26.637867 31830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 19 12:35:26.637951 master-0 kubenswrapper[31830]: I0319 12:35:26.637895 31830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-38dd62d2-8408-4ac5-a7b7-e2aa55a152b1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^72f5d1c4-ff8c-497a-97da-4c12e82dcd33\") pod \"glance-f4e38-default-external-api-0\" (UID: \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/e424bd2a44d69b7b9bbf34d8863c487c6938417f60f1d51f079a71c7d4c379eb/globalmount\"" pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:26.643720 master-0 kubenswrapper[31830]: I0319 12:35:26.643682 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-config-data\") pod \"glance-f4e38-default-external-api-0\" (UID: \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:26.652863 master-0 kubenswrapper[31830]: I0319 12:35:26.652655 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-scripts\") pod \"glance-f4e38-default-external-api-0\" (UID: \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:26.652863 master-0 kubenswrapper[31830]: I0319 12:35:26.652775 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-public-tls-certs\") pod \"glance-f4e38-default-external-api-0\" (UID: \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:26.661546 master-0 kubenswrapper[31830]: I0319 12:35:26.661499 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-combined-ca-bundle\") pod \"glance-f4e38-default-external-api-0\" (UID: \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:26.668235 master-0 kubenswrapper[31830]: I0319 12:35:26.668159 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbqmh\" (UniqueName: \"kubernetes.io/projected/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-kube-api-access-bbqmh\") pod 
\"glance-f4e38-default-external-api-0\" (UID: \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:26.674653 master-0 kubenswrapper[31830]: I0319 12:35:26.674557 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-config" (OuterVolumeSpecName: "config") pod "28ea1273-426e-46b7-b57f-7d6bbadddd08" (UID: "28ea1273-426e-46b7-b57f-7d6bbadddd08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:26.717366 master-0 kubenswrapper[31830]: I0319 12:35:26.717207 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "28ea1273-426e-46b7-b57f-7d6bbadddd08" (UID: "28ea1273-426e-46b7-b57f-7d6bbadddd08"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:26.736784 master-0 kubenswrapper[31830]: I0319 12:35:26.736184 31830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-config\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:26.736784 master-0 kubenswrapper[31830]: I0319 12:35:26.736233 31830 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:26.736784 master-0 kubenswrapper[31830]: I0319 12:35:26.736248 31830 reconciler_common.go:293] "Volume detached for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-edpm-a\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:26.742219 master-0 kubenswrapper[31830]: I0319 12:35:26.739813 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "28ea1273-426e-46b7-b57f-7d6bbadddd08" (UID: "28ea1273-426e-46b7-b57f-7d6bbadddd08"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:26.788458 master-0 kubenswrapper[31830]: I0319 12:35:26.788397 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "28ea1273-426e-46b7-b57f-7d6bbadddd08" (UID: "28ea1273-426e-46b7-b57f-7d6bbadddd08"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:26.803557 master-0 kubenswrapper[31830]: I0319 12:35:26.803487 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-edpm-b" (OuterVolumeSpecName: "edpm-b") pod "28ea1273-426e-46b7-b57f-7d6bbadddd08" (UID: "28ea1273-426e-46b7-b57f-7d6bbadddd08"). InnerVolumeSpecName "edpm-b". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:26.812341 master-0 kubenswrapper[31830]: I0319 12:35:26.812211 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "28ea1273-426e-46b7-b57f-7d6bbadddd08" (UID: "28ea1273-426e-46b7-b57f-7d6bbadddd08"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:26.840846 master-0 kubenswrapper[31830]: I0319 12:35:26.839691 31830 reconciler_common.go:293] "Volume detached for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-edpm-b\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:26.840846 master-0 kubenswrapper[31830]: I0319 12:35:26.839746 31830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:26.840846 master-0 kubenswrapper[31830]: I0319 12:35:26.839763 31830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:26.840846 master-0 kubenswrapper[31830]: I0319 12:35:26.839776 31830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28ea1273-426e-46b7-b57f-7d6bbadddd08-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:27.267626 master-0 kubenswrapper[31830]: I0319 12:35:27.267208 31830 generic.go:334] "Generic (PLEG): container finished" podID="a3dcd24b-6811-4650-adb2-352c99e50b99" containerID="329ba7480b6e708e250c679c2559841524a126d289614d3d443afda9ed16ada0" exitCode=0 Mar 19 12:35:27.267626 master-0 kubenswrapper[31830]: I0319 12:35:27.267313 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77777b4857-hrt6t" event={"ID":"a3dcd24b-6811-4650-adb2-352c99e50b99","Type":"ContainerDied","Data":"329ba7480b6e708e250c679c2559841524a126d289614d3d443afda9ed16ada0"} Mar 19 12:35:27.267626 master-0 kubenswrapper[31830]: I0319 12:35:27.267346 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77777b4857-hrt6t" event={"ID":"a3dcd24b-6811-4650-adb2-352c99e50b99","Type":"ContainerStarted","Data":"113f26a344afc36dfef635a282617bb09b51c191b4c1d3a109a14bf7007e4b37"} Mar 19 12:35:27.273503 master-0 kubenswrapper[31830]: I0319 12:35:27.273445 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-db-sync-n5228" event={"ID":"538593b3-ec2b-4d6e-9f10-3e7add4f7b41","Type":"ContainerStarted","Data":"9ec2cbb39f90eeb811a4d3ba067f87086eed7f624be729dbc891d8c3e491d37a"} Mar 19 12:35:27.277220 master-0 kubenswrapper[31830]: I0319 12:35:27.277188 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lpz7t" event={"ID":"48e729f7-b182-49a0-8d92-174b44693dad","Type":"ContainerStarted","Data":"f5d3a30bc425dc190e31971ab04a82603e66a8b8d5061a644b625d6324b86801"} Mar 19 12:35:27.281088 master-0 kubenswrapper[31830]: I0319 12:35:27.280897 31830 generic.go:334] "Generic (PLEG): container finished" podID="b3fdc7f4-19ec-41b8-8eea-9f735f221281" containerID="e219e47f6a7dcbf9d4c11a890babf77021ccbf4592c057132ddd709747dac581" exitCode=0 Mar 19 12:35:27.281088 master-0 kubenswrapper[31830]: I0319 12:35:27.280947 
31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5644f8c597-hdr4z" event={"ID":"b3fdc7f4-19ec-41b8-8eea-9f735f221281","Type":"ContainerDied","Data":"e219e47f6a7dcbf9d4c11a890babf77021ccbf4592c057132ddd709747dac581"} Mar 19 12:35:27.284491 master-0 kubenswrapper[31830]: I0319 12:35:27.284143 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zl2rr" event={"ID":"14936556-fa0b-48fb-91e5-0ca806871a6c","Type":"ContainerStarted","Data":"b24b3a3e71f958f5220aefdf55eef0c7125e6e352b028eb2a67ee1354e09a18c"} Mar 19 12:35:27.284491 master-0 kubenswrapper[31830]: I0319 12:35:27.284328 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-577795c79c-8r4lh" Mar 19 12:35:27.355641 master-0 kubenswrapper[31830]: I0319 12:35:27.355491 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-zl2rr" podStartSLOduration=3.355470013 podStartE2EDuration="3.355470013s" podCreationTimestamp="2026-03-19 12:35:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:35:27.336648788 +0000 UTC m=+1265.885609492" watchObservedRunningTime="2026-03-19 12:35:27.355470013 +0000 UTC m=+1265.904430717" Mar 19 12:35:27.537927 master-0 kubenswrapper[31830]: I0319 12:35:27.523024 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-577795c79c-8r4lh"] Mar 19 12:35:27.651230 master-0 kubenswrapper[31830]: I0319 12:35:27.649330 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-577795c79c-8r4lh"] Mar 19 12:35:27.668015 master-0 kubenswrapper[31830]: I0319 12:35:27.667911 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-f4e38-default-internal-api-0"] Mar 19 12:35:27.670172 master-0 kubenswrapper[31830]: I0319 12:35:27.669993 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:27.672615 master-0 kubenswrapper[31830]: I0319 12:35:27.672572 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-f4e38-default-internal-config-data" Mar 19 12:35:27.672615 master-0 kubenswrapper[31830]: I0319 12:35:27.672593 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 19 12:35:27.709042 master-0 kubenswrapper[31830]: I0319 12:35:27.707404 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28ea1273-426e-46b7-b57f-7d6bbadddd08" path="/var/lib/kubelet/pods/28ea1273-426e-46b7-b57f-7d6bbadddd08/volumes" Mar 19 12:35:27.709042 master-0 kubenswrapper[31830]: I0319 12:35:27.708342 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f4e38-default-internal-api-0"] Mar 19 12:35:27.983322 master-0 kubenswrapper[31830]: I0319 12:35:27.983230 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-f4e38-default-external-api-0"] Mar 19 12:35:27.984115 master-0 kubenswrapper[31830]: E0319 12:35:27.984062 31830 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[glance], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-f4e38-default-external-api-0" podUID="c556fbff-d2d4-4de9-a7e7-8d57a23cfee7" Mar 19 12:35:28.064499 master-0 kubenswrapper[31830]: I0319 12:35:28.064071 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-38dd62d2-8408-4ac5-a7b7-e2aa55a152b1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^72f5d1c4-ff8c-497a-97da-4c12e82dcd33\") pod \"glance-f4e38-default-external-api-0\" (UID: \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:28.120339 master-0 kubenswrapper[31830]: I0319 12:35:28.120272 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e5a9bb1b-8030-49b3-b381-c2fe5809f953-httpd-run\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:28.120585 master-0 kubenswrapper[31830]: I0319 12:35:28.120360 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a9bb1b-8030-49b3-b381-c2fe5809f953-combined-ca-bundle\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:28.120585 master-0 kubenswrapper[31830]: I0319 12:35:28.120546 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5a9bb1b-8030-49b3-b381-c2fe5809f953-scripts\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:28.120883 master-0 kubenswrapper[31830]: I0319 12:35:28.120611 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4b2w\" (UniqueName: \"kubernetes.io/projected/e5a9bb1b-8030-49b3-b381-c2fe5809f953-kube-api-access-n4b2w\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\") " 
pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:28.120883 master-0 kubenswrapper[31830]: I0319 12:35:28.120636 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a9bb1b-8030-49b3-b381-c2fe5809f953-config-data\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:28.120883 master-0 kubenswrapper[31830]: I0319 12:35:28.120715 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5a9bb1b-8030-49b3-b381-c2fe5809f953-logs\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:28.120883 master-0 kubenswrapper[31830]: I0319 12:35:28.120753 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5a9bb1b-8030-49b3-b381-c2fe5809f953-internal-tls-certs\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:28.120883 master-0 kubenswrapper[31830]: I0319 12:35:28.120831 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a65517da-f83f-4270-b394-d7175eb38204\" (UniqueName: \"kubernetes.io/csi/topolvm.io^19328693-1987-4217-9b35-24f3b480bfc3\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:28.133149 master-0 kubenswrapper[31830]: I0319 12:35:28.133103 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5644f8c597-hdr4z" Mar 19 12:35:28.222808 master-0 kubenswrapper[31830]: I0319 12:35:28.222745 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-edpm-b\") pod \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\" (UID: \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\") " Mar 19 12:35:28.223024 master-0 kubenswrapper[31830]: I0319 12:35:28.222848 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-config\") pod \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\" (UID: \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\") " Mar 19 12:35:28.223024 master-0 kubenswrapper[31830]: I0319 12:35:28.222890 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-edpm-a\") pod \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\" (UID: \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\") " Mar 19 12:35:28.223024 master-0 kubenswrapper[31830]: I0319 12:35:28.222992 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-ovsdbserver-sb\") pod \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\" (UID: \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\") " Mar 19 12:35:28.223128 master-0 kubenswrapper[31830]: I0319 12:35:28.223102 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-dns-svc\") pod \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\" (UID: \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\") " Mar 19 12:35:28.223170 master-0 kubenswrapper[31830]: I0319 12:35:28.223133 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8csr8\" (UniqueName: \"kubernetes.io/projected/b3fdc7f4-19ec-41b8-8eea-9f735f221281-kube-api-access-8csr8\") pod \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\" (UID: \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\") " Mar 19 12:35:28.223570 master-0 kubenswrapper[31830]: I0319 12:35:28.223293 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-dns-swift-storage-0\") pod \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\" (UID: \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\") " Mar 19 12:35:28.223570 master-0 kubenswrapper[31830]: I0319 12:35:28.223321 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-ovsdbserver-nb\") pod \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\" (UID: \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\") " Mar 19 12:35:28.223680 master-0 kubenswrapper[31830]: I0319 12:35:28.223661 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5a9bb1b-8030-49b3-b381-c2fe5809f953-scripts\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:28.223823 master-0 kubenswrapper[31830]: I0319 12:35:28.223718 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/e5a9bb1b-8030-49b3-b381-c2fe5809f953-config-data\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:28.223823 master-0 kubenswrapper[31830]: I0319 12:35:28.223787 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4b2w\" (UniqueName: \"kubernetes.io/projected/e5a9bb1b-8030-49b3-b381-c2fe5809f953-kube-api-access-n4b2w\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:28.223915 master-0 kubenswrapper[31830]: I0319 12:35:28.223870 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5a9bb1b-8030-49b3-b381-c2fe5809f953-logs\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:28.223948 master-0 kubenswrapper[31830]: I0319 12:35:28.223911 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5a9bb1b-8030-49b3-b381-c2fe5809f953-internal-tls-certs\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:28.224205 master-0 kubenswrapper[31830]: I0319 12:35:28.224093 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e5a9bb1b-8030-49b3-b381-c2fe5809f953-httpd-run\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:28.224205 master-0 kubenswrapper[31830]: I0319 12:35:28.224141 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a9bb1b-8030-49b3-b381-c2fe5809f953-combined-ca-bundle\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:28.225339 master-0 kubenswrapper[31830]: I0319 12:35:28.225298 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5a9bb1b-8030-49b3-b381-c2fe5809f953-logs\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:28.227327 master-0 kubenswrapper[31830]: I0319 12:35:28.227280 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3fdc7f4-19ec-41b8-8eea-9f735f221281-kube-api-access-8csr8" (OuterVolumeSpecName: "kube-api-access-8csr8") pod "b3fdc7f4-19ec-41b8-8eea-9f735f221281" (UID: "b3fdc7f4-19ec-41b8-8eea-9f735f221281"). InnerVolumeSpecName "kube-api-access-8csr8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:35:28.227672 master-0 kubenswrapper[31830]: I0319 12:35:28.227588 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e5a9bb1b-8030-49b3-b381-c2fe5809f953-httpd-run\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:28.238107 master-0 kubenswrapper[31830]: I0319 12:35:28.238051 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a9bb1b-8030-49b3-b381-c2fe5809f953-combined-ca-bundle\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:28.240445 master-0 kubenswrapper[31830]: I0319 12:35:28.240396 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a9bb1b-8030-49b3-b381-c2fe5809f953-config-data\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:28.242755 master-0 kubenswrapper[31830]: I0319 12:35:28.242715 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5a9bb1b-8030-49b3-b381-c2fe5809f953-scripts\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:28.261760 master-0 kubenswrapper[31830]: I0319 12:35:28.261681 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5a9bb1b-8030-49b3-b381-c2fe5809f953-internal-tls-certs\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:28.268095 master-0 kubenswrapper[31830]: I0319 12:35:28.268000 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-edpm-a" (OuterVolumeSpecName: "edpm-a") pod "b3fdc7f4-19ec-41b8-8eea-9f735f221281" (UID: "b3fdc7f4-19ec-41b8-8eea-9f735f221281"). InnerVolumeSpecName "edpm-a". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:28.278482 master-0 kubenswrapper[31830]: I0319 12:35:28.278416 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-edpm-b" (OuterVolumeSpecName: "edpm-b") pod "b3fdc7f4-19ec-41b8-8eea-9f735f221281" (UID: "b3fdc7f4-19ec-41b8-8eea-9f735f221281"). InnerVolumeSpecName "edpm-b". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:28.296843 master-0 kubenswrapper[31830]: I0319 12:35:28.296781 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b3fdc7f4-19ec-41b8-8eea-9f735f221281" (UID: "b3fdc7f4-19ec-41b8-8eea-9f735f221281"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:28.307624 master-0 kubenswrapper[31830]: I0319 12:35:28.307568 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5644f8c597-hdr4z" event={"ID":"b3fdc7f4-19ec-41b8-8eea-9f735f221281","Type":"ContainerDied","Data":"9116c96df3ba255200aefa9a69478d1a31dee4cb817cf60928299023dcdb5f82"} Mar 19 12:35:28.307980 master-0 kubenswrapper[31830]: I0319 12:35:28.307646 31830 scope.go:117] "RemoveContainer" containerID="e219e47f6a7dcbf9d4c11a890babf77021ccbf4592c057132ddd709747dac581" Mar 19 12:35:28.307980 master-0 kubenswrapper[31830]: I0319 12:35:28.307789 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5644f8c597-hdr4z" Mar 19 12:35:28.312038 master-0 kubenswrapper[31830]: I0319 12:35:28.312008 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:28.313700 master-0 kubenswrapper[31830]: I0319 12:35:28.313671 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77777b4857-hrt6t" event={"ID":"a3dcd24b-6811-4650-adb2-352c99e50b99","Type":"ContainerStarted","Data":"fac1dfd8fe49e8139b79a255b8309437775b2298893d36f2236b561952a3d8e9"} Mar 19 12:35:28.313784 master-0 kubenswrapper[31830]: I0319 12:35:28.313725 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77777b4857-hrt6t" Mar 19 12:35:28.326534 master-0 kubenswrapper[31830]: I0319 12:35:28.326375 31830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:28.326534 master-0 kubenswrapper[31830]: I0319 12:35:28.326414 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8csr8\" (UniqueName: \"kubernetes.io/projected/b3fdc7f4-19ec-41b8-8eea-9f735f221281-kube-api-access-8csr8\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:28.326534 master-0 kubenswrapper[31830]: I0319 12:35:28.326430 31830 reconciler_common.go:293] "Volume detached for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-edpm-b\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:28.326534 master-0 kubenswrapper[31830]: I0319 12:35:28.326443 31830 reconciler_common.go:293] "Volume detached for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-edpm-a\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:28.327458 master-0 kubenswrapper[31830]: I0319 12:35:28.326578 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:28.428518 master-0 kubenswrapper[31830]: I0319 12:35:28.427892 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbqmh\" (UniqueName: \"kubernetes.io/projected/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-kube-api-access-bbqmh\") pod \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\" (UID: \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\") " Mar 19 12:35:28.428771 master-0 kubenswrapper[31830]: I0319 12:35:28.428534 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-public-tls-certs\") pod \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\" (UID: \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\") " Mar 19 12:35:28.428846 master-0 kubenswrapper[31830]: I0319 12:35:28.428782 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-config-data\") pod \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\" (UID: \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\") " Mar 19 12:35:28.428846 master-0 kubenswrapper[31830]: I0319 12:35:28.428840 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-scripts\") pod \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\" (UID: \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\") " Mar 19 12:35:28.429243 master-0 kubenswrapper[31830]: I0319 12:35:28.428936 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-httpd-run\") pod \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\" (UID: \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\") " Mar 19 12:35:28.429243 master-0 kubenswrapper[31830]: I0319 12:35:28.428965 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-combined-ca-bundle\") pod \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\" (UID: \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\") " Mar 19 12:35:28.429243 master-0 kubenswrapper[31830]: I0319 12:35:28.429004 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-logs\") pod \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\" (UID: \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\") " Mar 19 12:35:28.432706 master-0 kubenswrapper[31830]: I0319 12:35:28.432622 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c556fbff-d2d4-4de9-a7e7-8d57a23cfee7" (UID: "c556fbff-d2d4-4de9-a7e7-8d57a23cfee7"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 12:35:28.437628 master-0 kubenswrapper[31830]: I0319 12:35:28.437561 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-logs" (OuterVolumeSpecName: "logs") pod "c556fbff-d2d4-4de9-a7e7-8d57a23cfee7" (UID: "c556fbff-d2d4-4de9-a7e7-8d57a23cfee7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 12:35:28.534816 master-0 kubenswrapper[31830]: I0319 12:35:28.534571 31830 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-httpd-run\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:28.534816 master-0 kubenswrapper[31830]: I0319 12:35:28.534737 31830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-logs\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:28.550814 master-0 kubenswrapper[31830]: I0319 12:35:28.550430 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-config-data" (OuterVolumeSpecName: "config-data") pod "c556fbff-d2d4-4de9-a7e7-8d57a23cfee7" (UID: "c556fbff-d2d4-4de9-a7e7-8d57a23cfee7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:35:28.550814 master-0 kubenswrapper[31830]: I0319 12:35:28.550512 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c556fbff-d2d4-4de9-a7e7-8d57a23cfee7" (UID: "c556fbff-d2d4-4de9-a7e7-8d57a23cfee7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:35:28.550814 master-0 kubenswrapper[31830]: I0319 12:35:28.550563 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c556fbff-d2d4-4de9-a7e7-8d57a23cfee7" (UID: "c556fbff-d2d4-4de9-a7e7-8d57a23cfee7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:35:28.550814 master-0 kubenswrapper[31830]: I0319 12:35:28.550622 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-kube-api-access-bbqmh" (OuterVolumeSpecName: "kube-api-access-bbqmh") pod "c556fbff-d2d4-4de9-a7e7-8d57a23cfee7" (UID: "c556fbff-d2d4-4de9-a7e7-8d57a23cfee7"). InnerVolumeSpecName "kube-api-access-bbqmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:35:28.550814 master-0 kubenswrapper[31830]: I0319 12:35:28.550583 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-scripts" (OuterVolumeSpecName: "scripts") pod "c556fbff-d2d4-4de9-a7e7-8d57a23cfee7" (UID: "c556fbff-d2d4-4de9-a7e7-8d57a23cfee7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:35:28.551674 master-0 kubenswrapper[31830]: I0319 12:35:28.551043 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b3fdc7f4-19ec-41b8-8eea-9f735f221281" (UID: "b3fdc7f4-19ec-41b8-8eea-9f735f221281"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:28.551674 master-0 kubenswrapper[31830]: E0319 12:35:28.551600 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-config podName:b3fdc7f4-19ec-41b8-8eea-9f735f221281 nodeName:}" failed. No retries permitted until 2026-03-19 12:35:29.051527615 +0000 UTC m=+1267.600488529 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config" (UniqueName: "kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-config") pod "b3fdc7f4-19ec-41b8-8eea-9f735f221281" (UID: "b3fdc7f4-19ec-41b8-8eea-9f735f221281") : error deleting /var/lib/kubelet/pods/b3fdc7f4-19ec-41b8-8eea-9f735f221281/volume-subpaths: remove /var/lib/kubelet/pods/b3fdc7f4-19ec-41b8-8eea-9f735f221281/volume-subpaths: no such file or directory Mar 19 12:35:28.551674 master-0 kubenswrapper[31830]: I0319 12:35:28.551608 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b3fdc7f4-19ec-41b8-8eea-9f735f221281" (UID: "b3fdc7f4-19ec-41b8-8eea-9f735f221281"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:28.551877 master-0 kubenswrapper[31830]: I0319 12:35:28.551787 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b3fdc7f4-19ec-41b8-8eea-9f735f221281" (UID: "b3fdc7f4-19ec-41b8-8eea-9f735f221281"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:28.637057 master-0 kubenswrapper[31830]: I0319 12:35:28.636754 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^72f5d1c4-ff8c-497a-97da-4c12e82dcd33\") pod \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\" (UID: \"c556fbff-d2d4-4de9-a7e7-8d57a23cfee7\") " Mar 19 12:35:28.639844 master-0 kubenswrapper[31830]: I0319 12:35:28.637434 31830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:28.639844 master-0 kubenswrapper[31830]: I0319 12:35:28.637471 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbqmh\" (UniqueName: \"kubernetes.io/projected/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-kube-api-access-bbqmh\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:28.639844 master-0 kubenswrapper[31830]: I0319 12:35:28.637486 31830 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:28.639844 master-0 kubenswrapper[31830]: I0319 12:35:28.637498 31830 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:28.639844 master-0 kubenswrapper[31830]: I0319 12:35:28.637510 31830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:28.639844 master-0 kubenswrapper[31830]: I0319 12:35:28.637523 31830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:28.639844 master-0 kubenswrapper[31830]: I0319 12:35:28.637543 31830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:28.639844 master-0 kubenswrapper[31830]: I0319 12:35:28.637556 31830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:28.665348 master-0 kubenswrapper[31830]: I0319 12:35:28.665284 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^72f5d1c4-ff8c-497a-97da-4c12e82dcd33" (OuterVolumeSpecName: "glance") pod "c556fbff-d2d4-4de9-a7e7-8d57a23cfee7" (UID: "c556fbff-d2d4-4de9-a7e7-8d57a23cfee7"). InnerVolumeSpecName "pvc-38dd62d2-8408-4ac5-a7b7-e2aa55a152b1". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 19 12:35:28.741028 master-0 kubenswrapper[31830]: I0319 12:35:28.740768 31830 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-38dd62d2-8408-4ac5-a7b7-e2aa55a152b1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^72f5d1c4-ff8c-497a-97da-4c12e82dcd33\") on node \"master-0\" " Mar 19 12:35:28.779969 master-0 kubenswrapper[31830]: I0319 12:35:28.779910 31830 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 19 12:35:28.780368 master-0 kubenswrapper[31830]: I0319 12:35:28.780335 31830 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-38dd62d2-8408-4ac5-a7b7-e2aa55a152b1" (UniqueName: "kubernetes.io/csi/topolvm.io^72f5d1c4-ff8c-497a-97da-4c12e82dcd33") on node "master-0" Mar 19 12:35:28.843569 master-0 kubenswrapper[31830]: I0319 12:35:28.843355 31830 reconciler_common.go:293] "Volume detached for volume \"pvc-38dd62d2-8408-4ac5-a7b7-e2aa55a152b1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^72f5d1c4-ff8c-497a-97da-4c12e82dcd33\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:29.052335 master-0 kubenswrapper[31830]: I0319 12:35:29.050890 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a65517da-f83f-4270-b394-d7175eb38204\" (UniqueName: \"kubernetes.io/csi/topolvm.io^19328693-1987-4217-9b35-24f3b480bfc3\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:29.057481 master-0 kubenswrapper[31830]: I0319 12:35:29.057426 31830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 19 12:35:29.057481 master-0 kubenswrapper[31830]: I0319 12:35:29.057476 31830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a65517da-f83f-4270-b394-d7175eb38204\" (UniqueName: \"kubernetes.io/csi/topolvm.io^19328693-1987-4217-9b35-24f3b480bfc3\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/a297acd8689bd9435b3ef7c4521a212d0a62d14f63738b5b80f182076c3660ff/globalmount\"" pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:29.072876 master-0 kubenswrapper[31830]: I0319 12:35:29.072422 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4b2w\" (UniqueName: \"kubernetes.io/projected/e5a9bb1b-8030-49b3-b381-c2fe5809f953-kube-api-access-n4b2w\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:29.153722 master-0 kubenswrapper[31830]: I0319 12:35:29.152907 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-config\") pod \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\" (UID: \"b3fdc7f4-19ec-41b8-8eea-9f735f221281\") " Mar 19 12:35:29.153722 master-0 kubenswrapper[31830]: I0319 12:35:29.153442 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-config" (OuterVolumeSpecName: "config") pod "b3fdc7f4-19ec-41b8-8eea-9f735f221281" (UID: "b3fdc7f4-19ec-41b8-8eea-9f735f221281"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:29.153722 master-0 kubenswrapper[31830]: I0319 12:35:29.153673 31830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3fdc7f4-19ec-41b8-8eea-9f735f221281-config\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:29.346325 master-0 kubenswrapper[31830]: I0319 12:35:29.346278 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:30.437942 master-0 kubenswrapper[31830]: I0319 12:35:30.437743 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a65517da-f83f-4270-b394-d7175eb38204\" (UniqueName: \"kubernetes.io/csi/topolvm.io^19328693-1987-4217-9b35-24f3b480bfc3\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:30.617237 master-0 kubenswrapper[31830]: I0319 12:35:30.617056 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-f4e38-default-internal-api-0"] Mar 19 12:35:30.619382 master-0 kubenswrapper[31830]: E0319 12:35:30.619339 31830 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[glance], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-f4e38-default-internal-api-0" podUID="e5a9bb1b-8030-49b3-b381-c2fe5809f953" Mar 19 12:35:31.372059 master-0 kubenswrapper[31830]: I0319 12:35:31.371983 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:31.389465 master-0 kubenswrapper[31830]: I0319 12:35:31.389418 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:31.404576 master-0 kubenswrapper[31830]: I0319 12:35:31.404476 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a9bb1b-8030-49b3-b381-c2fe5809f953-config-data\") pod \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\" (UID: \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\") " Mar 19 12:35:31.404576 master-0 kubenswrapper[31830]: I0319 12:35:31.404552 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e5a9bb1b-8030-49b3-b381-c2fe5809f953-httpd-run\") pod \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\" (UID: \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\") " Mar 19 12:35:31.404892 master-0 kubenswrapper[31830]: I0319 12:35:31.404733 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^19328693-1987-4217-9b35-24f3b480bfc3\") pod \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\" (UID: \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\") " Mar 19 12:35:31.404892 master-0 kubenswrapper[31830]: I0319 12:35:31.404881 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a9bb1b-8030-49b3-b381-c2fe5809f953-combined-ca-bundle\") pod \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\" (UID: \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\") " Mar 19 12:35:31.404992 master-0 kubenswrapper[31830]: I0319 12:35:31.404916 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5a9bb1b-8030-49b3-b381-c2fe5809f953-logs\") pod \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\" (UID: \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\") " Mar 19 12:35:31.405028 master-0 kubenswrapper[31830]: I0319 12:35:31.405008 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5a9bb1b-8030-49b3-b381-c2fe5809f953-scripts\") pod \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\" (UID: \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\") " Mar 19 12:35:31.405115 master-0 kubenswrapper[31830]: I0319 12:35:31.405092 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4b2w\" (UniqueName: \"kubernetes.io/projected/e5a9bb1b-8030-49b3-b381-c2fe5809f953-kube-api-access-n4b2w\") pod \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\" (UID: \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\") " Mar 19 12:35:31.405219 master-0 kubenswrapper[31830]: I0319 12:35:31.405186 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5a9bb1b-8030-49b3-b381-c2fe5809f953-internal-tls-certs\") pod \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\" (UID: \"e5a9bb1b-8030-49b3-b381-c2fe5809f953\") " Mar 19 12:35:31.405834 master-0 kubenswrapper[31830]: I0319 12:35:31.405431 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5a9bb1b-8030-49b3-b381-c2fe5809f953-logs" (OuterVolumeSpecName: "logs") pod "e5a9bb1b-8030-49b3-b381-c2fe5809f953" (UID: "e5a9bb1b-8030-49b3-b381-c2fe5809f953"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 12:35:31.405834 master-0 kubenswrapper[31830]: I0319 12:35:31.405581 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5a9bb1b-8030-49b3-b381-c2fe5809f953-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e5a9bb1b-8030-49b3-b381-c2fe5809f953" (UID: "e5a9bb1b-8030-49b3-b381-c2fe5809f953"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 12:35:31.405941 master-0 kubenswrapper[31830]: I0319 12:35:31.405918 31830 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e5a9bb1b-8030-49b3-b381-c2fe5809f953-httpd-run\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:31.405941 master-0 kubenswrapper[31830]: I0319 12:35:31.405938 31830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5a9bb1b-8030-49b3-b381-c2fe5809f953-logs\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:31.408923 master-0 kubenswrapper[31830]: I0319 12:35:31.408741 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5a9bb1b-8030-49b3-b381-c2fe5809f953-kube-api-access-n4b2w" (OuterVolumeSpecName: "kube-api-access-n4b2w") pod "e5a9bb1b-8030-49b3-b381-c2fe5809f953" (UID: "e5a9bb1b-8030-49b3-b381-c2fe5809f953"). InnerVolumeSpecName "kube-api-access-n4b2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:35:31.411185 master-0 kubenswrapper[31830]: I0319 12:35:31.411145 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a9bb1b-8030-49b3-b381-c2fe5809f953-scripts" (OuterVolumeSpecName: "scripts") pod "e5a9bb1b-8030-49b3-b381-c2fe5809f953" (UID: "e5a9bb1b-8030-49b3-b381-c2fe5809f953"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:35:31.411356 master-0 kubenswrapper[31830]: I0319 12:35:31.411307 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a9bb1b-8030-49b3-b381-c2fe5809f953-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5a9bb1b-8030-49b3-b381-c2fe5809f953" (UID: "e5a9bb1b-8030-49b3-b381-c2fe5809f953"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:35:31.412167 master-0 kubenswrapper[31830]: I0319 12:35:31.412124 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a9bb1b-8030-49b3-b381-c2fe5809f953-config-data" (OuterVolumeSpecName: "config-data") pod "e5a9bb1b-8030-49b3-b381-c2fe5809f953" (UID: "e5a9bb1b-8030-49b3-b381-c2fe5809f953"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:35:31.414319 master-0 kubenswrapper[31830]: I0319 12:35:31.414275 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a9bb1b-8030-49b3-b381-c2fe5809f953-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e5a9bb1b-8030-49b3-b381-c2fe5809f953" (UID: "e5a9bb1b-8030-49b3-b381-c2fe5809f953"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:35:31.429806 master-0 kubenswrapper[31830]: I0319 12:35:31.429723 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^19328693-1987-4217-9b35-24f3b480bfc3" (OuterVolumeSpecName: "glance") pod "e5a9bb1b-8030-49b3-b381-c2fe5809f953" (UID: "e5a9bb1b-8030-49b3-b381-c2fe5809f953"). InnerVolumeSpecName "pvc-a65517da-f83f-4270-b394-d7175eb38204". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 19 12:35:31.507960 master-0 kubenswrapper[31830]: I0319 12:35:31.507873 31830 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5a9bb1b-8030-49b3-b381-c2fe5809f953-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:31.507960 master-0 kubenswrapper[31830]: I0319 12:35:31.507934 31830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a9bb1b-8030-49b3-b381-c2fe5809f953-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:31.508562 master-0 kubenswrapper[31830]: I0319 12:35:31.507986 31830 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a65517da-f83f-4270-b394-d7175eb38204\" (UniqueName: \"kubernetes.io/csi/topolvm.io^19328693-1987-4217-9b35-24f3b480bfc3\") on node \"master-0\" " Mar 19 12:35:31.508562 master-0 kubenswrapper[31830]: I0319 12:35:31.508000 31830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a9bb1b-8030-49b3-b381-c2fe5809f953-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:31.508562 master-0 kubenswrapper[31830]: I0319 12:35:31.508010 31830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5a9bb1b-8030-49b3-b381-c2fe5809f953-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:31.508562 master-0 kubenswrapper[31830]: I0319 12:35:31.508021 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4b2w\" (UniqueName: \"kubernetes.io/projected/e5a9bb1b-8030-49b3-b381-c2fe5809f953-kube-api-access-n4b2w\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:31.535211 master-0 kubenswrapper[31830]: I0319 12:35:31.535114 31830 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 19 12:35:31.535436 master-0 kubenswrapper[31830]: I0319 12:35:31.535395 31830 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a65517da-f83f-4270-b394-d7175eb38204" (UniqueName: "kubernetes.io/csi/topolvm.io^19328693-1987-4217-9b35-24f3b480bfc3") on node "master-0" Mar 19 12:35:31.609966 master-0 kubenswrapper[31830]: I0319 12:35:31.609894 31830 reconciler_common.go:293] "Volume detached for volume \"pvc-a65517da-f83f-4270-b394-d7175eb38204\" (UniqueName: \"kubernetes.io/csi/topolvm.io^19328693-1987-4217-9b35-24f3b480bfc3\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:31.737037 master-0 kubenswrapper[31830]: I0319 12:35:31.736962 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5644f8c597-hdr4z"] Mar 19 12:35:32.382255 master-0 kubenswrapper[31830]: I0319 12:35:32.382180 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:32.877813 master-0 kubenswrapper[31830]: I0319 12:35:32.877267 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5644f8c597-hdr4z"] Mar 19 12:35:33.506299 master-0 kubenswrapper[31830]: I0319 12:35:33.506189 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77777b4857-hrt6t" podStartSLOduration=9.506162192 podStartE2EDuration="9.506162192s" podCreationTimestamp="2026-03-19 12:35:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:35:33.103321448 +0000 UTC m=+1271.652282152" watchObservedRunningTime="2026-03-19 12:35:33.506162192 +0000 UTC m=+1272.055122886" Mar 19 12:35:33.640685 master-0 kubenswrapper[31830]: I0319 12:35:33.640612 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-f4e38-default-external-api-0"] Mar 19 12:35:33.699910 master-0 kubenswrapper[31830]: I0319 12:35:33.699852 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3fdc7f4-19ec-41b8-8eea-9f735f221281" path="/var/lib/kubelet/pods/b3fdc7f4-19ec-41b8-8eea-9f735f221281/volumes" Mar 19 12:35:34.082871 master-0 kubenswrapper[31830]: I0319 12:35:34.078443 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-f4e38-default-external-api-0"] Mar 19 12:35:34.864808 master-0 kubenswrapper[31830]: I0319 12:35:34.864715 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-f4e38-default-external-api-0"] Mar 19 12:35:34.865282 master-0 kubenswrapper[31830]: E0319 12:35:34.865261 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3fdc7f4-19ec-41b8-8eea-9f735f221281" containerName="init" Mar 19 12:35:34.865282 master-0 kubenswrapper[31830]: I0319 12:35:34.865280 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3fdc7f4-19ec-41b8-8eea-9f735f221281" containerName="init" Mar 19 12:35:34.865536 master-0 kubenswrapper[31830]: I0319 12:35:34.865495 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3fdc7f4-19ec-41b8-8eea-9f735f221281" containerName="init" Mar 19 12:35:34.866651 master-0 kubenswrapper[31830]: I0319 12:35:34.866614 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:34.874876 master-0 kubenswrapper[31830]: I0319 12:35:34.870000 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-f4e38-default-external-config-data" Mar 19 12:35:34.874876 master-0 kubenswrapper[31830]: I0319 12:35:34.870116 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 19 12:35:34.874876 master-0 kubenswrapper[31830]: I0319 12:35:34.870280 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 19 12:35:34.962888 master-0 kubenswrapper[31830]: I0319 12:35:34.962331 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f4e38-default-external-api-0"] Mar 19 12:35:35.013065 master-0 kubenswrapper[31830]: I0319 12:35:35.007689 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2df405a8-816c-4e6f-a3a1-fb4e350d0188-logs\") pod \"glance-f4e38-default-external-api-0\" (UID: \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:35.013065 master-0 kubenswrapper[31830]: I0319 12:35:35.007762 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2df405a8-816c-4e6f-a3a1-fb4e350d0188-combined-ca-bundle\") pod \"glance-f4e38-default-external-api-0\" (UID: \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:35.013065 master-0 kubenswrapper[31830]: I0319 12:35:35.007833 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2df405a8-816c-4e6f-a3a1-fb4e350d0188-httpd-run\") pod \"glance-f4e38-default-external-api-0\" (UID: \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:35.013065 master-0 kubenswrapper[31830]: I0319 12:35:35.007853 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2df405a8-816c-4e6f-a3a1-fb4e350d0188-public-tls-certs\") pod \"glance-f4e38-default-external-api-0\" (UID: \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:35.013065 master-0 kubenswrapper[31830]: I0319 12:35:35.007873 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-38dd62d2-8408-4ac5-a7b7-e2aa55a152b1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^72f5d1c4-ff8c-497a-97da-4c12e82dcd33\") pod \"glance-f4e38-default-external-api-0\" (UID: \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:35.013065 master-0 kubenswrapper[31830]: I0319 12:35:35.007918 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2df405a8-816c-4e6f-a3a1-fb4e350d0188-config-data\") pod \"glance-f4e38-default-external-api-0\" (UID: \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:35.013065 master-0 kubenswrapper[31830]: I0319 12:35:35.007974 31830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2df405a8-816c-4e6f-a3a1-fb4e350d0188-scripts\") pod \"glance-f4e38-default-external-api-0\" (UID: \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:35.013065 master-0 kubenswrapper[31830]: I0319 12:35:35.007995 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5crrx\" (UniqueName: \"kubernetes.io/projected/2df405a8-816c-4e6f-a3a1-fb4e350d0188-kube-api-access-5crrx\") pod \"glance-f4e38-default-external-api-0\" (UID: \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:35.058705 master-0 kubenswrapper[31830]: I0319 12:35:35.056186 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-f4e38-default-internal-api-0"] Mar 19 12:35:35.075979 master-0 kubenswrapper[31830]: I0319 12:35:35.070426 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-f4e38-default-internal-api-0"] Mar 19 12:35:35.085676 master-0 kubenswrapper[31830]: I0319 12:35:35.085592 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-f4e38-default-internal-api-0"] Mar 19 12:35:35.094343 master-0 kubenswrapper[31830]: I0319 12:35:35.087775 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:35.097895 master-0 kubenswrapper[31830]: I0319 12:35:35.097529 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-f4e38-default-internal-config-data" Mar 19 12:35:35.097895 master-0 kubenswrapper[31830]: I0319 12:35:35.097767 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 19 12:35:35.117887 master-0 kubenswrapper[31830]: I0319 12:35:35.117532 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2df405a8-816c-4e6f-a3a1-fb4e350d0188-scripts\") pod \"glance-f4e38-default-external-api-0\" (UID: \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:35.117887 master-0 kubenswrapper[31830]: I0319 12:35:35.117629 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5crrx\" (UniqueName: \"kubernetes.io/projected/2df405a8-816c-4e6f-a3a1-fb4e350d0188-kube-api-access-5crrx\") pod \"glance-f4e38-default-external-api-0\" (UID: \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:35.138262 master-0 kubenswrapper[31830]: I0319 12:35:35.119002 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f4e38-default-internal-api-0"] Mar 19 12:35:35.138262 master-0 kubenswrapper[31830]: I0319 12:35:35.138176 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2df405a8-816c-4e6f-a3a1-fb4e350d0188-logs\") pod \"glance-f4e38-default-external-api-0\" (UID: \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:35.138753 master-0 kubenswrapper[31830]: I0319 12:35:35.138318 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2df405a8-816c-4e6f-a3a1-fb4e350d0188-combined-ca-bundle\") pod \"glance-f4e38-default-external-api-0\" (UID: \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:35.138753 master-0 kubenswrapper[31830]: I0319 12:35:35.138444 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2df405a8-816c-4e6f-a3a1-fb4e350d0188-httpd-run\") pod \"glance-f4e38-default-external-api-0\" (UID: \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:35.138753 master-0 kubenswrapper[31830]: I0319 12:35:35.138485 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2df405a8-816c-4e6f-a3a1-fb4e350d0188-public-tls-certs\") pod \"glance-f4e38-default-external-api-0\" (UID: \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:35.138753 master-0 kubenswrapper[31830]: I0319 12:35:35.138533 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-38dd62d2-8408-4ac5-a7b7-e2aa55a152b1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^72f5d1c4-ff8c-497a-97da-4c12e82dcd33\") pod \"glance-f4e38-default-external-api-0\" (UID: \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:35.138753 master-0 kubenswrapper[31830]: I0319 12:35:35.138650 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2df405a8-816c-4e6f-a3a1-fb4e350d0188-config-data\") pod \"glance-f4e38-default-external-api-0\" (UID: \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:35.160826 master-0 kubenswrapper[31830]: I0319 12:35:35.142404 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2df405a8-816c-4e6f-a3a1-fb4e350d0188-httpd-run\") pod \"glance-f4e38-default-external-api-0\" (UID: \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:35.160826 master-0 kubenswrapper[31830]: I0319 12:35:35.142959 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2df405a8-816c-4e6f-a3a1-fb4e350d0188-logs\") pod \"glance-f4e38-default-external-api-0\" (UID: \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:35.160826 master-0 kubenswrapper[31830]: I0319 12:35:35.145489 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2df405a8-816c-4e6f-a3a1-fb4e350d0188-public-tls-certs\") pod \"glance-f4e38-default-external-api-0\" (UID: \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:35.160826 master-0 kubenswrapper[31830]: I0319 12:35:35.145534 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2df405a8-816c-4e6f-a3a1-fb4e350d0188-scripts\") pod \"glance-f4e38-default-external-api-0\" (UID: \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:35.160826 master-0 kubenswrapper[31830]: I0319 
12:35:35.145558 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2df405a8-816c-4e6f-a3a1-fb4e350d0188-combined-ca-bundle\") pod \"glance-f4e38-default-external-api-0\" (UID: \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:35.160826 master-0 kubenswrapper[31830]: I0319 12:35:35.148516 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2df405a8-816c-4e6f-a3a1-fb4e350d0188-config-data\") pod \"glance-f4e38-default-external-api-0\" (UID: \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:35.160826 master-0 kubenswrapper[31830]: I0319 12:35:35.148729 31830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 19 12:35:35.160826 master-0 kubenswrapper[31830]: I0319 12:35:35.148774 31830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-38dd62d2-8408-4ac5-a7b7-e2aa55a152b1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^72f5d1c4-ff8c-497a-97da-4c12e82dcd33\") pod \"glance-f4e38-default-external-api-0\" (UID: \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/e424bd2a44d69b7b9bbf34d8863c487c6938417f60f1d51f079a71c7d4c379eb/globalmount\"" pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:35.160826 master-0 kubenswrapper[31830]: I0319 12:35:35.148789 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5crrx\" (UniqueName: \"kubernetes.io/projected/2df405a8-816c-4e6f-a3a1-fb4e350d0188-kube-api-access-5crrx\") pod \"glance-f4e38-default-external-api-0\" (UID: \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:35.241490 master-0 kubenswrapper[31830]: I0319 12:35:35.241435 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dz7k\" (UniqueName: \"kubernetes.io/projected/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-kube-api-access-4dz7k\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:35.241717 master-0 kubenswrapper[31830]: I0319 12:35:35.241505 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-internal-tls-certs\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:35.241717 master-0 kubenswrapper[31830]: I0319 12:35:35.241655 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-httpd-run\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:35.241717 master-0 kubenswrapper[31830]: I0319 12:35:35.241706 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-config-data\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:35.241891 master-0 kubenswrapper[31830]: I0319 12:35:35.241752 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-scripts\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:35.241891 master-0 kubenswrapper[31830]: I0319 12:35:35.241789 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-combined-ca-bundle\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:35.241891 master-0 kubenswrapper[31830]: I0319 12:35:35.241839 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a65517da-f83f-4270-b394-d7175eb38204\" (UniqueName: \"kubernetes.io/csi/topolvm.io^19328693-1987-4217-9b35-24f3b480bfc3\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:35.242085 master-0 kubenswrapper[31830]: I0319 12:35:35.241957 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-logs\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:35.343703 master-0 kubenswrapper[31830]: I0319 12:35:35.343563 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-combined-ca-bundle\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:35.343703 master-0 kubenswrapper[31830]: I0319 12:35:35.343631 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a65517da-f83f-4270-b394-d7175eb38204\" (UniqueName: \"kubernetes.io/csi/topolvm.io^19328693-1987-4217-9b35-24f3b480bfc3\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:35.344327 master-0 kubenswrapper[31830]: I0319 12:35:35.343718 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-logs\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:35.344327 master-0 kubenswrapper[31830]: I0319 12:35:35.343832 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dz7k\" (UniqueName: \"kubernetes.io/projected/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-kube-api-access-4dz7k\") pod \"glance-f4e38-default-internal-api-0\" (UID: 
\"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:35.344327 master-0 kubenswrapper[31830]: I0319 12:35:35.344049 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-internal-tls-certs\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:35.344327 master-0 kubenswrapper[31830]: I0319 12:35:35.344164 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-httpd-run\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:35.344327 master-0 kubenswrapper[31830]: I0319 12:35:35.344222 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-config-data\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:35.344327 master-0 kubenswrapper[31830]: I0319 12:35:35.344270 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-scripts\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:35.344718 master-0 kubenswrapper[31830]: I0319 12:35:35.344339 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-logs\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:35.344718 master-0 kubenswrapper[31830]: I0319 12:35:35.344588 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-httpd-run\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:35.346202 master-0 kubenswrapper[31830]: I0319 12:35:35.346173 31830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 19 12:35:35.346290 master-0 kubenswrapper[31830]: I0319 12:35:35.346212 31830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a65517da-f83f-4270-b394-d7175eb38204\" (UniqueName: \"kubernetes.io/csi/topolvm.io^19328693-1987-4217-9b35-24f3b480bfc3\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/a297acd8689bd9435b3ef7c4521a212d0a62d14f63738b5b80f182076c3660ff/globalmount\"" pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:35.348054 master-0 kubenswrapper[31830]: I0319 12:35:35.348021 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-internal-tls-certs\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:35.351271 master-0 kubenswrapper[31830]: I0319 12:35:35.351226 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-combined-ca-bundle\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:35.358824 master-0 kubenswrapper[31830]: I0319 12:35:35.356956 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-config-data\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:35.361494 master-0 kubenswrapper[31830]: I0319 12:35:35.361449 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-scripts\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:35.365433 master-0 kubenswrapper[31830]: I0319 12:35:35.365388 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dz7k\" (UniqueName: \"kubernetes.io/projected/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-kube-api-access-4dz7k\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:35.691173 master-0 kubenswrapper[31830]: I0319 12:35:35.691048 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c556fbff-d2d4-4de9-a7e7-8d57a23cfee7" path="/var/lib/kubelet/pods/c556fbff-d2d4-4de9-a7e7-8d57a23cfee7/volumes" Mar 19 12:35:35.691638 master-0 kubenswrapper[31830]: I0319 12:35:35.691608 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5a9bb1b-8030-49b3-b381-c2fe5809f953" path="/var/lib/kubelet/pods/e5a9bb1b-8030-49b3-b381-c2fe5809f953/volumes" Mar 19 12:35:35.692136 master-0 kubenswrapper[31830]: I0319 12:35:35.692106 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77777b4857-hrt6t" Mar 19 12:35:35.870419 master-0 kubenswrapper[31830]: I0319 12:35:35.867996 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6465768b8c-fp4jc"] Mar 19 12:35:35.870419 
master-0 kubenswrapper[31830]: I0319 12:35:35.868254 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" podUID="5f825bf1-6d44-4e78-85db-bc6c7371a9d9" containerName="dnsmasq-dns" containerID="cri-o://f2d895ffc56acb0ca2fa97c4253d51276da8c0d2302ee6b07d917a8a003cffa7" gracePeriod=10 Mar 19 12:35:36.019476 master-0 kubenswrapper[31830]: I0319 12:35:36.017086 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-38dd62d2-8408-4ac5-a7b7-e2aa55a152b1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^72f5d1c4-ff8c-497a-97da-4c12e82dcd33\") pod \"glance-f4e38-default-external-api-0\" (UID: \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:36.097531 master-0 kubenswrapper[31830]: I0319 12:35:36.097485 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:36.817469 master-0 kubenswrapper[31830]: I0319 12:35:36.817406 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a65517da-f83f-4270-b394-d7175eb38204\" (UniqueName: \"kubernetes.io/csi/topolvm.io^19328693-1987-4217-9b35-24f3b480bfc3\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\") " pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:37.047341 master-0 kubenswrapper[31830]: I0319 12:35:37.047279 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:39.911400 master-0 kubenswrapper[31830]: I0319 12:35:39.911303 31830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" podUID="5f825bf1-6d44-4e78-85db-bc6c7371a9d9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.204:5353: connect: connection refused" Mar 19 12:35:44.911979 master-0 kubenswrapper[31830]: I0319 12:35:44.911906 31830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" podUID="5f825bf1-6d44-4e78-85db-bc6c7371a9d9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.204:5353: connect: connection refused" Mar 19 12:35:49.633365 master-0 kubenswrapper[31830]: I0319 12:35:49.632133 31830 generic.go:334] "Generic (PLEG): container finished" podID="4952f965-eb25-4397-bdd3-b8a75e9eb4ed" containerID="94720d41e4fa7e96d5c44a8a22c4f5e6ec5e00bb25811056d8a600795c74540b" exitCode=0 Mar 19 12:35:49.633365 master-0 kubenswrapper[31830]: I0319 12:35:49.632217 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pg69z" event={"ID":"4952f965-eb25-4397-bdd3-b8a75e9eb4ed","Type":"ContainerDied","Data":"94720d41e4fa7e96d5c44a8a22c4f5e6ec5e00bb25811056d8a600795c74540b"} Mar 19 12:35:49.673259 master-0 kubenswrapper[31830]: I0319 12:35:49.673191 31830 generic.go:334] "Generic (PLEG): container finished" podID="5f825bf1-6d44-4e78-85db-bc6c7371a9d9" containerID="f2d895ffc56acb0ca2fa97c4253d51276da8c0d2302ee6b07d917a8a003cffa7" exitCode=0 Mar 19 12:35:49.673259 master-0 kubenswrapper[31830]: I0319 12:35:49.673253 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" event={"ID":"5f825bf1-6d44-4e78-85db-bc6c7371a9d9","Type":"ContainerDied","Data":"f2d895ffc56acb0ca2fa97c4253d51276da8c0d2302ee6b07d917a8a003cffa7"} Mar 19 12:35:49.717172 master-0 kubenswrapper[31830]: I0319 12:35:49.717113 
31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" Mar 19 12:35:49.793636 master-0 kubenswrapper[31830]: I0319 12:35:49.793586 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-edpm-b\") pod \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\" (UID: \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\") " Mar 19 12:35:49.793932 master-0 kubenswrapper[31830]: I0319 12:35:49.793916 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-config\") pod \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\" (UID: \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\") " Mar 19 12:35:49.794159 master-0 kubenswrapper[31830]: I0319 12:35:49.794144 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-ovsdbserver-nb\") pod \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\" (UID: \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\") " Mar 19 12:35:49.794367 master-0 kubenswrapper[31830]: I0319 12:35:49.794351 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lrpj\" (UniqueName: \"kubernetes.io/projected/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-kube-api-access-2lrpj\") pod \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\" (UID: \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\") " Mar 19 12:35:49.794643 master-0 kubenswrapper[31830]: I0319 12:35:49.794627 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-dns-svc\") pod \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\" (UID: \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\") " Mar 19 12:35:49.794852 master-0 kubenswrapper[31830]: I0319 12:35:49.794837 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-ovsdbserver-sb\") pod \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\" (UID: \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\") " Mar 19 12:35:49.795160 master-0 kubenswrapper[31830]: I0319 12:35:49.795142 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-edpm-a\") pod \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\" (UID: \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\") " Mar 19 12:35:49.795251 master-0 kubenswrapper[31830]: I0319 12:35:49.795237 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-dns-swift-storage-0\") pod \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\" (UID: \"5f825bf1-6d44-4e78-85db-bc6c7371a9d9\") " Mar 19 12:35:49.842218 master-0 kubenswrapper[31830]: I0319 12:35:49.842161 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-kube-api-access-2lrpj" (OuterVolumeSpecName: "kube-api-access-2lrpj") pod "5f825bf1-6d44-4e78-85db-bc6c7371a9d9" (UID: "5f825bf1-6d44-4e78-85db-bc6c7371a9d9"). InnerVolumeSpecName "kube-api-access-2lrpj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:35:49.869891 master-0 kubenswrapper[31830]: I0319 12:35:49.869777 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-config" (OuterVolumeSpecName: "config") pod "5f825bf1-6d44-4e78-85db-bc6c7371a9d9" (UID: "5f825bf1-6d44-4e78-85db-bc6c7371a9d9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:49.890117 master-0 kubenswrapper[31830]: I0319 12:35:49.876056 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-edpm-a" (OuterVolumeSpecName: "edpm-a") pod "5f825bf1-6d44-4e78-85db-bc6c7371a9d9" (UID: "5f825bf1-6d44-4e78-85db-bc6c7371a9d9"). InnerVolumeSpecName "edpm-a". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:49.891527 master-0 kubenswrapper[31830]: I0319 12:35:49.890683 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5f825bf1-6d44-4e78-85db-bc6c7371a9d9" (UID: "5f825bf1-6d44-4e78-85db-bc6c7371a9d9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:49.892769 master-0 kubenswrapper[31830]: I0319 12:35:49.892703 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5f825bf1-6d44-4e78-85db-bc6c7371a9d9" (UID: "5f825bf1-6d44-4e78-85db-bc6c7371a9d9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:49.893188 master-0 kubenswrapper[31830]: I0319 12:35:49.893135 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-edpm-b" (OuterVolumeSpecName: "edpm-b") pod "5f825bf1-6d44-4e78-85db-bc6c7371a9d9" (UID: "5f825bf1-6d44-4e78-85db-bc6c7371a9d9"). InnerVolumeSpecName "edpm-b". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:49.901501 master-0 kubenswrapper[31830]: I0319 12:35:49.898359 31830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:49.901501 master-0 kubenswrapper[31830]: I0319 12:35:49.898939 31830 reconciler_common.go:293] "Volume detached for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-edpm-a\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:49.901501 master-0 kubenswrapper[31830]: I0319 12:35:49.898954 31830 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:49.901501 master-0 kubenswrapper[31830]: I0319 12:35:49.898969 31830 reconciler_common.go:293] "Volume detached for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-edpm-b\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:49.901501 master-0 kubenswrapper[31830]: I0319 12:35:49.898982 31830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-config\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:49.901501 master-0 kubenswrapper[31830]: I0319 12:35:49.898993 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lrpj\" (UniqueName: \"kubernetes.io/projected/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-kube-api-access-2lrpj\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:49.911840 master-0 kubenswrapper[31830]: I0319 12:35:49.911776 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5f825bf1-6d44-4e78-85db-bc6c7371a9d9" (UID: "5f825bf1-6d44-4e78-85db-bc6c7371a9d9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:49.921532 master-0 kubenswrapper[31830]: I0319 12:35:49.921390 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5f825bf1-6d44-4e78-85db-bc6c7371a9d9" (UID: "5f825bf1-6d44-4e78-85db-bc6c7371a9d9"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:35:50.001342 master-0 kubenswrapper[31830]: I0319 12:35:50.001293 31830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:50.001342 master-0 kubenswrapper[31830]: I0319 12:35:50.001336 31830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f825bf1-6d44-4e78-85db-bc6c7371a9d9-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:50.110700 master-0 kubenswrapper[31830]: I0319 12:35:50.110434 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f4e38-default-external-api-0"] Mar 19 12:35:50.719246 master-0 kubenswrapper[31830]: I0319 12:35:50.719041 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lpz7t" event={"ID":"48e729f7-b182-49a0-8d92-174b44693dad","Type":"ContainerStarted","Data":"55aeeab99a6e9fda0c5166cfb5d594105808d1546be7583628ed115f2fbfb80e"} Mar 19 12:35:50.732825 master-0 kubenswrapper[31830]: I0319 12:35:50.732053 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f4e38-default-external-api-0" event={"ID":"2df405a8-816c-4e6f-a3a1-fb4e350d0188","Type":"ContainerStarted","Data":"8e3d70cd1c3ac357bb3c4d53a15aed9178705458e5bc95d03d836f9960bb897a"} Mar 19 12:35:50.751464 master-0 kubenswrapper[31830]: I0319 12:35:50.750477 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/edpm-b-provisionserver-openstackprovisionserver-85bcff5d5fq8d8h" event={"ID":"3bb563fb-d536-4cb0-9614-d331baa95e1b","Type":"ContainerStarted","Data":"aac0cefd133f580439cc2e3c6dc6fe7ae61f04fb85794fd95132e5c8dd0d68ad"} Mar 19 12:35:50.770223 master-0 kubenswrapper[31830]: I0319 12:35:50.770163 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/edpm-a-provisionserver-openstackprovisionserver-7444d659762wbcb" event={"ID":"b7a848b2-11a9-47c9-881c-6ed12d3e3d1b","Type":"ContainerStarted","Data":"05c70f76b5038bca4bf23d0f8d1d569493148c3b786dfcc2d142b46a0577b9d8"} Mar 19 12:35:50.785826 master-0 kubenswrapper[31830]: I0319 12:35:50.784574 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" Mar 19 12:35:50.788063 master-0 kubenswrapper[31830]: I0319 12:35:50.787689 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6465768b8c-fp4jc" event={"ID":"5f825bf1-6d44-4e78-85db-bc6c7371a9d9","Type":"ContainerDied","Data":"992d445430916840aa57622c37656719f29eb9a8aae519a77a22704ed0cf5a41"} Mar 19 12:35:50.788063 master-0 kubenswrapper[31830]: I0319 12:35:50.787797 31830 scope.go:117] "RemoveContainer" containerID="f2d895ffc56acb0ca2fa97c4253d51276da8c0d2302ee6b07d917a8a003cffa7" Mar 19 12:35:50.835647 master-0 kubenswrapper[31830]: I0319 12:35:50.835487 31830 scope.go:117] "RemoveContainer" containerID="9885ad45a1ac75058d6dbc090d642d8e3d13e6a511b1a39effa22b69a519ddf4" Mar 19 12:35:51.107892 master-0 kubenswrapper[31830]: I0319 12:35:51.107837 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-lpz7t" podStartSLOduration=4.164119749 podStartE2EDuration="27.107817914s" podCreationTimestamp="2026-03-19 12:35:24 +0000 UTC" firstStartedPulling="2026-03-19 12:35:26.584389759 +0000 UTC m=+1265.133350463" lastFinishedPulling="2026-03-19 12:35:49.528087924 +0000 UTC m=+1288.077048628" observedRunningTime="2026-03-19 12:35:51.092625124 +0000 UTC m=+1289.641585828" watchObservedRunningTime="2026-03-19 12:35:51.107817914 +0000 UTC m=+1289.656778618" Mar 19 12:35:51.310038 master-0 kubenswrapper[31830]: I0319 12:35:51.309984 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pg69z" Mar 19 12:35:51.438675 master-0 kubenswrapper[31830]: I0319 12:35:51.438565 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-combined-ca-bundle\") pod \"4952f965-eb25-4397-bdd3-b8a75e9eb4ed\" (UID: \"4952f965-eb25-4397-bdd3-b8a75e9eb4ed\") " Mar 19 12:35:51.438675 master-0 kubenswrapper[31830]: I0319 12:35:51.438648 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-config-data\") pod \"4952f965-eb25-4397-bdd3-b8a75e9eb4ed\" (UID: \"4952f965-eb25-4397-bdd3-b8a75e9eb4ed\") " Mar 19 12:35:51.438958 master-0 kubenswrapper[31830]: I0319 12:35:51.438897 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-credential-keys\") pod \"4952f965-eb25-4397-bdd3-b8a75e9eb4ed\" (UID: \"4952f965-eb25-4397-bdd3-b8a75e9eb4ed\") " Mar 19 12:35:51.439055 master-0 kubenswrapper[31830]: I0319 12:35:51.439030 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-scripts\") pod \"4952f965-eb25-4397-bdd3-b8a75e9eb4ed\" (UID: \"4952f965-eb25-4397-bdd3-b8a75e9eb4ed\") " Mar 19 12:35:51.439154 master-0 kubenswrapper[31830]: I0319 12:35:51.439124 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qr2gx\" (UniqueName: \"kubernetes.io/projected/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-kube-api-access-qr2gx\") pod \"4952f965-eb25-4397-bdd3-b8a75e9eb4ed\" (UID: \"4952f965-eb25-4397-bdd3-b8a75e9eb4ed\") " Mar 19 12:35:51.439204 master-0 kubenswrapper[31830]: I0319 12:35:51.439172 31830 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-fernet-keys\") pod \"4952f965-eb25-4397-bdd3-b8a75e9eb4ed\" (UID: \"4952f965-eb25-4397-bdd3-b8a75e9eb4ed\") " Mar 19 12:35:51.442550 master-0 kubenswrapper[31830]: I0319 12:35:51.442505 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-scripts" (OuterVolumeSpecName: "scripts") pod "4952f965-eb25-4397-bdd3-b8a75e9eb4ed" (UID: "4952f965-eb25-4397-bdd3-b8a75e9eb4ed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:35:51.444160 master-0 kubenswrapper[31830]: I0319 12:35:51.444105 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "4952f965-eb25-4397-bdd3-b8a75e9eb4ed" (UID: "4952f965-eb25-4397-bdd3-b8a75e9eb4ed"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:35:51.444251 master-0 kubenswrapper[31830]: I0319 12:35:51.444198 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4952f965-eb25-4397-bdd3-b8a75e9eb4ed" (UID: "4952f965-eb25-4397-bdd3-b8a75e9eb4ed"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:35:51.445140 master-0 kubenswrapper[31830]: I0319 12:35:51.444864 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-kube-api-access-qr2gx" (OuterVolumeSpecName: "kube-api-access-qr2gx") pod "4952f965-eb25-4397-bdd3-b8a75e9eb4ed" (UID: "4952f965-eb25-4397-bdd3-b8a75e9eb4ed"). InnerVolumeSpecName "kube-api-access-qr2gx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:35:51.471346 master-0 kubenswrapper[31830]: I0319 12:35:51.471225 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4952f965-eb25-4397-bdd3-b8a75e9eb4ed" (UID: "4952f965-eb25-4397-bdd3-b8a75e9eb4ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:35:51.474595 master-0 kubenswrapper[31830]: I0319 12:35:51.474548 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-config-data" (OuterVolumeSpecName: "config-data") pod "4952f965-eb25-4397-bdd3-b8a75e9eb4ed" (UID: "4952f965-eb25-4397-bdd3-b8a75e9eb4ed"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:35:51.543500 master-0 kubenswrapper[31830]: I0319 12:35:51.543430 31830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:51.543500 master-0 kubenswrapper[31830]: I0319 12:35:51.543488 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qr2gx\" (UniqueName: \"kubernetes.io/projected/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-kube-api-access-qr2gx\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:51.543500 master-0 kubenswrapper[31830]: I0319 12:35:51.543503 31830 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-fernet-keys\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:51.543500 master-0 kubenswrapper[31830]: I0319 12:35:51.543515 31830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:51.544115 master-0 kubenswrapper[31830]: I0319 12:35:51.543529 31830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:51.544115 master-0 kubenswrapper[31830]: I0319 12:35:51.543541 31830 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4952f965-eb25-4397-bdd3-b8a75e9eb4ed-credential-keys\") on node \"master-0\" DevicePath \"\"" Mar 19 12:35:51.796757 master-0 kubenswrapper[31830]: I0319 12:35:51.796686 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-db-sync-n5228" event={"ID":"538593b3-ec2b-4d6e-9f10-3e7add4f7b41","Type":"ContainerStarted","Data":"680fb3101048559f79bb52dbc1af33ada1a89966aa320d17df762228772fa09b"} Mar 19 12:35:51.800291 master-0 kubenswrapper[31830]: I0319 12:35:51.800251 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f4e38-default-external-api-0" event={"ID":"2df405a8-816c-4e6f-a3a1-fb4e350d0188","Type":"ContainerStarted","Data":"882becff49bfa95bb73cd4b31ade5255291e0c70a9eb13fc2dda996634ebf04f"} Mar 19 12:35:51.800365 master-0 kubenswrapper[31830]: I0319 12:35:51.800299 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f4e38-default-external-api-0" event={"ID":"2df405a8-816c-4e6f-a3a1-fb4e350d0188","Type":"ContainerStarted","Data":"540715b44bf4f1fb44c15bd059ddd42ec745d87db740c1bcad0d1b93567610e4"} Mar 19 12:35:51.803067 master-0 kubenswrapper[31830]: I0319 12:35:51.803038 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pg69z" event={"ID":"4952f965-eb25-4397-bdd3-b8a75e9eb4ed","Type":"ContainerDied","Data":"647c4f3ace511298076b3f544835bcf3aab86d60b90661b9b9ce9e843bf510a4"} Mar 19 12:35:51.803138 master-0 kubenswrapper[31830]: I0319 12:35:51.803073 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="647c4f3ace511298076b3f544835bcf3aab86d60b90661b9b9ce9e843bf510a4" Mar 19 12:35:51.803138 master-0 kubenswrapper[31830]: I0319 12:35:51.803120 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pg69z" Mar 19 12:35:52.016708 master-0 kubenswrapper[31830]: I0319 12:35:52.016639 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6465768b8c-fp4jc"] Mar 19 12:35:52.395457 master-0 kubenswrapper[31830]: I0319 12:35:52.395295 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6465768b8c-fp4jc"] Mar 19 12:35:52.815395 master-0 kubenswrapper[31830]: I0319 12:35:52.815198 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f4e38-default-internal-api-0"] Mar 19 12:35:53.220958 master-0 kubenswrapper[31830]: I0319 12:35:53.220875 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-cce1e-db-sync-n5228" podStartSLOduration=5.389229024 podStartE2EDuration="29.220856045s" podCreationTimestamp="2026-03-19 12:35:24 +0000 UTC" firstStartedPulling="2026-03-19 12:35:26.120635707 +0000 UTC m=+1264.669596401" lastFinishedPulling="2026-03-19 12:35:49.952262718 +0000 UTC m=+1288.501223422" observedRunningTime="2026-03-19 12:35:53.20681512 +0000 UTC m=+1291.755775824" watchObservedRunningTime="2026-03-19 12:35:53.220856045 +0000 UTC m=+1291.769816759" Mar 19 12:35:53.702235 master-0 kubenswrapper[31830]: I0319 12:35:53.702156 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f825bf1-6d44-4e78-85db-bc6c7371a9d9" path="/var/lib/kubelet/pods/5f825bf1-6d44-4e78-85db-bc6c7371a9d9/volumes" Mar 19 12:35:53.870308 master-0 kubenswrapper[31830]: I0319 12:35:53.870173 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f4e38-default-internal-api-0" event={"ID":"3e4ccffc-3539-4e3a-b507-3fa51250d5a6","Type":"ContainerStarted","Data":"a662b6953e099e8046c0f19e2f43fe2830ffd4d8ed5268bb5b5772d761645370"} Mar 19 12:35:53.870308 master-0 kubenswrapper[31830]: I0319 12:35:53.870240 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f4e38-default-internal-api-0" event={"ID":"3e4ccffc-3539-4e3a-b507-3fa51250d5a6","Type":"ContainerStarted","Data":"0f46d1a142f064c51b50aa3c3425107fb12424d7093b89808c1ed6c81745ca5c"} Mar 19 12:35:54.777505 master-0 kubenswrapper[31830]: I0319 12:35:54.777314 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-f4e38-default-external-api-0" podStartSLOduration=20.777238514 podStartE2EDuration="20.777238514s" podCreationTimestamp="2026-03-19 12:35:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:35:54.688186951 +0000 UTC m=+1293.237147655" watchObservedRunningTime="2026-03-19 12:35:54.777238514 +0000 UTC m=+1293.326199218" Mar 19 12:35:54.882298 master-0 kubenswrapper[31830]: I0319 12:35:54.882244 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f4e38-default-internal-api-0" event={"ID":"3e4ccffc-3539-4e3a-b507-3fa51250d5a6","Type":"ContainerStarted","Data":"b69e125e42a299fc4c60cee3320600e1ac8a82dfc8fed27137eaceadec37c002"} Mar 19 12:35:55.336970 master-0 kubenswrapper[31830]: I0319 12:35:55.336872 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-f4e38-default-internal-api-0" podStartSLOduration=20.336845368 podStartE2EDuration="20.336845368s" podCreationTimestamp="2026-03-19 12:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-19 12:35:55.32046921 +0000 UTC m=+1293.869429924" watchObservedRunningTime="2026-03-19 12:35:55.336845368 +0000 UTC m=+1293.885806082" Mar 19 12:35:56.101277 master-0 kubenswrapper[31830]: I0319 12:35:56.101111 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:56.101277 master-0 kubenswrapper[31830]: I0319 12:35:56.101209 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:56.174897 master-0 kubenswrapper[31830]: I0319 12:35:56.174828 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:56.178267 master-0 kubenswrapper[31830]: I0319 12:35:56.177190 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:56.909252 master-0 kubenswrapper[31830]: I0319 12:35:56.909208 31830 generic.go:334] "Generic (PLEG): container finished" podID="3bb563fb-d536-4cb0-9614-d331baa95e1b" containerID="aac0cefd133f580439cc2e3c6dc6fe7ae61f04fb85794fd95132e5c8dd0d68ad" exitCode=0 Mar 19 12:35:56.909556 master-0 kubenswrapper[31830]: I0319 12:35:56.909298 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/edpm-b-provisionserver-openstackprovisionserver-85bcff5d5fq8d8h" event={"ID":"3bb563fb-d536-4cb0-9614-d331baa95e1b","Type":"ContainerDied","Data":"aac0cefd133f580439cc2e3c6dc6fe7ae61f04fb85794fd95132e5c8dd0d68ad"} Mar 19 12:35:56.914057 master-0 kubenswrapper[31830]: I0319 12:35:56.913527 31830 generic.go:334] "Generic (PLEG): container finished" podID="b7a848b2-11a9-47c9-881c-6ed12d3e3d1b" containerID="05c70f76b5038bca4bf23d0f8d1d569493148c3b786dfcc2d142b46a0577b9d8" exitCode=0 Mar 19 12:35:56.915744 master-0 kubenswrapper[31830]: I0319 12:35:56.915660 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/edpm-a-provisionserver-openstackprovisionserver-7444d659762wbcb" event={"ID":"b7a848b2-11a9-47c9-881c-6ed12d3e3d1b","Type":"ContainerDied","Data":"05c70f76b5038bca4bf23d0f8d1d569493148c3b786dfcc2d142b46a0577b9d8"} Mar 19 12:35:56.915870 master-0 kubenswrapper[31830]: I0319 12:35:56.915756 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:56.915870 master-0 kubenswrapper[31830]: I0319 12:35:56.915779 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:35:57.047907 master-0 kubenswrapper[31830]: I0319 12:35:57.047839 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:57.047907 master-0 kubenswrapper[31830]: I0319 12:35:57.047900 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:57.078125 master-0 kubenswrapper[31830]: I0319 12:35:57.078069 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:57.090963 master-0 kubenswrapper[31830]: I0319 12:35:57.090772 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:57.127384 master-0 kubenswrapper[31830]: I0319 12:35:57.127314 31830 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-pg69z"] Mar 19 12:35:57.446665 master-0 kubenswrapper[31830]: I0319 12:35:57.446607 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-pg69z"] Mar 19 12:35:57.515578 master-0 kubenswrapper[31830]: I0319 12:35:57.514757 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-v2lnb"] Mar 19 12:35:57.523736 master-0 kubenswrapper[31830]: E0319 12:35:57.522733 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f825bf1-6d44-4e78-85db-bc6c7371a9d9" containerName="dnsmasq-dns" Mar 19 12:35:57.523736 master-0 kubenswrapper[31830]: I0319 12:35:57.522779 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f825bf1-6d44-4e78-85db-bc6c7371a9d9" containerName="dnsmasq-dns" Mar 19 12:35:57.523736 master-0 kubenswrapper[31830]: E0319 12:35:57.522818 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f825bf1-6d44-4e78-85db-bc6c7371a9d9" containerName="init" Mar 19 12:35:57.523736 master-0 kubenswrapper[31830]: I0319 12:35:57.522828 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f825bf1-6d44-4e78-85db-bc6c7371a9d9" containerName="init" Mar 19 12:35:57.533885 master-0 kubenswrapper[31830]: E0319 12:35:57.522871 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4952f965-eb25-4397-bdd3-b8a75e9eb4ed" containerName="keystone-bootstrap" Mar 19 12:35:57.533885 master-0 kubenswrapper[31830]: I0319 12:35:57.526244 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="4952f965-eb25-4397-bdd3-b8a75e9eb4ed" containerName="keystone-bootstrap" Mar 19 12:35:57.533885 master-0 kubenswrapper[31830]: I0319 12:35:57.532009 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="4952f965-eb25-4397-bdd3-b8a75e9eb4ed" containerName="keystone-bootstrap" Mar 19 12:35:57.533885 master-0 kubenswrapper[31830]: I0319 12:35:57.532108 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f825bf1-6d44-4e78-85db-bc6c7371a9d9" containerName="dnsmasq-dns" Mar 19 12:35:57.533885 master-0 kubenswrapper[31830]: I0319 12:35:57.533349 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-v2lnb" Mar 19 12:35:57.540128 master-0 kubenswrapper[31830]: I0319 12:35:57.540070 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 19 12:35:57.540716 master-0 kubenswrapper[31830]: I0319 12:35:57.540691 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 19 12:35:57.541732 master-0 kubenswrapper[31830]: I0319 12:35:57.541705 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 19 12:35:57.545116 master-0 kubenswrapper[31830]: I0319 12:35:57.542733 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 19 12:35:57.545116 master-0 kubenswrapper[31830]: I0319 12:35:57.543962 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-v2lnb"] Mar 19 12:35:57.592662 master-0 kubenswrapper[31830]: I0319 12:35:57.592599 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afeb235b-1d56-46d5-9d18-dbbb5f50e141-config-data\") pod \"keystone-bootstrap-v2lnb\" (UID: \"afeb235b-1d56-46d5-9d18-dbbb5f50e141\") " pod="openstack/keystone-bootstrap-v2lnb" Mar 19 12:35:57.592973 master-0 kubenswrapper[31830]: I0319 12:35:57.592938 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjgxm\" (UniqueName: \"kubernetes.io/projected/afeb235b-1d56-46d5-9d18-dbbb5f50e141-kube-api-access-hjgxm\") pod \"keystone-bootstrap-v2lnb\" (UID: \"afeb235b-1d56-46d5-9d18-dbbb5f50e141\") " pod="openstack/keystone-bootstrap-v2lnb" Mar 19 12:35:57.593071 master-0 kubenswrapper[31830]: I0319 12:35:57.593050 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/afeb235b-1d56-46d5-9d18-dbbb5f50e141-fernet-keys\") pod \"keystone-bootstrap-v2lnb\" (UID: \"afeb235b-1d56-46d5-9d18-dbbb5f50e141\") " pod="openstack/keystone-bootstrap-v2lnb" Mar 19 12:35:57.593115 master-0 kubenswrapper[31830]: I0319 12:35:57.593092 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afeb235b-1d56-46d5-9d18-dbbb5f50e141-combined-ca-bundle\") pod \"keystone-bootstrap-v2lnb\" (UID: \"afeb235b-1d56-46d5-9d18-dbbb5f50e141\") " pod="openstack/keystone-bootstrap-v2lnb" Mar 19 12:35:57.593156 master-0 kubenswrapper[31830]: I0319 12:35:57.593133 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afeb235b-1d56-46d5-9d18-dbbb5f50e141-scripts\") pod \"keystone-bootstrap-v2lnb\" (UID: \"afeb235b-1d56-46d5-9d18-dbbb5f50e141\") " pod="openstack/keystone-bootstrap-v2lnb" Mar 19 12:35:57.593189 master-0 kubenswrapper[31830]: I0319 12:35:57.593163 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/afeb235b-1d56-46d5-9d18-dbbb5f50e141-credential-keys\") pod \"keystone-bootstrap-v2lnb\" (UID: \"afeb235b-1d56-46d5-9d18-dbbb5f50e141\") " pod="openstack/keystone-bootstrap-v2lnb" Mar 19 12:35:57.696907 master-0 kubenswrapper[31830]: I0319 12:35:57.696403 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4952f965-eb25-4397-bdd3-b8a75e9eb4ed" path="/var/lib/kubelet/pods/4952f965-eb25-4397-bdd3-b8a75e9eb4ed/volumes" Mar 19 12:35:57.711773 master-0 kubenswrapper[31830]: I0319 12:35:57.711710 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afeb235b-1d56-46d5-9d18-dbbb5f50e141-config-data\") pod \"keystone-bootstrap-v2lnb\" (UID: \"afeb235b-1d56-46d5-9d18-dbbb5f50e141\") " pod="openstack/keystone-bootstrap-v2lnb" Mar 19 12:35:57.715823 master-0 kubenswrapper[31830]: I0319 12:35:57.712750 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjgxm\" (UniqueName: \"kubernetes.io/projected/afeb235b-1d56-46d5-9d18-dbbb5f50e141-kube-api-access-hjgxm\") pod \"keystone-bootstrap-v2lnb\" (UID: \"afeb235b-1d56-46d5-9d18-dbbb5f50e141\") " pod="openstack/keystone-bootstrap-v2lnb" Mar 19 12:35:57.715823 master-0 kubenswrapper[31830]: I0319 12:35:57.712924 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/afeb235b-1d56-46d5-9d18-dbbb5f50e141-fernet-keys\") pod \"keystone-bootstrap-v2lnb\" (UID: \"afeb235b-1d56-46d5-9d18-dbbb5f50e141\") " pod="openstack/keystone-bootstrap-v2lnb" Mar 19 12:35:57.715823 master-0 kubenswrapper[31830]: I0319 12:35:57.712972 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afeb235b-1d56-46d5-9d18-dbbb5f50e141-combined-ca-bundle\") pod \"keystone-bootstrap-v2lnb\" (UID: \"afeb235b-1d56-46d5-9d18-dbbb5f50e141\") " pod="openstack/keystone-bootstrap-v2lnb" Mar 19 12:35:57.715823 master-0 kubenswrapper[31830]: I0319 12:35:57.713026 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afeb235b-1d56-46d5-9d18-dbbb5f50e141-scripts\") pod \"keystone-bootstrap-v2lnb\" (UID: \"afeb235b-1d56-46d5-9d18-dbbb5f50e141\") " pod="openstack/keystone-bootstrap-v2lnb" Mar 19 12:35:57.715823 master-0 kubenswrapper[31830]: I0319 12:35:57.713064 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/afeb235b-1d56-46d5-9d18-dbbb5f50e141-credential-keys\") pod \"keystone-bootstrap-v2lnb\" (UID: \"afeb235b-1d56-46d5-9d18-dbbb5f50e141\") " pod="openstack/keystone-bootstrap-v2lnb" Mar 19 12:35:57.719337 master-0 kubenswrapper[31830]: I0319 12:35:57.719256 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afeb235b-1d56-46d5-9d18-dbbb5f50e141-config-data\") pod \"keystone-bootstrap-v2lnb\" (UID: \"afeb235b-1d56-46d5-9d18-dbbb5f50e141\") " pod="openstack/keystone-bootstrap-v2lnb" Mar 19 12:35:57.722313 master-0 kubenswrapper[31830]: I0319 12:35:57.722252 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/afeb235b-1d56-46d5-9d18-dbbb5f50e141-credential-keys\") pod \"keystone-bootstrap-v2lnb\" (UID: \"afeb235b-1d56-46d5-9d18-dbbb5f50e141\") " pod="openstack/keystone-bootstrap-v2lnb" Mar 19 12:35:57.727836 master-0 kubenswrapper[31830]: I0319 12:35:57.722571 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/afeb235b-1d56-46d5-9d18-dbbb5f50e141-fernet-keys\") pod \"keystone-bootstrap-v2lnb\" (UID: 
\"afeb235b-1d56-46d5-9d18-dbbb5f50e141\") " pod="openstack/keystone-bootstrap-v2lnb" Mar 19 12:35:57.727836 master-0 kubenswrapper[31830]: I0319 12:35:57.722599 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afeb235b-1d56-46d5-9d18-dbbb5f50e141-scripts\") pod \"keystone-bootstrap-v2lnb\" (UID: \"afeb235b-1d56-46d5-9d18-dbbb5f50e141\") " pod="openstack/keystone-bootstrap-v2lnb" Mar 19 12:35:57.734828 master-0 kubenswrapper[31830]: I0319 12:35:57.734278 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afeb235b-1d56-46d5-9d18-dbbb5f50e141-combined-ca-bundle\") pod \"keystone-bootstrap-v2lnb\" (UID: \"afeb235b-1d56-46d5-9d18-dbbb5f50e141\") " pod="openstack/keystone-bootstrap-v2lnb" Mar 19 12:35:57.738491 master-0 kubenswrapper[31830]: I0319 12:35:57.738425 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjgxm\" (UniqueName: \"kubernetes.io/projected/afeb235b-1d56-46d5-9d18-dbbb5f50e141-kube-api-access-hjgxm\") pod \"keystone-bootstrap-v2lnb\" (UID: \"afeb235b-1d56-46d5-9d18-dbbb5f50e141\") " pod="openstack/keystone-bootstrap-v2lnb" Mar 19 12:35:57.873148 master-0 kubenswrapper[31830]: I0319 12:35:57.873094 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-v2lnb" Mar 19 12:35:57.941030 master-0 kubenswrapper[31830]: I0319 12:35:57.940973 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:57.941030 master-0 kubenswrapper[31830]: I0319 12:35:57.941034 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:35:58.374190 master-0 kubenswrapper[31830]: I0319 12:35:58.374068 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-v2lnb"] Mar 19 12:35:58.953916 master-0 kubenswrapper[31830]: I0319 12:35:58.952987 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-v2lnb" event={"ID":"afeb235b-1d56-46d5-9d18-dbbb5f50e141","Type":"ContainerStarted","Data":"47e74fdc2de15d7bbd751f4fe10b141c16a6b7ba87d12191394e6ff93e2fac5f"} Mar 19 12:35:58.953916 master-0 kubenswrapper[31830]: I0319 12:35:58.953043 31830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 12:35:58.953916 master-0 kubenswrapper[31830]: I0319 12:35:58.953077 31830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 12:35:58.953916 master-0 kubenswrapper[31830]: I0319 12:35:58.953054 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-v2lnb" event={"ID":"afeb235b-1d56-46d5-9d18-dbbb5f50e141","Type":"ContainerStarted","Data":"16b5cb6bafdf45fd395e249666e026529a723a946087c4b0f8d60779c3031229"} Mar 19 12:35:58.975999 master-0 kubenswrapper[31830]: I0319 12:35:58.975868 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-v2lnb" podStartSLOduration=1.9758527529999999 podStartE2EDuration="1.975852753s" podCreationTimestamp="2026-03-19 12:35:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:35:58.974281314 +0000 UTC m=+1297.523242018" watchObservedRunningTime="2026-03-19 12:35:58.975852753 +0000 UTC 
m=+1297.524813457" Mar 19 12:35:59.989990 master-0 kubenswrapper[31830]: I0319 12:35:59.989929 31830 generic.go:334] "Generic (PLEG): container finished" podID="48e729f7-b182-49a0-8d92-174b44693dad" containerID="55aeeab99a6e9fda0c5166cfb5d594105808d1546be7583628ed115f2fbfb80e" exitCode=0 Mar 19 12:35:59.991637 master-0 kubenswrapper[31830]: I0319 12:35:59.991616 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lpz7t" event={"ID":"48e729f7-b182-49a0-8d92-174b44693dad","Type":"ContainerDied","Data":"55aeeab99a6e9fda0c5166cfb5d594105808d1546be7583628ed115f2fbfb80e"} Mar 19 12:36:00.873929 master-0 kubenswrapper[31830]: I0319 12:36:00.873872 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:36:00.874164 master-0 kubenswrapper[31830]: I0319 12:36:00.873971 31830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 12:36:00.878118 master-0 kubenswrapper[31830]: I0319 12:36:00.878061 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:36:01.553036 master-0 kubenswrapper[31830]: I0319 12:36:01.552982 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-lpz7t" Mar 19 12:36:01.612931 master-0 kubenswrapper[31830]: I0319 12:36:01.612868 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48e729f7-b182-49a0-8d92-174b44693dad-scripts\") pod \"48e729f7-b182-49a0-8d92-174b44693dad\" (UID: \"48e729f7-b182-49a0-8d92-174b44693dad\") " Mar 19 12:36:01.613161 master-0 kubenswrapper[31830]: I0319 12:36:01.612958 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48e729f7-b182-49a0-8d92-174b44693dad-combined-ca-bundle\") pod \"48e729f7-b182-49a0-8d92-174b44693dad\" (UID: \"48e729f7-b182-49a0-8d92-174b44693dad\") " Mar 19 12:36:01.613161 master-0 kubenswrapper[31830]: I0319 12:36:01.613065 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jccrz\" (UniqueName: \"kubernetes.io/projected/48e729f7-b182-49a0-8d92-174b44693dad-kube-api-access-jccrz\") pod \"48e729f7-b182-49a0-8d92-174b44693dad\" (UID: \"48e729f7-b182-49a0-8d92-174b44693dad\") " Mar 19 12:36:01.613665 master-0 kubenswrapper[31830]: I0319 12:36:01.613301 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48e729f7-b182-49a0-8d92-174b44693dad-config-data\") pod \"48e729f7-b182-49a0-8d92-174b44693dad\" (UID: \"48e729f7-b182-49a0-8d92-174b44693dad\") " Mar 19 12:36:01.613735 master-0 kubenswrapper[31830]: I0319 12:36:01.613710 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48e729f7-b182-49a0-8d92-174b44693dad-logs\") pod \"48e729f7-b182-49a0-8d92-174b44693dad\" (UID: \"48e729f7-b182-49a0-8d92-174b44693dad\") " Mar 19 12:36:01.615423 master-0 kubenswrapper[31830]: I0319 12:36:01.615025 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48e729f7-b182-49a0-8d92-174b44693dad-logs" (OuterVolumeSpecName: "logs") pod "48e729f7-b182-49a0-8d92-174b44693dad" (UID: "48e729f7-b182-49a0-8d92-174b44693dad"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 12:36:01.631367 master-0 kubenswrapper[31830]: I0319 12:36:01.631302 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48e729f7-b182-49a0-8d92-174b44693dad-scripts" (OuterVolumeSpecName: "scripts") pod "48e729f7-b182-49a0-8d92-174b44693dad" (UID: "48e729f7-b182-49a0-8d92-174b44693dad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:36:01.631561 master-0 kubenswrapper[31830]: I0319 12:36:01.631403 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48e729f7-b182-49a0-8d92-174b44693dad-kube-api-access-jccrz" (OuterVolumeSpecName: "kube-api-access-jccrz") pod "48e729f7-b182-49a0-8d92-174b44693dad" (UID: "48e729f7-b182-49a0-8d92-174b44693dad"). InnerVolumeSpecName "kube-api-access-jccrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:36:01.649787 master-0 kubenswrapper[31830]: I0319 12:36:01.649717 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48e729f7-b182-49a0-8d92-174b44693dad-config-data" (OuterVolumeSpecName: "config-data") pod "48e729f7-b182-49a0-8d92-174b44693dad" (UID: "48e729f7-b182-49a0-8d92-174b44693dad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:36:01.668779 master-0 kubenswrapper[31830]: I0319 12:36:01.668738 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:36:01.702133 master-0 kubenswrapper[31830]: I0319 12:36:01.677516 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48e729f7-b182-49a0-8d92-174b44693dad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48e729f7-b182-49a0-8d92-174b44693dad" (UID: "48e729f7-b182-49a0-8d92-174b44693dad"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:36:01.716640 master-0 kubenswrapper[31830]: I0319 12:36:01.716306 31830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48e729f7-b182-49a0-8d92-174b44693dad-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:01.716640 master-0 kubenswrapper[31830]: I0319 12:36:01.716341 31830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48e729f7-b182-49a0-8d92-174b44693dad-logs\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:01.716640 master-0 kubenswrapper[31830]: I0319 12:36:01.716350 31830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48e729f7-b182-49a0-8d92-174b44693dad-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:01.716640 master-0 kubenswrapper[31830]: I0319 12:36:01.716361 31830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48e729f7-b182-49a0-8d92-174b44693dad-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:01.716640 master-0 kubenswrapper[31830]: I0319 12:36:01.716371 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jccrz\" (UniqueName: \"kubernetes.io/projected/48e729f7-b182-49a0-8d92-174b44693dad-kube-api-access-jccrz\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:02.069416 master-0 kubenswrapper[31830]: I0319 12:36:02.069351 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lpz7t" event={"ID":"48e729f7-b182-49a0-8d92-174b44693dad","Type":"ContainerDied","Data":"f5d3a30bc425dc190e31971ab04a82603e66a8b8d5061a644b625d6324b86801"} Mar 19 12:36:02.069416 master-0 kubenswrapper[31830]: I0319 12:36:02.069397 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5d3a30bc425dc190e31971ab04a82603e66a8b8d5061a644b625d6324b86801" Mar 19 12:36:02.069835 master-0 kubenswrapper[31830]: I0319 12:36:02.069457 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-lpz7t" Mar 19 12:36:02.233825 master-0 kubenswrapper[31830]: I0319 12:36:02.227765 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5f878994d6-brrf9"] Mar 19 12:36:02.233825 master-0 kubenswrapper[31830]: E0319 12:36:02.228421 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48e729f7-b182-49a0-8d92-174b44693dad" containerName="placement-db-sync" Mar 19 12:36:02.233825 master-0 kubenswrapper[31830]: I0319 12:36:02.228444 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="48e729f7-b182-49a0-8d92-174b44693dad" containerName="placement-db-sync" Mar 19 12:36:02.233825 master-0 kubenswrapper[31830]: I0319 12:36:02.229924 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="48e729f7-b182-49a0-8d92-174b44693dad" containerName="placement-db-sync" Mar 19 12:36:02.233825 master-0 kubenswrapper[31830]: I0319 12:36:02.231884 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5f878994d6-brrf9" Mar 19 12:36:02.237244 master-0 kubenswrapper[31830]: I0319 12:36:02.235766 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 19 12:36:02.237244 master-0 kubenswrapper[31830]: I0319 12:36:02.235945 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Mar 19 12:36:02.237244 master-0 kubenswrapper[31830]: I0319 12:36:02.236139 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 19 12:36:02.237244 master-0 kubenswrapper[31830]: I0319 12:36:02.237065 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Mar 19 12:36:02.263555 master-0 kubenswrapper[31830]: I0319 12:36:02.263479 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5f878994d6-brrf9"] Mar 19 12:36:02.337234 master-0 kubenswrapper[31830]: I0319 12:36:02.336063 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ab4af90-2e9a-489c-b2bf-08579f4c3335-internal-tls-certs\") pod \"placement-5f878994d6-brrf9\" (UID: \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\") " pod="openstack/placement-5f878994d6-brrf9" Mar 19 12:36:02.337234 master-0 kubenswrapper[31830]: I0319 12:36:02.336145 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ab4af90-2e9a-489c-b2bf-08579f4c3335-logs\") pod \"placement-5f878994d6-brrf9\" (UID: \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\") " pod="openstack/placement-5f878994d6-brrf9" Mar 19 12:36:02.337234 master-0 kubenswrapper[31830]: I0319 12:36:02.336244 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ab4af90-2e9a-489c-b2bf-08579f4c3335-config-data\") pod \"placement-5f878994d6-brrf9\" (UID: \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\") " pod="openstack/placement-5f878994d6-brrf9" Mar 19 12:36:02.337234 master-0 kubenswrapper[31830]: I0319 12:36:02.336396 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwnj9\" (UniqueName: \"kubernetes.io/projected/8ab4af90-2e9a-489c-b2bf-08579f4c3335-kube-api-access-vwnj9\") pod \"placement-5f878994d6-brrf9\" (UID: \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\") " pod="openstack/placement-5f878994d6-brrf9" Mar 19 12:36:02.337234 master-0 kubenswrapper[31830]: I0319 12:36:02.336467 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ab4af90-2e9a-489c-b2bf-08579f4c3335-public-tls-certs\") pod \"placement-5f878994d6-brrf9\" (UID: \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\") " pod="openstack/placement-5f878994d6-brrf9" Mar 19 12:36:02.337234 master-0 kubenswrapper[31830]: I0319 12:36:02.336568 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab4af90-2e9a-489c-b2bf-08579f4c3335-combined-ca-bundle\") pod \"placement-5f878994d6-brrf9\" (UID: \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\") " pod="openstack/placement-5f878994d6-brrf9" Mar 19 12:36:02.337989 master-0 kubenswrapper[31830]: I0319 12:36:02.337947 
31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ab4af90-2e9a-489c-b2bf-08579f4c3335-scripts\") pod \"placement-5f878994d6-brrf9\" (UID: \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\") " pod="openstack/placement-5f878994d6-brrf9" Mar 19 12:36:02.442755 master-0 kubenswrapper[31830]: I0319 12:36:02.441840 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwnj9\" (UniqueName: \"kubernetes.io/projected/8ab4af90-2e9a-489c-b2bf-08579f4c3335-kube-api-access-vwnj9\") pod \"placement-5f878994d6-brrf9\" (UID: \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\") " pod="openstack/placement-5f878994d6-brrf9" Mar 19 12:36:02.442755 master-0 kubenswrapper[31830]: I0319 12:36:02.441916 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ab4af90-2e9a-489c-b2bf-08579f4c3335-public-tls-certs\") pod \"placement-5f878994d6-brrf9\" (UID: \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\") " pod="openstack/placement-5f878994d6-brrf9" Mar 19 12:36:02.442755 master-0 kubenswrapper[31830]: I0319 12:36:02.441952 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab4af90-2e9a-489c-b2bf-08579f4c3335-combined-ca-bundle\") pod \"placement-5f878994d6-brrf9\" (UID: \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\") " pod="openstack/placement-5f878994d6-brrf9" Mar 19 12:36:02.442755 master-0 kubenswrapper[31830]: I0319 12:36:02.442006 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ab4af90-2e9a-489c-b2bf-08579f4c3335-scripts\") pod \"placement-5f878994d6-brrf9\" (UID: \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\") " pod="openstack/placement-5f878994d6-brrf9" Mar 19 12:36:02.442755 master-0 kubenswrapper[31830]: I0319 12:36:02.442207 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ab4af90-2e9a-489c-b2bf-08579f4c3335-internal-tls-certs\") pod \"placement-5f878994d6-brrf9\" (UID: \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\") " pod="openstack/placement-5f878994d6-brrf9" Mar 19 12:36:02.442755 master-0 kubenswrapper[31830]: I0319 12:36:02.442243 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ab4af90-2e9a-489c-b2bf-08579f4c3335-logs\") pod \"placement-5f878994d6-brrf9\" (UID: \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\") " pod="openstack/placement-5f878994d6-brrf9" Mar 19 12:36:02.442755 master-0 kubenswrapper[31830]: I0319 12:36:02.442304 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ab4af90-2e9a-489c-b2bf-08579f4c3335-config-data\") pod \"placement-5f878994d6-brrf9\" (UID: \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\") " pod="openstack/placement-5f878994d6-brrf9" Mar 19 12:36:02.446971 master-0 kubenswrapper[31830]: I0319 12:36:02.446141 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ab4af90-2e9a-489c-b2bf-08579f4c3335-config-data\") pod \"placement-5f878994d6-brrf9\" (UID: \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\") " pod="openstack/placement-5f878994d6-brrf9" Mar 19 12:36:02.450319 master-0 kubenswrapper[31830]: I0319 12:36:02.450207 
31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ab4af90-2e9a-489c-b2bf-08579f4c3335-scripts\") pod \"placement-5f878994d6-brrf9\" (UID: \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\") " pod="openstack/placement-5f878994d6-brrf9" Mar 19 12:36:02.451010 master-0 kubenswrapper[31830]: I0319 12:36:02.450978 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ab4af90-2e9a-489c-b2bf-08579f4c3335-public-tls-certs\") pod \"placement-5f878994d6-brrf9\" (UID: \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\") " pod="openstack/placement-5f878994d6-brrf9" Mar 19 12:36:02.457491 master-0 kubenswrapper[31830]: I0319 12:36:02.457446 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab4af90-2e9a-489c-b2bf-08579f4c3335-combined-ca-bundle\") pod \"placement-5f878994d6-brrf9\" (UID: \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\") " pod="openstack/placement-5f878994d6-brrf9" Mar 19 12:36:02.461012 master-0 kubenswrapper[31830]: I0319 12:36:02.460109 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ab4af90-2e9a-489c-b2bf-08579f4c3335-logs\") pod \"placement-5f878994d6-brrf9\" (UID: \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\") " pod="openstack/placement-5f878994d6-brrf9" Mar 19 12:36:02.461368 master-0 kubenswrapper[31830]: I0319 12:36:02.461190 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ab4af90-2e9a-489c-b2bf-08579f4c3335-internal-tls-certs\") pod \"placement-5f878994d6-brrf9\" (UID: \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\") " pod="openstack/placement-5f878994d6-brrf9" Mar 19 12:36:02.478869 master-0 kubenswrapper[31830]: I0319 12:36:02.478747 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwnj9\" (UniqueName: \"kubernetes.io/projected/8ab4af90-2e9a-489c-b2bf-08579f4c3335-kube-api-access-vwnj9\") pod \"placement-5f878994d6-brrf9\" (UID: \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\") " pod="openstack/placement-5f878994d6-brrf9" Mar 19 12:36:02.581285 master-0 kubenswrapper[31830]: I0319 12:36:02.581202 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5f878994d6-brrf9" Mar 19 12:36:03.946368 master-0 kubenswrapper[31830]: I0319 12:36:03.946296 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:36:06.163382 master-0 kubenswrapper[31830]: I0319 12:36:06.163169 31830 generic.go:334] "Generic (PLEG): container finished" podID="14936556-fa0b-48fb-91e5-0ca806871a6c" containerID="b24b3a3e71f958f5220aefdf55eef0c7125e6e352b028eb2a67ee1354e09a18c" exitCode=0 Mar 19 12:36:06.163382 master-0 kubenswrapper[31830]: I0319 12:36:06.163229 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zl2rr" event={"ID":"14936556-fa0b-48fb-91e5-0ca806871a6c","Type":"ContainerDied","Data":"b24b3a3e71f958f5220aefdf55eef0c7125e6e352b028eb2a67ee1354e09a18c"} Mar 19 12:36:06.167645 master-0 kubenswrapper[31830]: I0319 12:36:06.167472 31830 generic.go:334] "Generic (PLEG): container finished" podID="538593b3-ec2b-4d6e-9f10-3e7add4f7b41" containerID="680fb3101048559f79bb52dbc1af33ada1a89966aa320d17df762228772fa09b" exitCode=0 Mar 19 12:36:06.167645 master-0 kubenswrapper[31830]: I0319 12:36:06.167538 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-db-sync-n5228" event={"ID":"538593b3-ec2b-4d6e-9f10-3e7add4f7b41","Type":"ContainerDied","Data":"680fb3101048559f79bb52dbc1af33ada1a89966aa320d17df762228772fa09b"} Mar 19 12:36:06.169175 master-0 kubenswrapper[31830]: I0319 12:36:06.169074 31830 generic.go:334] "Generic (PLEG): container finished" podID="afeb235b-1d56-46d5-9d18-dbbb5f50e141" containerID="47e74fdc2de15d7bbd751f4fe10b141c16a6b7ba87d12191394e6ff93e2fac5f" exitCode=0 Mar 19 12:36:06.169175 master-0 kubenswrapper[31830]: I0319 12:36:06.169097 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-v2lnb" event={"ID":"afeb235b-1d56-46d5-9d18-dbbb5f50e141","Type":"ContainerDied","Data":"47e74fdc2de15d7bbd751f4fe10b141c16a6b7ba87d12191394e6ff93e2fac5f"} Mar 19 12:36:07.197710 master-0 kubenswrapper[31830]: I0319 12:36:07.196381 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5f878994d6-brrf9"] Mar 19 12:36:07.209955 master-0 kubenswrapper[31830]: I0319 12:36:07.206329 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/edpm-a-provisionserver-openstackprovisionserver-7444d659762wbcb" event={"ID":"b7a848b2-11a9-47c9-881c-6ed12d3e3d1b","Type":"ContainerStarted","Data":"853990ee90118f961c1c8950bac3784d35fd97366a2386cf0d067c21fd343bca"} Mar 19 12:36:07.213714 master-0 kubenswrapper[31830]: I0319 12:36:07.213594 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/edpm-b-provisionserver-openstackprovisionserver-85bcff5d5fq8d8h" event={"ID":"3bb563fb-d536-4cb0-9614-d331baa95e1b","Type":"ContainerStarted","Data":"30fd523f28ab86cda1af5dded378a35826f9f751e70edb33bbfbabee36807d55"} Mar 19 12:36:07.253979 master-0 kubenswrapper[31830]: I0319 12:36:07.253890 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/edpm-a-provisionserver-openstackprovisionserver-7444d659762wbcb" podStartSLOduration=1.764259251 podStartE2EDuration="1m3.253866547s" podCreationTimestamp="2026-03-19 12:35:04 +0000 UTC" firstStartedPulling="2026-03-19 12:35:05.28152638 +0000 UTC m=+1243.830487084" lastFinishedPulling="2026-03-19 12:36:06.771133676 +0000 UTC m=+1305.320094380" observedRunningTime="2026-03-19 12:36:07.243129174 +0000 UTC m=+1305.792089878" 
watchObservedRunningTime="2026-03-19 12:36:07.253866547 +0000 UTC m=+1305.802827251" Mar 19 12:36:07.298871 master-0 kubenswrapper[31830]: I0319 12:36:07.298739 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/edpm-b-provisionserver-openstackprovisionserver-85bcff5d5fq8d8h" podStartSLOduration=1.397274673 podStartE2EDuration="1m2.298678267s" podCreationTimestamp="2026-03-19 12:35:05 +0000 UTC" firstStartedPulling="2026-03-19 12:35:05.914584003 +0000 UTC m=+1244.463544707" lastFinishedPulling="2026-03-19 12:36:06.815987597 +0000 UTC m=+1305.364948301" observedRunningTime="2026-03-19 12:36:07.281311178 +0000 UTC m=+1305.830271882" watchObservedRunningTime="2026-03-19 12:36:07.298678267 +0000 UTC m=+1305.847638981" Mar 19 12:36:07.734074 master-0 kubenswrapper[31830]: I0319 12:36:07.734005 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-zl2rr" Mar 19 12:36:07.876000 master-0 kubenswrapper[31830]: I0319 12:36:07.872857 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cce1e-db-sync-n5228" Mar 19 12:36:07.883695 master-0 kubenswrapper[31830]: I0319 12:36:07.883583 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14936556-fa0b-48fb-91e5-0ca806871a6c-combined-ca-bundle\") pod \"14936556-fa0b-48fb-91e5-0ca806871a6c\" (UID: \"14936556-fa0b-48fb-91e5-0ca806871a6c\") " Mar 19 12:36:07.883810 master-0 kubenswrapper[31830]: I0319 12:36:07.883778 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/14936556-fa0b-48fb-91e5-0ca806871a6c-config\") pod \"14936556-fa0b-48fb-91e5-0ca806871a6c\" (UID: \"14936556-fa0b-48fb-91e5-0ca806871a6c\") " Mar 19 12:36:07.883933 master-0 kubenswrapper[31830]: I0319 12:36:07.883876 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd6hc\" (UniqueName: \"kubernetes.io/projected/14936556-fa0b-48fb-91e5-0ca806871a6c-kube-api-access-vd6hc\") pod \"14936556-fa0b-48fb-91e5-0ca806871a6c\" (UID: \"14936556-fa0b-48fb-91e5-0ca806871a6c\") " Mar 19 12:36:07.897996 master-0 kubenswrapper[31830]: I0319 12:36:07.888254 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14936556-fa0b-48fb-91e5-0ca806871a6c-kube-api-access-vd6hc" (OuterVolumeSpecName: "kube-api-access-vd6hc") pod "14936556-fa0b-48fb-91e5-0ca806871a6c" (UID: "14936556-fa0b-48fb-91e5-0ca806871a6c"). InnerVolumeSpecName "kube-api-access-vd6hc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:36:07.911761 master-0 kubenswrapper[31830]: I0319 12:36:07.911638 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14936556-fa0b-48fb-91e5-0ca806871a6c-config" (OuterVolumeSpecName: "config") pod "14936556-fa0b-48fb-91e5-0ca806871a6c" (UID: "14936556-fa0b-48fb-91e5-0ca806871a6c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:36:07.911938 master-0 kubenswrapper[31830]: I0319 12:36:07.911885 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14936556-fa0b-48fb-91e5-0ca806871a6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14936556-fa0b-48fb-91e5-0ca806871a6c" (UID: "14936556-fa0b-48fb-91e5-0ca806871a6c"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:36:07.914679 master-0 kubenswrapper[31830]: I0319 12:36:07.914644 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-v2lnb" Mar 19 12:36:07.986468 master-0 kubenswrapper[31830]: I0319 12:36:07.986310 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-etc-machine-id\") pod \"538593b3-ec2b-4d6e-9f10-3e7add4f7b41\" (UID: \"538593b3-ec2b-4d6e-9f10-3e7add4f7b41\") " Mar 19 12:36:07.986468 master-0 kubenswrapper[31830]: I0319 12:36:07.986384 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gmpq\" (UniqueName: \"kubernetes.io/projected/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-kube-api-access-4gmpq\") pod \"538593b3-ec2b-4d6e-9f10-3e7add4f7b41\" (UID: \"538593b3-ec2b-4d6e-9f10-3e7add4f7b41\") " Mar 19 12:36:07.986468 master-0 kubenswrapper[31830]: I0319 12:36:07.986418 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-scripts\") pod \"538593b3-ec2b-4d6e-9f10-3e7add4f7b41\" (UID: \"538593b3-ec2b-4d6e-9f10-3e7add4f7b41\") " Mar 19 12:36:07.986468 master-0 kubenswrapper[31830]: I0319 12:36:07.986455 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-db-sync-config-data\") pod \"538593b3-ec2b-4d6e-9f10-3e7add4f7b41\" (UID: \"538593b3-ec2b-4d6e-9f10-3e7add4f7b41\") " Mar 19 12:36:07.986783 master-0 kubenswrapper[31830]: I0319 12:36:07.986515 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-config-data\") pod \"538593b3-ec2b-4d6e-9f10-3e7add4f7b41\" (UID: \"538593b3-ec2b-4d6e-9f10-3e7add4f7b41\") " Mar 19 12:36:07.986783 master-0 kubenswrapper[31830]: I0319 12:36:07.986585 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-combined-ca-bundle\") pod \"538593b3-ec2b-4d6e-9f10-3e7add4f7b41\" (UID: \"538593b3-ec2b-4d6e-9f10-3e7add4f7b41\") " Mar 19 12:36:07.987424 master-0 kubenswrapper[31830]: I0319 12:36:07.987139 31830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14936556-fa0b-48fb-91e5-0ca806871a6c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:07.987424 master-0 kubenswrapper[31830]: I0319 12:36:07.987158 31830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/14936556-fa0b-48fb-91e5-0ca806871a6c-config\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:07.987424 master-0 kubenswrapper[31830]: I0319 12:36:07.987153 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "538593b3-ec2b-4d6e-9f10-3e7add4f7b41" (UID: "538593b3-ec2b-4d6e-9f10-3e7add4f7b41"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:36:07.987424 master-0 kubenswrapper[31830]: I0319 12:36:07.987169 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd6hc\" (UniqueName: \"kubernetes.io/projected/14936556-fa0b-48fb-91e5-0ca806871a6c-kube-api-access-vd6hc\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:07.990454 master-0 kubenswrapper[31830]: I0319 12:36:07.990400 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-scripts" (OuterVolumeSpecName: "scripts") pod "538593b3-ec2b-4d6e-9f10-3e7add4f7b41" (UID: "538593b3-ec2b-4d6e-9f10-3e7add4f7b41"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:36:07.990641 master-0 kubenswrapper[31830]: I0319 12:36:07.990608 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "538593b3-ec2b-4d6e-9f10-3e7add4f7b41" (UID: "538593b3-ec2b-4d6e-9f10-3e7add4f7b41"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:36:07.990695 master-0 kubenswrapper[31830]: I0319 12:36:07.990652 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-kube-api-access-4gmpq" (OuterVolumeSpecName: "kube-api-access-4gmpq") pod "538593b3-ec2b-4d6e-9f10-3e7add4f7b41" (UID: "538593b3-ec2b-4d6e-9f10-3e7add4f7b41"). InnerVolumeSpecName "kube-api-access-4gmpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:36:08.048933 master-0 kubenswrapper[31830]: I0319 12:36:08.032847 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "538593b3-ec2b-4d6e-9f10-3e7add4f7b41" (UID: "538593b3-ec2b-4d6e-9f10-3e7add4f7b41"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:36:08.063206 master-0 kubenswrapper[31830]: I0319 12:36:08.063056 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-config-data" (OuterVolumeSpecName: "config-data") pod "538593b3-ec2b-4d6e-9f10-3e7add4f7b41" (UID: "538593b3-ec2b-4d6e-9f10-3e7add4f7b41"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:36:08.093937 master-0 kubenswrapper[31830]: I0319 12:36:08.091421 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afeb235b-1d56-46d5-9d18-dbbb5f50e141-scripts\") pod \"afeb235b-1d56-46d5-9d18-dbbb5f50e141\" (UID: \"afeb235b-1d56-46d5-9d18-dbbb5f50e141\") " Mar 19 12:36:08.093937 master-0 kubenswrapper[31830]: I0319 12:36:08.091554 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjgxm\" (UniqueName: \"kubernetes.io/projected/afeb235b-1d56-46d5-9d18-dbbb5f50e141-kube-api-access-hjgxm\") pod \"afeb235b-1d56-46d5-9d18-dbbb5f50e141\" (UID: \"afeb235b-1d56-46d5-9d18-dbbb5f50e141\") " Mar 19 12:36:08.093937 master-0 kubenswrapper[31830]: I0319 12:36:08.091608 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afeb235b-1d56-46d5-9d18-dbbb5f50e141-config-data\") pod \"afeb235b-1d56-46d5-9d18-dbbb5f50e141\" (UID: \"afeb235b-1d56-46d5-9d18-dbbb5f50e141\") " Mar 19 12:36:08.093937 master-0 kubenswrapper[31830]: I0319 12:36:08.091640 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afeb235b-1d56-46d5-9d18-dbbb5f50e141-combined-ca-bundle\") pod \"afeb235b-1d56-46d5-9d18-dbbb5f50e141\" (UID: \"afeb235b-1d56-46d5-9d18-dbbb5f50e141\") " Mar 19 12:36:08.093937 master-0 kubenswrapper[31830]: I0319 12:36:08.091675 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/afeb235b-1d56-46d5-9d18-dbbb5f50e141-credential-keys\") pod \"afeb235b-1d56-46d5-9d18-dbbb5f50e141\" (UID: \"afeb235b-1d56-46d5-9d18-dbbb5f50e141\") " Mar 19 12:36:08.093937 master-0 kubenswrapper[31830]: I0319 12:36:08.091850 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/afeb235b-1d56-46d5-9d18-dbbb5f50e141-fernet-keys\") pod \"afeb235b-1d56-46d5-9d18-dbbb5f50e141\" (UID: \"afeb235b-1d56-46d5-9d18-dbbb5f50e141\") " Mar 19 12:36:08.125408 master-0 kubenswrapper[31830]: I0319 12:36:08.102640 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afeb235b-1d56-46d5-9d18-dbbb5f50e141-kube-api-access-hjgxm" (OuterVolumeSpecName: "kube-api-access-hjgxm") pod "afeb235b-1d56-46d5-9d18-dbbb5f50e141" (UID: "afeb235b-1d56-46d5-9d18-dbbb5f50e141"). InnerVolumeSpecName "kube-api-access-hjgxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:36:08.125408 master-0 kubenswrapper[31830]: I0319 12:36:08.104044 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afeb235b-1d56-46d5-9d18-dbbb5f50e141-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "afeb235b-1d56-46d5-9d18-dbbb5f50e141" (UID: "afeb235b-1d56-46d5-9d18-dbbb5f50e141"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:36:08.125408 master-0 kubenswrapper[31830]: I0319 12:36:08.111622 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afeb235b-1d56-46d5-9d18-dbbb5f50e141-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "afeb235b-1d56-46d5-9d18-dbbb5f50e141" (UID: "afeb235b-1d56-46d5-9d18-dbbb5f50e141"). 
InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:36:08.125408 master-0 kubenswrapper[31830]: I0319 12:36:08.112585 31830 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:08.125408 master-0 kubenswrapper[31830]: I0319 12:36:08.112625 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gmpq\" (UniqueName: \"kubernetes.io/projected/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-kube-api-access-4gmpq\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:08.125408 master-0 kubenswrapper[31830]: I0319 12:36:08.112643 31830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:08.125408 master-0 kubenswrapper[31830]: I0319 12:36:08.112664 31830 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-db-sync-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:08.125408 master-0 kubenswrapper[31830]: I0319 12:36:08.112679 31830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:08.125408 master-0 kubenswrapper[31830]: I0319 12:36:08.112691 31830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/538593b3-ec2b-4d6e-9f10-3e7add4f7b41-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:08.148887 master-0 kubenswrapper[31830]: I0319 12:36:08.127184 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afeb235b-1d56-46d5-9d18-dbbb5f50e141-scripts" (OuterVolumeSpecName: "scripts") pod "afeb235b-1d56-46d5-9d18-dbbb5f50e141" (UID: "afeb235b-1d56-46d5-9d18-dbbb5f50e141"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:36:08.165107 master-0 kubenswrapper[31830]: I0319 12:36:08.165036 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afeb235b-1d56-46d5-9d18-dbbb5f50e141-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "afeb235b-1d56-46d5-9d18-dbbb5f50e141" (UID: "afeb235b-1d56-46d5-9d18-dbbb5f50e141"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:36:08.175954 master-0 kubenswrapper[31830]: I0319 12:36:08.175901 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afeb235b-1d56-46d5-9d18-dbbb5f50e141-config-data" (OuterVolumeSpecName: "config-data") pod "afeb235b-1d56-46d5-9d18-dbbb5f50e141" (UID: "afeb235b-1d56-46d5-9d18-dbbb5f50e141"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:36:08.216007 master-0 kubenswrapper[31830]: I0319 12:36:08.214683 31830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afeb235b-1d56-46d5-9d18-dbbb5f50e141-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:08.216007 master-0 kubenswrapper[31830]: I0319 12:36:08.214735 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjgxm\" (UniqueName: \"kubernetes.io/projected/afeb235b-1d56-46d5-9d18-dbbb5f50e141-kube-api-access-hjgxm\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:08.216007 master-0 kubenswrapper[31830]: I0319 12:36:08.214752 31830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afeb235b-1d56-46d5-9d18-dbbb5f50e141-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:08.216007 master-0 kubenswrapper[31830]: I0319 12:36:08.214764 31830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afeb235b-1d56-46d5-9d18-dbbb5f50e141-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:08.216007 master-0 kubenswrapper[31830]: I0319 12:36:08.214776 31830 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/afeb235b-1d56-46d5-9d18-dbbb5f50e141-credential-keys\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:08.216007 master-0 kubenswrapper[31830]: I0319 12:36:08.214788 31830 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/afeb235b-1d56-46d5-9d18-dbbb5f50e141-fernet-keys\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:08.238892 master-0 kubenswrapper[31830]: I0319 12:36:08.238383 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zl2rr" event={"ID":"14936556-fa0b-48fb-91e5-0ca806871a6c","Type":"ContainerDied","Data":"ce14fef57edc05c0be17f2e05287e048a72b687d1de36ea9b5b8c2cb1b3a2f80"} Mar 19 12:36:08.238892 master-0 kubenswrapper[31830]: I0319 12:36:08.238539 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce14fef57edc05c0be17f2e05287e048a72b687d1de36ea9b5b8c2cb1b3a2f80" Mar 19 12:36:08.238892 master-0 kubenswrapper[31830]: I0319 12:36:08.238706 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-zl2rr" Mar 19 12:36:08.251831 master-0 kubenswrapper[31830]: I0319 12:36:08.247342 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-db-sync-n5228" event={"ID":"538593b3-ec2b-4d6e-9f10-3e7add4f7b41","Type":"ContainerDied","Data":"9ec2cbb39f90eeb811a4d3ba067f87086eed7f624be729dbc891d8c3e491d37a"} Mar 19 12:36:08.251831 master-0 kubenswrapper[31830]: I0319 12:36:08.247411 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ec2cbb39f90eeb811a4d3ba067f87086eed7f624be729dbc891d8c3e491d37a" Mar 19 12:36:08.251831 master-0 kubenswrapper[31830]: I0319 12:36:08.247532 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-cce1e-db-sync-n5228" Mar 19 12:36:08.261692 master-0 kubenswrapper[31830]: I0319 12:36:08.261626 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-v2lnb" event={"ID":"afeb235b-1d56-46d5-9d18-dbbb5f50e141","Type":"ContainerDied","Data":"16b5cb6bafdf45fd395e249666e026529a723a946087c4b0f8d60779c3031229"} Mar 19 12:36:08.261692 master-0 kubenswrapper[31830]: I0319 12:36:08.261678 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16b5cb6bafdf45fd395e249666e026529a723a946087c4b0f8d60779c3031229" Mar 19 12:36:08.261999 master-0 kubenswrapper[31830]: I0319 12:36:08.261772 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-v2lnb" Mar 19 12:36:08.277365 master-0 kubenswrapper[31830]: I0319 12:36:08.277291 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f878994d6-brrf9" event={"ID":"8ab4af90-2e9a-489c-b2bf-08579f4c3335","Type":"ContainerStarted","Data":"73e07780bcc782edd93533f0824cda16b30d7283ee7559e869062923116f5506"} Mar 19 12:36:08.277365 master-0 kubenswrapper[31830]: I0319 12:36:08.277346 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f878994d6-brrf9" event={"ID":"8ab4af90-2e9a-489c-b2bf-08579f4c3335","Type":"ContainerStarted","Data":"7622d7453d22bc08ff3e47847b95f0d99de8b75a0bc629a881f7e1e47fbc5127"} Mar 19 12:36:08.277365 master-0 kubenswrapper[31830]: I0319 12:36:08.277358 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f878994d6-brrf9" event={"ID":"8ab4af90-2e9a-489c-b2bf-08579f4c3335","Type":"ContainerStarted","Data":"e2f9b3db66ff5545df1ea69249bfbd71db3e4a7246c420207d4558b0ddea3b55"} Mar 19 12:36:08.277712 master-0 kubenswrapper[31830]: I0319 12:36:08.277473 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5f878994d6-brrf9" Mar 19 12:36:08.401397 master-0 kubenswrapper[31830]: I0319 12:36:08.401308 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5f878994d6-brrf9" podStartSLOduration=6.401287331 podStartE2EDuration="6.401287331s" podCreationTimestamp="2026-03-19 12:36:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:36:08.397901276 +0000 UTC m=+1306.946862000" watchObservedRunningTime="2026-03-19 12:36:08.401287331 +0000 UTC m=+1306.950248035" Mar 19 12:36:08.634851 master-0 kubenswrapper[31830]: I0319 12:36:08.624651 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-569d794d4c-pmgr5"] Mar 19 12:36:08.675027 master-0 kubenswrapper[31830]: E0319 12:36:08.651647 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afeb235b-1d56-46d5-9d18-dbbb5f50e141" containerName="keystone-bootstrap" Mar 19 12:36:08.675027 master-0 kubenswrapper[31830]: I0319 12:36:08.651701 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="afeb235b-1d56-46d5-9d18-dbbb5f50e141" containerName="keystone-bootstrap" Mar 19 12:36:08.675027 master-0 kubenswrapper[31830]: E0319 12:36:08.651729 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14936556-fa0b-48fb-91e5-0ca806871a6c" containerName="neutron-db-sync" Mar 19 12:36:08.675027 master-0 kubenswrapper[31830]: I0319 12:36:08.651736 31830 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="14936556-fa0b-48fb-91e5-0ca806871a6c" containerName="neutron-db-sync" Mar 19 12:36:08.675027 master-0 kubenswrapper[31830]: E0319 12:36:08.651758 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="538593b3-ec2b-4d6e-9f10-3e7add4f7b41" containerName="cinder-cce1e-db-sync" Mar 19 12:36:08.675027 master-0 kubenswrapper[31830]: I0319 12:36:08.651767 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="538593b3-ec2b-4d6e-9f10-3e7add4f7b41" containerName="cinder-cce1e-db-sync" Mar 19 12:36:08.675027 master-0 kubenswrapper[31830]: I0319 12:36:08.652086 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="14936556-fa0b-48fb-91e5-0ca806871a6c" containerName="neutron-db-sync" Mar 19 12:36:08.675027 master-0 kubenswrapper[31830]: I0319 12:36:08.652105 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="afeb235b-1d56-46d5-9d18-dbbb5f50e141" containerName="keystone-bootstrap" Mar 19 12:36:08.675027 master-0 kubenswrapper[31830]: I0319 12:36:08.652133 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="538593b3-ec2b-4d6e-9f10-3e7add4f7b41" containerName="cinder-cce1e-db-sync" Mar 19 12:36:08.675027 master-0 kubenswrapper[31830]: I0319 12:36:08.654628 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-569d794d4c-pmgr5"] Mar 19 12:36:08.675027 master-0 kubenswrapper[31830]: I0319 12:36:08.654720 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-569d794d4c-pmgr5" Mar 19 12:36:08.701196 master-0 kubenswrapper[31830]: I0319 12:36:08.689439 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 19 12:36:08.701196 master-0 kubenswrapper[31830]: I0319 12:36:08.690075 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Mar 19 12:36:08.701196 master-0 kubenswrapper[31830]: I0319 12:36:08.690189 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Mar 19 12:36:08.701196 master-0 kubenswrapper[31830]: I0319 12:36:08.690286 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 19 12:36:08.701196 master-0 kubenswrapper[31830]: I0319 12:36:08.690457 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 19 12:36:08.766859 master-0 kubenswrapper[31830]: I0319 12:36:08.760430 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5dc8c48879-kkx7f"] Mar 19 12:36:08.766859 master-0 kubenswrapper[31830]: I0319 12:36:08.762782 31830 util.go:30] "No sandbox for pod can be found. 
Mar 19 12:36:08.786855 master-0 kubenswrapper[31830]: I0319 12:36:08.771157 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39eba887-ef1e-47a9-b6cf-6d445d0ae88b-public-tls-certs\") pod \"keystone-569d794d4c-pmgr5\" (UID: \"39eba887-ef1e-47a9-b6cf-6d445d0ae88b\") " pod="openstack/keystone-569d794d4c-pmgr5"
Mar 19 12:36:08.786855 master-0 kubenswrapper[31830]: I0319 12:36:08.771259 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39eba887-ef1e-47a9-b6cf-6d445d0ae88b-config-data\") pod \"keystone-569d794d4c-pmgr5\" (UID: \"39eba887-ef1e-47a9-b6cf-6d445d0ae88b\") " pod="openstack/keystone-569d794d4c-pmgr5"
Mar 19 12:36:08.786855 master-0 kubenswrapper[31830]: I0319 12:36:08.778166 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39eba887-ef1e-47a9-b6cf-6d445d0ae88b-scripts\") pod \"keystone-569d794d4c-pmgr5\" (UID: \"39eba887-ef1e-47a9-b6cf-6d445d0ae88b\") " pod="openstack/keystone-569d794d4c-pmgr5"
Mar 19 12:36:08.786855 master-0 kubenswrapper[31830]: I0319 12:36:08.778357 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39eba887-ef1e-47a9-b6cf-6d445d0ae88b-combined-ca-bundle\") pod \"keystone-569d794d4c-pmgr5\" (UID: \"39eba887-ef1e-47a9-b6cf-6d445d0ae88b\") " pod="openstack/keystone-569d794d4c-pmgr5"
Mar 19 12:36:08.786855 master-0 kubenswrapper[31830]: I0319 12:36:08.778407 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/39eba887-ef1e-47a9-b6cf-6d445d0ae88b-credential-keys\") pod \"keystone-569d794d4c-pmgr5\" (UID: \"39eba887-ef1e-47a9-b6cf-6d445d0ae88b\") " pod="openstack/keystone-569d794d4c-pmgr5"
Mar 19 12:36:08.786855 master-0 kubenswrapper[31830]: I0319 12:36:08.778474 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39eba887-ef1e-47a9-b6cf-6d445d0ae88b-fernet-keys\") pod \"keystone-569d794d4c-pmgr5\" (UID: \"39eba887-ef1e-47a9-b6cf-6d445d0ae88b\") " pod="openstack/keystone-569d794d4c-pmgr5"
Mar 19 12:36:08.786855 master-0 kubenswrapper[31830]: I0319 12:36:08.778657 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p28jk\" (UniqueName: \"kubernetes.io/projected/39eba887-ef1e-47a9-b6cf-6d445d0ae88b-kube-api-access-p28jk\") pod \"keystone-569d794d4c-pmgr5\" (UID: \"39eba887-ef1e-47a9-b6cf-6d445d0ae88b\") " pod="openstack/keystone-569d794d4c-pmgr5"
Mar 19 12:36:08.786855 master-0 kubenswrapper[31830]: I0319 12:36:08.778686 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39eba887-ef1e-47a9-b6cf-6d445d0ae88b-internal-tls-certs\") pod \"keystone-569d794d4c-pmgr5\" (UID: \"39eba887-ef1e-47a9-b6cf-6d445d0ae88b\") " pod="openstack/keystone-569d794d4c-pmgr5"
Mar 19 12:36:08.871742 master-0 kubenswrapper[31830]: I0319 12:36:08.869309 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc8c48879-kkx7f"]
Mar 19 12:36:08.888445 master-0 kubenswrapper[31830]: I0319 12:36:08.885040 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39eba887-ef1e-47a9-b6cf-6d445d0ae88b-scripts\") pod \"keystone-569d794d4c-pmgr5\" (UID: \"39eba887-ef1e-47a9-b6cf-6d445d0ae88b\") " pod="openstack/keystone-569d794d4c-pmgr5"
Mar 19 12:36:08.888445 master-0 kubenswrapper[31830]: I0319 12:36:08.885170 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-config\") pod \"dnsmasq-dns-5dc8c48879-kkx7f\" (UID: \"98e9b667-3127-485d-8970-4debf1ca6259\") " pod="openstack/dnsmasq-dns-5dc8c48879-kkx7f"
Mar 19 12:36:08.888445 master-0 kubenswrapper[31830]: I0319 12:36:08.885200 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-edpm-b\") pod \"dnsmasq-dns-5dc8c48879-kkx7f\" (UID: \"98e9b667-3127-485d-8970-4debf1ca6259\") " pod="openstack/dnsmasq-dns-5dc8c48879-kkx7f"
Mar 19 12:36:08.888445 master-0 kubenswrapper[31830]: I0319 12:36:08.885228 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39eba887-ef1e-47a9-b6cf-6d445d0ae88b-combined-ca-bundle\") pod \"keystone-569d794d4c-pmgr5\" (UID: \"39eba887-ef1e-47a9-b6cf-6d445d0ae88b\") " pod="openstack/keystone-569d794d4c-pmgr5"
Mar 19 12:36:08.888445 master-0 kubenswrapper[31830]: I0319 12:36:08.885257 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/39eba887-ef1e-47a9-b6cf-6d445d0ae88b-credential-keys\") pod \"keystone-569d794d4c-pmgr5\" (UID: \"39eba887-ef1e-47a9-b6cf-6d445d0ae88b\") " pod="openstack/keystone-569d794d4c-pmgr5"
Mar 19 12:36:08.888445 master-0 kubenswrapper[31830]: I0319 12:36:08.885281 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39eba887-ef1e-47a9-b6cf-6d445d0ae88b-fernet-keys\") pod \"keystone-569d794d4c-pmgr5\" (UID: \"39eba887-ef1e-47a9-b6cf-6d445d0ae88b\") " pod="openstack/keystone-569d794d4c-pmgr5"
Mar 19 12:36:08.888445 master-0 kubenswrapper[31830]: I0319 12:36:08.885329 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc8c48879-kkx7f\" (UID: \"98e9b667-3127-485d-8970-4debf1ca6259\") " pod="openstack/dnsmasq-dns-5dc8c48879-kkx7f"
Mar 19 12:36:08.888445 master-0 kubenswrapper[31830]: I0319 12:36:08.885354 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc8c48879-kkx7f\" (UID: \"98e9b667-3127-485d-8970-4debf1ca6259\") " pod="openstack/dnsmasq-dns-5dc8c48879-kkx7f"
Mar 19 12:36:08.888445 master-0 kubenswrapper[31830]: I0319 12:36:08.885373 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p28jk\" (UniqueName: \"kubernetes.io/projected/39eba887-ef1e-47a9-b6cf-6d445d0ae88b-kube-api-access-p28jk\") pod \"keystone-569d794d4c-pmgr5\" (UID: \"39eba887-ef1e-47a9-b6cf-6d445d0ae88b\") " pod="openstack/keystone-569d794d4c-pmgr5"
Mar 19 12:36:08.888445 master-0 kubenswrapper[31830]: I0319 12:36:08.885391 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39eba887-ef1e-47a9-b6cf-6d445d0ae88b-internal-tls-certs\") pod \"keystone-569d794d4c-pmgr5\" (UID: \"39eba887-ef1e-47a9-b6cf-6d445d0ae88b\") " pod="openstack/keystone-569d794d4c-pmgr5"
Mar 19 12:36:08.888445 master-0 kubenswrapper[31830]: I0319 12:36:08.885415 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px96q\" (UniqueName: \"kubernetes.io/projected/98e9b667-3127-485d-8970-4debf1ca6259-kube-api-access-px96q\") pod \"dnsmasq-dns-5dc8c48879-kkx7f\" (UID: \"98e9b667-3127-485d-8970-4debf1ca6259\") " pod="openstack/dnsmasq-dns-5dc8c48879-kkx7f"
Mar 19 12:36:08.888445 master-0 kubenswrapper[31830]: I0319 12:36:08.885451 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-edpm-a\") pod \"dnsmasq-dns-5dc8c48879-kkx7f\" (UID: \"98e9b667-3127-485d-8970-4debf1ca6259\") " pod="openstack/dnsmasq-dns-5dc8c48879-kkx7f"
Mar 19 12:36:08.888445 master-0 kubenswrapper[31830]: I0319 12:36:08.885475 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc8c48879-kkx7f\" (UID: \"98e9b667-3127-485d-8970-4debf1ca6259\") " pod="openstack/dnsmasq-dns-5dc8c48879-kkx7f"
Mar 19 12:36:08.888445 master-0 kubenswrapper[31830]: I0319 12:36:08.885499 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-dns-svc\") pod \"dnsmasq-dns-5dc8c48879-kkx7f\" (UID: \"98e9b667-3127-485d-8970-4debf1ca6259\") " pod="openstack/dnsmasq-dns-5dc8c48879-kkx7f"
Mar 19 12:36:08.888445 master-0 kubenswrapper[31830]: I0319 12:36:08.885521 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39eba887-ef1e-47a9-b6cf-6d445d0ae88b-public-tls-certs\") pod \"keystone-569d794d4c-pmgr5\" (UID: \"39eba887-ef1e-47a9-b6cf-6d445d0ae88b\") " pod="openstack/keystone-569d794d4c-pmgr5"
Mar 19 12:36:08.888445 master-0 kubenswrapper[31830]: I0319 12:36:08.885560 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39eba887-ef1e-47a9-b6cf-6d445d0ae88b-config-data\") pod \"keystone-569d794d4c-pmgr5\" (UID: \"39eba887-ef1e-47a9-b6cf-6d445d0ae88b\") " pod="openstack/keystone-569d794d4c-pmgr5"
Mar 19 12:36:08.908959 master-0 kubenswrapper[31830]: I0319 12:36:08.898659 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39eba887-ef1e-47a9-b6cf-6d445d0ae88b-config-data\") pod \"keystone-569d794d4c-pmgr5\" (UID: \"39eba887-ef1e-47a9-b6cf-6d445d0ae88b\") " pod="openstack/keystone-569d794d4c-pmgr5"
Mar 19 12:36:08.908959 master-0 kubenswrapper[31830]: I0319 12:36:08.902820 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39eba887-ef1e-47a9-b6cf-6d445d0ae88b-scripts\") pod \"keystone-569d794d4c-pmgr5\" (UID: \"39eba887-ef1e-47a9-b6cf-6d445d0ae88b\") " pod="openstack/keystone-569d794d4c-pmgr5"
Mar 19 12:36:08.917846 master-0 kubenswrapper[31830]: I0319 12:36:08.909775 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39eba887-ef1e-47a9-b6cf-6d445d0ae88b-internal-tls-certs\") pod \"keystone-569d794d4c-pmgr5\" (UID: \"39eba887-ef1e-47a9-b6cf-6d445d0ae88b\") " pod="openstack/keystone-569d794d4c-pmgr5"
Mar 19 12:36:08.917846 master-0 kubenswrapper[31830]: I0319 12:36:08.909907 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39eba887-ef1e-47a9-b6cf-6d445d0ae88b-combined-ca-bundle\") pod \"keystone-569d794d4c-pmgr5\" (UID: \"39eba887-ef1e-47a9-b6cf-6d445d0ae88b\") " pod="openstack/keystone-569d794d4c-pmgr5"
Mar 19 12:36:08.917846 master-0 kubenswrapper[31830]: I0319 12:36:08.916098 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39eba887-ef1e-47a9-b6cf-6d445d0ae88b-public-tls-certs\") pod \"keystone-569d794d4c-pmgr5\" (UID: \"39eba887-ef1e-47a9-b6cf-6d445d0ae88b\") " pod="openstack/keystone-569d794d4c-pmgr5"
Mar 19 12:36:08.917846 master-0 kubenswrapper[31830]: I0319 12:36:08.916575 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/39eba887-ef1e-47a9-b6cf-6d445d0ae88b-credential-keys\") pod \"keystone-569d794d4c-pmgr5\" (UID: \"39eba887-ef1e-47a9-b6cf-6d445d0ae88b\") " pod="openstack/keystone-569d794d4c-pmgr5"
Mar 19 12:36:08.933860 master-0 kubenswrapper[31830]: I0319 12:36:08.920539 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39eba887-ef1e-47a9-b6cf-6d445d0ae88b-fernet-keys\") pod \"keystone-569d794d4c-pmgr5\" (UID: \"39eba887-ef1e-47a9-b6cf-6d445d0ae88b\") " pod="openstack/keystone-569d794d4c-pmgr5"
Mar 19 12:36:08.937924 master-0 kubenswrapper[31830]: I0319 12:36:08.935303 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-cce1e-scheduler-0"]
Mar 19 12:36:08.968868 master-0 kubenswrapper[31830]: I0319 12:36:08.958178 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cce1e-scheduler-0"
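Each volume above moves through three logged stages: "operationExecutor.VerifyControllerAttachedVolume started" (the reconciler confirms the volume is available to the node), "operationExecutor.MountVolume started", and "MountVolume.SetUp succeeded"; entries for keystone-569d794d4c-pmgr5 and dnsmasq-dns-5dc8c48879-kkx7f interleave because one reconciler pass works through both pods' desired state. When triaging a node it helps to reduce this chatter to the last phase seen per volume; an illustrative filter, assuming journal text on stdin with the escaped quoting shown above:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var phases = []struct {
	name string
	re   *regexp.Regexp
}{
	// Order matters: a later phase overwrites an earlier one for the same volume.
	{"verify-attached", regexp.MustCompile(`VerifyControllerAttachedVolume started for volume \\"([^"\\]+)\\"`)},
	{"mount-started", regexp.MustCompile(`MountVolume started for volume \\"([^"\\]+)\\"`)},
	{"setup-succeeded", regexp.MustCompile(`MountVolume\.SetUp succeeded for volume \\"([^"\\]+)\\"`)},
}

func main() {
	last := map[string]string{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
	for sc.Scan() {
		for _, p := range phases {
			if m := p.re.FindStringSubmatch(sc.Text()); m != nil {
				last[m[1]] = p.name // volumes stuck before "setup-succeeded" stand out
			}
		}
	}
	for vol, phase := range last {
		fmt.Printf("%-25s %s\n", vol, phase)
	}
}
```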
Mar 19 12:36:08.968868 master-0 kubenswrapper[31830]: I0319 12:36:08.967606 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cce1e-scripts"
Mar 19 12:36:08.968868 master-0 kubenswrapper[31830]: I0319 12:36:08.967895 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cce1e-scheduler-config-data"
Mar 19 12:36:08.989865 master-0 kubenswrapper[31830]: I0319 12:36:08.972106 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cce1e-config-data"
Mar 19 12:36:08.989865 master-0 kubenswrapper[31830]: I0319 12:36:08.987382 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-config\") pod \"dnsmasq-dns-5dc8c48879-kkx7f\" (UID: \"98e9b667-3127-485d-8970-4debf1ca6259\") " pod="openstack/dnsmasq-dns-5dc8c48879-kkx7f"
Mar 19 12:36:08.989865 master-0 kubenswrapper[31830]: I0319 12:36:08.987462 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-edpm-b\") pod \"dnsmasq-dns-5dc8c48879-kkx7f\" (UID: \"98e9b667-3127-485d-8970-4debf1ca6259\") " pod="openstack/dnsmasq-dns-5dc8c48879-kkx7f"
Mar 19 12:36:08.989865 master-0 kubenswrapper[31830]: I0319 12:36:08.987559 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc8c48879-kkx7f\" (UID: \"98e9b667-3127-485d-8970-4debf1ca6259\") " pod="openstack/dnsmasq-dns-5dc8c48879-kkx7f"
Mar 19 12:36:08.989865 master-0 kubenswrapper[31830]: I0319 12:36:08.987592 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc8c48879-kkx7f\" (UID: \"98e9b667-3127-485d-8970-4debf1ca6259\") " pod="openstack/dnsmasq-dns-5dc8c48879-kkx7f"
Mar 19 12:36:08.989865 master-0 kubenswrapper[31830]: I0319 12:36:08.987637 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px96q\" (UniqueName: \"kubernetes.io/projected/98e9b667-3127-485d-8970-4debf1ca6259-kube-api-access-px96q\") pod \"dnsmasq-dns-5dc8c48879-kkx7f\" (UID: \"98e9b667-3127-485d-8970-4debf1ca6259\") " pod="openstack/dnsmasq-dns-5dc8c48879-kkx7f"
Mar 19 12:36:08.989865 master-0 kubenswrapper[31830]: I0319 12:36:08.987693 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-edpm-a\") pod \"dnsmasq-dns-5dc8c48879-kkx7f\" (UID: \"98e9b667-3127-485d-8970-4debf1ca6259\") " pod="openstack/dnsmasq-dns-5dc8c48879-kkx7f"
Mar 19 12:36:08.989865 master-0 kubenswrapper[31830]: I0319 12:36:08.987725 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc8c48879-kkx7f\" (UID: \"98e9b667-3127-485d-8970-4debf1ca6259\") " pod="openstack/dnsmasq-dns-5dc8c48879-kkx7f"
Mar 19 12:36:08.989865 master-0 kubenswrapper[31830]: I0319 12:36:08.987753 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-dns-svc\") pod \"dnsmasq-dns-5dc8c48879-kkx7f\" (UID: \"98e9b667-3127-485d-8970-4debf1ca6259\") " pod="openstack/dnsmasq-dns-5dc8c48879-kkx7f"
Mar 19 12:36:08.989865 master-0 kubenswrapper[31830]: I0319 12:36:08.988558 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p28jk\" (UniqueName: \"kubernetes.io/projected/39eba887-ef1e-47a9-b6cf-6d445d0ae88b-kube-api-access-p28jk\") pod \"keystone-569d794d4c-pmgr5\" (UID: \"39eba887-ef1e-47a9-b6cf-6d445d0ae88b\") " pod="openstack/keystone-569d794d4c-pmgr5"
Mar 19 12:36:08.989865 master-0 kubenswrapper[31830]: I0319 12:36:08.988606 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc8c48879-kkx7f\" (UID: \"98e9b667-3127-485d-8970-4debf1ca6259\") " pod="openstack/dnsmasq-dns-5dc8c48879-kkx7f"
Mar 19 12:36:09.001843 master-0 kubenswrapper[31830]: I0319 12:36:08.995874 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-edpm-b\") pod \"dnsmasq-dns-5dc8c48879-kkx7f\" (UID: \"98e9b667-3127-485d-8970-4debf1ca6259\") " pod="openstack/dnsmasq-dns-5dc8c48879-kkx7f"
Mar 19 12:36:09.001843 master-0 kubenswrapper[31830]: I0319 12:36:08.996537 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-config\") pod \"dnsmasq-dns-5dc8c48879-kkx7f\" (UID: \"98e9b667-3127-485d-8970-4debf1ca6259\") " pod="openstack/dnsmasq-dns-5dc8c48879-kkx7f"
Mar 19 12:36:09.001843 master-0 kubenswrapper[31830]: I0319 12:36:08.997184 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc8c48879-kkx7f\" (UID: \"98e9b667-3127-485d-8970-4debf1ca6259\") " pod="openstack/dnsmasq-dns-5dc8c48879-kkx7f"
Mar 19 12:36:09.001843 master-0 kubenswrapper[31830]: I0319 12:36:08.997911 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-edpm-a\") pod \"dnsmasq-dns-5dc8c48879-kkx7f\" (UID: \"98e9b667-3127-485d-8970-4debf1ca6259\") " pod="openstack/dnsmasq-dns-5dc8c48879-kkx7f"
Mar 19 12:36:09.001843 master-0 kubenswrapper[31830]: I0319 12:36:08.998886 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc8c48879-kkx7f\" (UID: \"98e9b667-3127-485d-8970-4debf1ca6259\") " pod="openstack/dnsmasq-dns-5dc8c48879-kkx7f"
Mar 19 12:36:09.001843 master-0 kubenswrapper[31830]: I0319 12:36:08.999558 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-dns-svc\") pod \"dnsmasq-dns-5dc8c48879-kkx7f\" (UID: \"98e9b667-3127-485d-8970-4debf1ca6259\") " pod="openstack/dnsmasq-dns-5dc8c48879-kkx7f"
Mar 19 12:36:09.008892 master-0 kubenswrapper[31830]: I0319 12:36:09.003556 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cce1e-scheduler-0"]
Mar 19 12:36:09.128762 master-0 kubenswrapper[31830]: I0319 12:36:09.042688 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-569d794d4c-pmgr5"
Mar 19 12:36:09.128762 master-0 kubenswrapper[31830]: I0319 12:36:09.057712 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px96q\" (UniqueName: \"kubernetes.io/projected/98e9b667-3127-485d-8970-4debf1ca6259-kube-api-access-px96q\") pod \"dnsmasq-dns-5dc8c48879-kkx7f\" (UID: \"98e9b667-3127-485d-8970-4debf1ca6259\") " pod="openstack/dnsmasq-dns-5dc8c48879-kkx7f"
Mar 19 12:36:09.128762 master-0 kubenswrapper[31830]: I0319 12:36:09.106214 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/569ec673-0799-4639-80f6-44155889d03c-config-data\") pod \"cinder-cce1e-scheduler-0\" (UID: \"569ec673-0799-4639-80f6-44155889d03c\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:09.128762 master-0 kubenswrapper[31830]: I0319 12:36:09.106458 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/569ec673-0799-4639-80f6-44155889d03c-combined-ca-bundle\") pod \"cinder-cce1e-scheduler-0\" (UID: \"569ec673-0799-4639-80f6-44155889d03c\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:09.128762 master-0 kubenswrapper[31830]: I0319 12:36:09.106523 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/569ec673-0799-4639-80f6-44155889d03c-etc-machine-id\") pod \"cinder-cce1e-scheduler-0\" (UID: \"569ec673-0799-4639-80f6-44155889d03c\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:09.128762 master-0 kubenswrapper[31830]: I0319 12:36:09.106566 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrqxt\" (UniqueName: \"kubernetes.io/projected/569ec673-0799-4639-80f6-44155889d03c-kube-api-access-qrqxt\") pod \"cinder-cce1e-scheduler-0\" (UID: \"569ec673-0799-4639-80f6-44155889d03c\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:09.128762 master-0 kubenswrapper[31830]: I0319 12:36:09.106595 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/569ec673-0799-4639-80f6-44155889d03c-config-data-custom\") pod \"cinder-cce1e-scheduler-0\" (UID: \"569ec673-0799-4639-80f6-44155889d03c\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:09.128762 master-0 kubenswrapper[31830]: I0319 12:36:09.106623 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/569ec673-0799-4639-80f6-44155889d03c-scripts\") pod \"cinder-cce1e-scheduler-0\" (UID: \"569ec673-0799-4639-80f6-44155889d03c\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:09.128762 master-0 kubenswrapper[31830]: I0319 12:36:09.121330 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc8c48879-kkx7f"
Mar 19 12:36:09.203823 master-0 kubenswrapper[31830]: I0319 12:36:09.201591 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-cce1e-backup-0"]
Mar 19 12:36:09.211826 master-0 kubenswrapper[31830]: I0319 12:36:09.208236 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/569ec673-0799-4639-80f6-44155889d03c-etc-machine-id\") pod \"cinder-cce1e-scheduler-0\" (UID: \"569ec673-0799-4639-80f6-44155889d03c\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:09.211826 master-0 kubenswrapper[31830]: I0319 12:36:09.208298 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrqxt\" (UniqueName: \"kubernetes.io/projected/569ec673-0799-4639-80f6-44155889d03c-kube-api-access-qrqxt\") pod \"cinder-cce1e-scheduler-0\" (UID: \"569ec673-0799-4639-80f6-44155889d03c\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:09.211826 master-0 kubenswrapper[31830]: I0319 12:36:09.208321 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/569ec673-0799-4639-80f6-44155889d03c-config-data-custom\") pod \"cinder-cce1e-scheduler-0\" (UID: \"569ec673-0799-4639-80f6-44155889d03c\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:09.211826 master-0 kubenswrapper[31830]: I0319 12:36:09.208341 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/569ec673-0799-4639-80f6-44155889d03c-scripts\") pod \"cinder-cce1e-scheduler-0\" (UID: \"569ec673-0799-4639-80f6-44155889d03c\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:09.211826 master-0 kubenswrapper[31830]: I0319 12:36:09.208396 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/569ec673-0799-4639-80f6-44155889d03c-config-data\") pod \"cinder-cce1e-scheduler-0\" (UID: \"569ec673-0799-4639-80f6-44155889d03c\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:09.211826 master-0 kubenswrapper[31830]: I0319 12:36:09.208517 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/569ec673-0799-4639-80f6-44155889d03c-combined-ca-bundle\") pod \"cinder-cce1e-scheduler-0\" (UID: \"569ec673-0799-4639-80f6-44155889d03c\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:09.225837 master-0 kubenswrapper[31830]: I0319 12:36:09.223965 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.233114 master-0 kubenswrapper[31830]: I0319 12:36:09.228731 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/569ec673-0799-4639-80f6-44155889d03c-etc-machine-id\") pod \"cinder-cce1e-scheduler-0\" (UID: \"569ec673-0799-4639-80f6-44155889d03c\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:09.238045 master-0 kubenswrapper[31830]: I0319 12:36:09.237689 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/569ec673-0799-4639-80f6-44155889d03c-config-data\") pod \"cinder-cce1e-scheduler-0\" (UID: \"569ec673-0799-4639-80f6-44155889d03c\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:09.241736 master-0 kubenswrapper[31830]: I0319 12:36:09.240654 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/569ec673-0799-4639-80f6-44155889d03c-combined-ca-bundle\") pod \"cinder-cce1e-scheduler-0\" (UID: \"569ec673-0799-4639-80f6-44155889d03c\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:09.241736 master-0 kubenswrapper[31830]: I0319 12:36:09.240831 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cce1e-backup-config-data"
Mar 19 12:36:09.241989 master-0 kubenswrapper[31830]: I0319 12:36:09.241829 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/569ec673-0799-4639-80f6-44155889d03c-config-data-custom\") pod \"cinder-cce1e-scheduler-0\" (UID: \"569ec673-0799-4639-80f6-44155889d03c\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:09.249479 master-0 kubenswrapper[31830]: I0319 12:36:09.242640 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/569ec673-0799-4639-80f6-44155889d03c-scripts\") pod \"cinder-cce1e-scheduler-0\" (UID: \"569ec673-0799-4639-80f6-44155889d03c\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:09.309929 master-0 kubenswrapper[31830]: I0319 12:36:09.306602 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cce1e-backup-0"]
Mar 19 12:36:09.325124 master-0 kubenswrapper[31830]: I0319 12:36:09.324962 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-sys\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.325124 master-0 kubenswrapper[31830]: I0319 12:36:09.325055 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-lib-modules\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.325124 master-0 kubenswrapper[31830]: I0319 12:36:09.325117 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-etc-nvme\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.325490 master-0 kubenswrapper[31830]: I0319 12:36:09.325175 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46a3c173-3990-4ec4-9125-086b417b3b69-config-data-custom\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.325490 master-0 kubenswrapper[31830]: I0319 12:36:09.325352 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-etc-machine-id\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.325490 master-0 kubenswrapper[31830]: I0319 12:36:09.325401 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-etc-iscsi\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.325490 master-0 kubenswrapper[31830]: I0319 12:36:09.325467 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46a3c173-3990-4ec4-9125-086b417b3b69-config-data\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.325490 master-0 kubenswrapper[31830]: I0319 12:36:09.325482 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-var-locks-brick\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.325647 master-0 kubenswrapper[31830]: I0319 12:36:09.325510 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kckcx\" (UniqueName: \"kubernetes.io/projected/46a3c173-3990-4ec4-9125-086b417b3b69-kube-api-access-kckcx\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.325647 master-0 kubenswrapper[31830]: I0319 12:36:09.325537 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-var-lib-cinder\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.325647 master-0 kubenswrapper[31830]: I0319 12:36:09.325564 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-run\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.325647 master-0 kubenswrapper[31830]: I0319 12:36:09.325579 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46a3c173-3990-4ec4-9125-086b417b3b69-scripts\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.325647 master-0 kubenswrapper[31830]: I0319 12:36:09.325609 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46a3c173-3990-4ec4-9125-086b417b3b69-combined-ca-bundle\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.325813 master-0 kubenswrapper[31830]: I0319 12:36:09.325698 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-dev\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.325813 master-0 kubenswrapper[31830]: I0319 12:36:09.325713 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-var-locks-cinder\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.338130 master-0 kubenswrapper[31830]: I0319 12:36:09.336430 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrqxt\" (UniqueName: \"kubernetes.io/projected/569ec673-0799-4639-80f6-44155889d03c-kube-api-access-qrqxt\") pod \"cinder-cce1e-scheduler-0\" (UID: \"569ec673-0799-4639-80f6-44155889d03c\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:09.436845 master-0 kubenswrapper[31830]: I0319 12:36:09.433895 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-64b98cb88d-7qp8f"]
Mar 19 12:36:09.436845 master-0 kubenswrapper[31830]: I0319 12:36:09.435728 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64b98cb88d-7qp8f"
Mar 19 12:36:09.436845 master-0 kubenswrapper[31830]: I0319 12:36:09.436766 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5f878994d6-brrf9"
Mar 19 12:36:09.443826 master-0 kubenswrapper[31830]: I0319 12:36:09.438622 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Mar 19 12:36:09.443826 master-0 kubenswrapper[31830]: I0319 12:36:09.439885 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-sys\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.443826 master-0 kubenswrapper[31830]: I0319 12:36:09.439916 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-lib-modules\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.443826 master-0 kubenswrapper[31830]: I0319 12:36:09.439950 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-etc-nvme\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.443826 master-0 kubenswrapper[31830]: I0319 12:36:09.440026 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46a3c173-3990-4ec4-9125-086b417b3b69-config-data-custom\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.443826 master-0 kubenswrapper[31830]: I0319 12:36:09.440090 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-etc-machine-id\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.443826 master-0 kubenswrapper[31830]: I0319 12:36:09.440113 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-etc-iscsi\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.443826 master-0 kubenswrapper[31830]: I0319 12:36:09.440152 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46a3c173-3990-4ec4-9125-086b417b3b69-config-data\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.443826 master-0 kubenswrapper[31830]: I0319 12:36:09.440167 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-var-locks-brick\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.443826 master-0 kubenswrapper[31830]: I0319 12:36:09.440189 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kckcx\" (UniqueName: \"kubernetes.io/projected/46a3c173-3990-4ec4-9125-086b417b3b69-kube-api-access-kckcx\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.443826 master-0 kubenswrapper[31830]: I0319 12:36:09.440210 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-var-lib-cinder\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.443826 master-0 kubenswrapper[31830]: I0319 12:36:09.440235 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-run\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.443826 master-0 kubenswrapper[31830]: I0319 12:36:09.440251 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46a3c173-3990-4ec4-9125-086b417b3b69-scripts\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.443826 master-0 kubenswrapper[31830]: I0319 12:36:09.440269 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46a3c173-3990-4ec4-9125-086b417b3b69-combined-ca-bundle\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.443826 master-0 kubenswrapper[31830]: I0319 12:36:09.440307 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-dev\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.443826 master-0 kubenswrapper[31830]: I0319 12:36:09.440322 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-var-locks-cinder\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.443826 master-0 kubenswrapper[31830]: I0319 12:36:09.440599 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-var-locks-cinder\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.443826 master-0 kubenswrapper[31830]: I0319 12:36:09.442982 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-sys\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.443826 master-0 kubenswrapper[31830]: I0319 12:36:09.443017 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-lib-modules\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.443826 master-0 kubenswrapper[31830]: I0319 12:36:09.443088 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-etc-nvme\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.444582 master-0 kubenswrapper[31830]: I0319 12:36:09.444051 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Mar 19 12:36:09.444582 master-0 kubenswrapper[31830]: I0319 12:36:09.444343 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-etc-machine-id\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.444582 master-0 kubenswrapper[31830]: I0319 12:36:09.444385 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-etc-iscsi\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.449327 master-0 kubenswrapper[31830]: I0319 12:36:09.448146 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-var-locks-brick\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.460826 master-0 kubenswrapper[31830]: I0319 12:36:09.457340 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-dev\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.460826 master-0 kubenswrapper[31830]: I0319 12:36:09.457529 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-var-lib-cinder\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.460826 master-0 kubenswrapper[31830]: I0319 12:36:09.457555 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-run\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.460826 master-0 kubenswrapper[31830]: I0319 12:36:09.458086 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Mar 19 12:36:09.461192 master-0 kubenswrapper[31830]: I0319 12:36:09.460897 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46a3c173-3990-4ec4-9125-086b417b3b69-combined-ca-bundle\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.461832 master-0 kubenswrapper[31830]: I0319 12:36:09.461360 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:09.462992 master-0 kubenswrapper[31830]: I0319 12:36:09.462715 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46a3c173-3990-4ec4-9125-086b417b3b69-config-data\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.466535 master-0 kubenswrapper[31830]: I0319 12:36:09.464925 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46a3c173-3990-4ec4-9125-086b417b3b69-config-data-custom\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.482831 master-0 kubenswrapper[31830]: I0319 12:36:09.475761 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64b98cb88d-7qp8f"]
Mar 19 12:36:09.493898 master-0 kubenswrapper[31830]: I0319 12:36:09.491880 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46a3c173-3990-4ec4-9125-086b417b3b69-scripts\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.518825 master-0 kubenswrapper[31830]: I0319 12:36:09.516271 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-cce1e-volume-lvm-iscsi-0"]
Mar 19 12:36:09.518825 master-0 kubenswrapper[31830]: I0319 12:36:09.518498 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:09.542371 master-0 kubenswrapper[31830]: I0319 12:36:09.524777 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cce1e-volume-lvm-iscsi-config-data"
Mar 19 12:36:09.542371 master-0 kubenswrapper[31830]: I0319 12:36:09.540190 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kckcx\" (UniqueName: \"kubernetes.io/projected/46a3c173-3990-4ec4-9125-086b417b3b69-kube-api-access-kckcx\") pod \"cinder-cce1e-backup-0\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:09.559815 master-0 kubenswrapper[31830]: I0319 12:36:09.551615 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc8c48879-kkx7f"]
Mar 19 12:36:09.559815 master-0 kubenswrapper[31830]: I0319 12:36:09.552014 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f63713e2-7d18-4053-b79c-86ab7b8e1e57-ovndb-tls-certs\") pod \"neutron-64b98cb88d-7qp8f\" (UID: \"f63713e2-7d18-4053-b79c-86ab7b8e1e57\") " pod="openstack/neutron-64b98cb88d-7qp8f"
Mar 19 12:36:09.559815 master-0 kubenswrapper[31830]: I0319 12:36:09.552105 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9ghf\" (UniqueName: \"kubernetes.io/projected/f63713e2-7d18-4053-b79c-86ab7b8e1e57-kube-api-access-d9ghf\") pod \"neutron-64b98cb88d-7qp8f\" (UID: \"f63713e2-7d18-4053-b79c-86ab7b8e1e57\") " pod="openstack/neutron-64b98cb88d-7qp8f"
Mar 19 12:36:09.559815 master-0 kubenswrapper[31830]: I0319 12:36:09.552225 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f63713e2-7d18-4053-b79c-86ab7b8e1e57-combined-ca-bundle\") pod \"neutron-64b98cb88d-7qp8f\" (UID: \"f63713e2-7d18-4053-b79c-86ab7b8e1e57\") " pod="openstack/neutron-64b98cb88d-7qp8f"
Mar 19 12:36:09.559815 master-0 kubenswrapper[31830]: I0319 12:36:09.552271 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f63713e2-7d18-4053-b79c-86ab7b8e1e57-httpd-config\") pod \"neutron-64b98cb88d-7qp8f\" (UID: \"f63713e2-7d18-4053-b79c-86ab7b8e1e57\") " pod="openstack/neutron-64b98cb88d-7qp8f"
Mar 19 12:36:09.559815 master-0 kubenswrapper[31830]: I0319 12:36:09.552289 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f63713e2-7d18-4053-b79c-86ab7b8e1e57-config\") pod \"neutron-64b98cb88d-7qp8f\" (UID: \"f63713e2-7d18-4053-b79c-86ab7b8e1e57\") " pod="openstack/neutron-64b98cb88d-7qp8f"
Mar 19 12:36:09.591007 master-0 kubenswrapper[31830]: I0319 12:36:09.583457 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cce1e-volume-lvm-iscsi-0"]
Mar 19 12:36:09.611822 master-0 kubenswrapper[31830]: I0319 12:36:09.606377 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7557f57847-t2m77"]
Mar 19 12:36:09.611822 master-0 kubenswrapper[31830]: I0319 12:36:09.608887 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7557f57847-t2m77"
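The SyncLoop ADD/UPDATE/DELETE lines are kubelet's top-level dispatch on pod-config changes from the API server. Note the sequence: dnsmasq-dns-5dc8c48879-kkx7f is ADDed at 12:36:08.760 and DELETEd at 12:36:09.551, with dnsmasq-dns-7557f57847-t2m77 ADDed in between; the two pod-template hashes suggest, though the log does not prove it, a Deployment rollout that replaced the ReplicaSet while the first pod was still mounting volumes. A simplified shape of that dispatch (not kubelet source):

```go
package main

import "fmt"

type op int

const (
	ADD op = iota
	UPDATE
	DELETE
)

// podUpdate is a stand-in for the pod-config change kubelet receives.
type podUpdate struct {
	Op   op
	Pods []string
}

// syncLoopIteration fans updates out by operation, as the log lines record.
func syncLoopIteration(ch <-chan podUpdate) {
	for u := range ch {
		switch u.Op {
		case ADD:
			fmt.Printf("SyncLoop ADD source=\"api\" pods=%v\n", u.Pods)
		case UPDATE:
			fmt.Printf("SyncLoop UPDATE source=\"api\" pods=%v\n", u.Pods)
		case DELETE:
			fmt.Printf("SyncLoop DELETE source=\"api\" pods=%v\n", u.Pods)
		}
	}
}

func main() {
	ch := make(chan podUpdate, 3)
	ch <- podUpdate{ADD, []string{"openstack/dnsmasq-dns-5dc8c48879-kkx7f"}}
	ch <- podUpdate{DELETE, []string{"openstack/dnsmasq-dns-5dc8c48879-kkx7f"}}
	ch <- podUpdate{ADD, []string{"openstack/dnsmasq-dns-7557f57847-t2m77"}}
	close(ch)
	syncLoopIteration(ch)
}
```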
Mar 19 12:36:09.625765 master-0 kubenswrapper[31830]: I0319 12:36:09.625682 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7557f57847-t2m77"]
Mar 19 12:36:09.654722 master-0 kubenswrapper[31830]: I0319 12:36:09.654546 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-var-lib-cinder\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:09.654722 master-0 kubenswrapper[31830]: I0319 12:36:09.654603 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-combined-ca-bundle\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:09.654722 master-0 kubenswrapper[31830]: I0319 12:36:09.654657 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f63713e2-7d18-4053-b79c-86ab7b8e1e57-ovndb-tls-certs\") pod \"neutron-64b98cb88d-7qp8f\" (UID: \"f63713e2-7d18-4053-b79c-86ab7b8e1e57\") " pod="openstack/neutron-64b98cb88d-7qp8f"
Mar 19 12:36:09.654722 master-0 kubenswrapper[31830]: I0319 12:36:09.654676 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-etc-nvme\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:09.654722 master-0 kubenswrapper[31830]: I0319 12:36:09.654711 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9ghf\" (UniqueName: \"kubernetes.io/projected/f63713e2-7d18-4053-b79c-86ab7b8e1e57-kube-api-access-d9ghf\") pod \"neutron-64b98cb88d-7qp8f\" (UID: \"f63713e2-7d18-4053-b79c-86ab7b8e1e57\") " pod="openstack/neutron-64b98cb88d-7qp8f"
Mar 19 12:36:09.655051 master-0 kubenswrapper[31830]: I0319 12:36:09.654735 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-etc-machine-id\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:09.655051 master-0 kubenswrapper[31830]: I0319 12:36:09.654764 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-dev\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:09.655051 master-0 kubenswrapper[31830]: I0319 12:36:09.654817 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-lib-modules\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:09.655051 master-0 kubenswrapper[31830]: I0319 12:36:09.654845 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-config-data-custom\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:09.655051 master-0 kubenswrapper[31830]: I0319 12:36:09.654887 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-var-locks-cinder\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:09.655051 master-0 kubenswrapper[31830]: I0319 12:36:09.654937 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-sys\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:09.655051 master-0 kubenswrapper[31830]: I0319 12:36:09.654953 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f63713e2-7d18-4053-b79c-86ab7b8e1e57-combined-ca-bundle\") pod \"neutron-64b98cb88d-7qp8f\" (UID: \"f63713e2-7d18-4053-b79c-86ab7b8e1e57\") " pod="openstack/neutron-64b98cb88d-7qp8f"
Mar 19 12:36:09.655051 master-0 kubenswrapper[31830]: I0319 12:36:09.654997 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f63713e2-7d18-4053-b79c-86ab7b8e1e57-httpd-config\") pod \"neutron-64b98cb88d-7qp8f\" (UID: \"f63713e2-7d18-4053-b79c-86ab7b8e1e57\") " pod="openstack/neutron-64b98cb88d-7qp8f"
Mar 19 12:36:09.655051 master-0 kubenswrapper[31830]: I0319 12:36:09.655014 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f63713e2-7d18-4053-b79c-86ab7b8e1e57-config\") pod \"neutron-64b98cb88d-7qp8f\" (UID: \"f63713e2-7d18-4053-b79c-86ab7b8e1e57\") " pod="openstack/neutron-64b98cb88d-7qp8f"
Mar 19 12:36:09.655051 master-0 kubenswrapper[31830]: I0319 12:36:09.655036 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-etc-iscsi\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:09.655359 master-0 kubenswrapper[31830]: I0319 12:36:09.655081 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-var-locks-brick\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:09.655359 master-0 kubenswrapper[31830]: I0319 12:36:09.655106 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htlnm\" (UniqueName: \"kubernetes.io/projected/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-kube-api-access-htlnm\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:09.655359 master-0 kubenswrapper[31830]: I0319 12:36:09.655135 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-scripts\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:09.656948 master-0 kubenswrapper[31830]: I0319 12:36:09.656370 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-config-data\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:09.656948 master-0 kubenswrapper[31830]: I0319 12:36:09.656413 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-run\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:09.715534 master-0 kubenswrapper[31830]: I0319 12:36:09.710900 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f63713e2-7d18-4053-b79c-86ab7b8e1e57-combined-ca-bundle\") pod \"neutron-64b98cb88d-7qp8f\" (UID: \"f63713e2-7d18-4053-b79c-86ab7b8e1e57\") " pod="openstack/neutron-64b98cb88d-7qp8f"
Mar 19 12:36:09.736050 master-0 kubenswrapper[31830]: I0319 12:36:09.733031 31830 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:09.759987 master-0 kubenswrapper[31830]: I0319 12:36:09.759155 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-combined-ca-bundle\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" Mar 19 12:36:09.759987 master-0 kubenswrapper[31830]: I0319 12:36:09.759222 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-etc-nvme\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" Mar 19 12:36:09.759987 master-0 kubenswrapper[31830]: I0319 12:36:09.759255 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-edpm-a\") pod \"dnsmasq-dns-7557f57847-t2m77\" (UID: \"293ebf87-213b-41aa-86be-a71453a91c0c\") " pod="openstack/dnsmasq-dns-7557f57847-t2m77" Mar 19 12:36:09.759987 master-0 kubenswrapper[31830]: I0319 12:36:09.759298 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-etc-machine-id\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" Mar 19 12:36:09.759987 master-0 kubenswrapper[31830]: I0319 12:36:09.759336 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-dev\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" Mar 19 12:36:09.759987 master-0 kubenswrapper[31830]: I0319 12:36:09.759365 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccght\" (UniqueName: \"kubernetes.io/projected/293ebf87-213b-41aa-86be-a71453a91c0c-kube-api-access-ccght\") pod \"dnsmasq-dns-7557f57847-t2m77\" (UID: \"293ebf87-213b-41aa-86be-a71453a91c0c\") " pod="openstack/dnsmasq-dns-7557f57847-t2m77" Mar 19 12:36:09.759987 master-0 kubenswrapper[31830]: I0319 12:36:09.759382 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-dns-svc\") pod \"dnsmasq-dns-7557f57847-t2m77\" (UID: \"293ebf87-213b-41aa-86be-a71453a91c0c\") " pod="openstack/dnsmasq-dns-7557f57847-t2m77" Mar 19 12:36:09.759987 master-0 kubenswrapper[31830]: I0319 12:36:09.759402 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-lib-modules\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" Mar 19 12:36:09.759987 master-0 kubenswrapper[31830]: I0319 12:36:09.759424 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-config-data-custom\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" Mar 19 12:36:09.759987 master-0 kubenswrapper[31830]: I0319 12:36:09.759520 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-var-locks-cinder\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" Mar 19 12:36:09.759987 master-0 kubenswrapper[31830]: I0319 12:36:09.759548 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-ovsdbserver-nb\") pod \"dnsmasq-dns-7557f57847-t2m77\" (UID: \"293ebf87-213b-41aa-86be-a71453a91c0c\") " pod="openstack/dnsmasq-dns-7557f57847-t2m77" Mar 19 12:36:09.759987 master-0 kubenswrapper[31830]: I0319 12:36:09.759580 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-sys\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" Mar 19 12:36:09.759987 master-0 kubenswrapper[31830]: I0319 12:36:09.759622 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-edpm-b\") pod \"dnsmasq-dns-7557f57847-t2m77\" (UID: \"293ebf87-213b-41aa-86be-a71453a91c0c\") " pod="openstack/dnsmasq-dns-7557f57847-t2m77" Mar 19 12:36:09.759987 master-0 kubenswrapper[31830]: I0319 12:36:09.759667 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-etc-iscsi\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" Mar 19 12:36:09.759987 master-0 kubenswrapper[31830]: I0319 12:36:09.759698 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-var-locks-brick\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" Mar 19 12:36:09.759987 master-0 kubenswrapper[31830]: I0319 12:36:09.759731 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-dns-swift-storage-0\") pod \"dnsmasq-dns-7557f57847-t2m77\" (UID: \"293ebf87-213b-41aa-86be-a71453a91c0c\") " pod="openstack/dnsmasq-dns-7557f57847-t2m77" Mar 19 12:36:09.759987 master-0 kubenswrapper[31830]: I0319 12:36:09.759751 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htlnm\" (UniqueName: \"kubernetes.io/projected/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-kube-api-access-htlnm\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" Mar 19 12:36:09.759987 master-0 
kubenswrapper[31830]: I0319 12:36:09.759786 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-scripts\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" Mar 19 12:36:09.759987 master-0 kubenswrapper[31830]: I0319 12:36:09.759844 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-config-data\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" Mar 19 12:36:09.759987 master-0 kubenswrapper[31830]: I0319 12:36:09.759863 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-config\") pod \"dnsmasq-dns-7557f57847-t2m77\" (UID: \"293ebf87-213b-41aa-86be-a71453a91c0c\") " pod="openstack/dnsmasq-dns-7557f57847-t2m77" Mar 19 12:36:09.759987 master-0 kubenswrapper[31830]: I0319 12:36:09.759975 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-run\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" Mar 19 12:36:09.789364 master-0 kubenswrapper[31830]: I0319 12:36:09.760025 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-var-lib-cinder\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" Mar 19 12:36:09.789364 master-0 kubenswrapper[31830]: I0319 12:36:09.760064 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-ovsdbserver-sb\") pod \"dnsmasq-dns-7557f57847-t2m77\" (UID: \"293ebf87-213b-41aa-86be-a71453a91c0c\") " pod="openstack/dnsmasq-dns-7557f57847-t2m77" Mar 19 12:36:09.789364 master-0 kubenswrapper[31830]: I0319 12:36:09.760690 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-etc-nvme\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" Mar 19 12:36:09.789364 master-0 kubenswrapper[31830]: I0319 12:36:09.760826 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-etc-machine-id\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" Mar 19 12:36:09.789364 master-0 kubenswrapper[31830]: I0319 12:36:09.760881 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-dev\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" 
Mar 19 12:36:09.789364 master-0 kubenswrapper[31830]: I0319 12:36:09.761385 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-etc-iscsi\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" Mar 19 12:36:09.789364 master-0 kubenswrapper[31830]: I0319 12:36:09.761532 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-lib-modules\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" Mar 19 12:36:09.789364 master-0 kubenswrapper[31830]: I0319 12:36:09.763993 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9ghf\" (UniqueName: \"kubernetes.io/projected/f63713e2-7d18-4053-b79c-86ab7b8e1e57-kube-api-access-d9ghf\") pod \"neutron-64b98cb88d-7qp8f\" (UID: \"f63713e2-7d18-4053-b79c-86ab7b8e1e57\") " pod="openstack/neutron-64b98cb88d-7qp8f" Mar 19 12:36:09.789364 master-0 kubenswrapper[31830]: I0319 12:36:09.770774 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-var-locks-brick\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" Mar 19 12:36:09.789364 master-0 kubenswrapper[31830]: I0319 12:36:09.770999 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-run\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" Mar 19 12:36:09.789364 master-0 kubenswrapper[31830]: I0319 12:36:09.771080 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-var-locks-cinder\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" Mar 19 12:36:09.789364 master-0 kubenswrapper[31830]: I0319 12:36:09.771145 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-var-lib-cinder\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" Mar 19 12:36:09.789364 master-0 kubenswrapper[31830]: I0319 12:36:09.771190 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-sys\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" Mar 19 12:36:09.789364 master-0 kubenswrapper[31830]: I0319 12:36:09.775296 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-config-data-custom\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " 
pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" Mar 19 12:36:09.789364 master-0 kubenswrapper[31830]: I0319 12:36:09.781523 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-scripts\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" Mar 19 12:36:09.797958 master-0 kubenswrapper[31830]: I0319 12:36:09.793156 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-config-data\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" Mar 19 12:36:09.832261 master-0 kubenswrapper[31830]: I0319 12:36:09.832184 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-combined-ca-bundle\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" Mar 19 12:36:09.854805 master-0 kubenswrapper[31830]: I0319 12:36:09.853021 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f63713e2-7d18-4053-b79c-86ab7b8e1e57-httpd-config\") pod \"neutron-64b98cb88d-7qp8f\" (UID: \"f63713e2-7d18-4053-b79c-86ab7b8e1e57\") " pod="openstack/neutron-64b98cb88d-7qp8f" Mar 19 12:36:09.854805 master-0 kubenswrapper[31830]: I0319 12:36:09.854490 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htlnm\" (UniqueName: \"kubernetes.io/projected/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-kube-api-access-htlnm\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" Mar 19 12:36:09.873818 master-0 kubenswrapper[31830]: I0319 12:36:09.873570 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f63713e2-7d18-4053-b79c-86ab7b8e1e57-ovndb-tls-certs\") pod \"neutron-64b98cb88d-7qp8f\" (UID: \"f63713e2-7d18-4053-b79c-86ab7b8e1e57\") " pod="openstack/neutron-64b98cb88d-7qp8f" Mar 19 12:36:09.876540 master-0 kubenswrapper[31830]: I0319 12:36:09.876487 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f63713e2-7d18-4053-b79c-86ab7b8e1e57-config\") pod \"neutron-64b98cb88d-7qp8f\" (UID: \"f63713e2-7d18-4053-b79c-86ab7b8e1e57\") " pod="openstack/neutron-64b98cb88d-7qp8f" Mar 19 12:36:09.876780 master-0 kubenswrapper[31830]: I0319 12:36:09.876757 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-config\") pod \"dnsmasq-dns-7557f57847-t2m77\" (UID: \"293ebf87-213b-41aa-86be-a71453a91c0c\") " pod="openstack/dnsmasq-dns-7557f57847-t2m77" Mar 19 12:36:09.879171 master-0 kubenswrapper[31830]: I0319 12:36:09.879151 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-ovsdbserver-sb\") pod \"dnsmasq-dns-7557f57847-t2m77\" (UID: \"293ebf87-213b-41aa-86be-a71453a91c0c\") " 
pod="openstack/dnsmasq-dns-7557f57847-t2m77" Mar 19 12:36:09.879313 master-0 kubenswrapper[31830]: I0319 12:36:09.879301 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-edpm-a\") pod \"dnsmasq-dns-7557f57847-t2m77\" (UID: \"293ebf87-213b-41aa-86be-a71453a91c0c\") " pod="openstack/dnsmasq-dns-7557f57847-t2m77" Mar 19 12:36:09.879387 master-0 kubenswrapper[31830]: I0319 12:36:09.879347 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-config\") pod \"dnsmasq-dns-7557f57847-t2m77\" (UID: \"293ebf87-213b-41aa-86be-a71453a91c0c\") " pod="openstack/dnsmasq-dns-7557f57847-t2m77" Mar 19 12:36:09.905981 master-0 kubenswrapper[31830]: I0319 12:36:09.880831 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-ovsdbserver-sb\") pod \"dnsmasq-dns-7557f57847-t2m77\" (UID: \"293ebf87-213b-41aa-86be-a71453a91c0c\") " pod="openstack/dnsmasq-dns-7557f57847-t2m77" Mar 19 12:36:09.905981 master-0 kubenswrapper[31830]: I0319 12:36:09.881058 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-edpm-a\") pod \"dnsmasq-dns-7557f57847-t2m77\" (UID: \"293ebf87-213b-41aa-86be-a71453a91c0c\") " pod="openstack/dnsmasq-dns-7557f57847-t2m77" Mar 19 12:36:09.905981 master-0 kubenswrapper[31830]: I0319 12:36:09.893835 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" Mar 19 12:36:09.907814 master-0 kubenswrapper[31830]: I0319 12:36:09.906359 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-dns-svc\") pod \"dnsmasq-dns-7557f57847-t2m77\" (UID: \"293ebf87-213b-41aa-86be-a71453a91c0c\") " pod="openstack/dnsmasq-dns-7557f57847-t2m77" Mar 19 12:36:09.907814 master-0 kubenswrapper[31830]: I0319 12:36:09.906418 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccght\" (UniqueName: \"kubernetes.io/projected/293ebf87-213b-41aa-86be-a71453a91c0c-kube-api-access-ccght\") pod \"dnsmasq-dns-7557f57847-t2m77\" (UID: \"293ebf87-213b-41aa-86be-a71453a91c0c\") " pod="openstack/dnsmasq-dns-7557f57847-t2m77" Mar 19 12:36:09.907814 master-0 kubenswrapper[31830]: I0319 12:36:09.906557 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-ovsdbserver-nb\") pod \"dnsmasq-dns-7557f57847-t2m77\" (UID: \"293ebf87-213b-41aa-86be-a71453a91c0c\") " pod="openstack/dnsmasq-dns-7557f57847-t2m77" Mar 19 12:36:09.907814 master-0 kubenswrapper[31830]: I0319 12:36:09.906688 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-edpm-b\") pod \"dnsmasq-dns-7557f57847-t2m77\" (UID: \"293ebf87-213b-41aa-86be-a71453a91c0c\") " pod="openstack/dnsmasq-dns-7557f57847-t2m77" Mar 19 12:36:09.907814 master-0 kubenswrapper[31830]: I0319 12:36:09.906774 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-dns-swift-storage-0\") pod \"dnsmasq-dns-7557f57847-t2m77\" (UID: \"293ebf87-213b-41aa-86be-a71453a91c0c\") " pod="openstack/dnsmasq-dns-7557f57847-t2m77" Mar 19 12:36:09.907814 master-0 kubenswrapper[31830]: I0319 12:36:09.907678 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-dns-swift-storage-0\") pod \"dnsmasq-dns-7557f57847-t2m77\" (UID: \"293ebf87-213b-41aa-86be-a71453a91c0c\") " pod="openstack/dnsmasq-dns-7557f57847-t2m77" Mar 19 12:36:09.908539 master-0 kubenswrapper[31830]: I0319 12:36:09.908513 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-dns-svc\") pod \"dnsmasq-dns-7557f57847-t2m77\" (UID: \"293ebf87-213b-41aa-86be-a71453a91c0c\") " pod="openstack/dnsmasq-dns-7557f57847-t2m77" Mar 19 12:36:09.909848 master-0 kubenswrapper[31830]: I0319 12:36:09.909789 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-ovsdbserver-nb\") pod \"dnsmasq-dns-7557f57847-t2m77\" (UID: \"293ebf87-213b-41aa-86be-a71453a91c0c\") " pod="openstack/dnsmasq-dns-7557f57847-t2m77" Mar 19 12:36:09.915518 master-0 kubenswrapper[31830]: I0319 12:36:09.915490 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-edpm-b\") pod \"dnsmasq-dns-7557f57847-t2m77\" (UID: \"293ebf87-213b-41aa-86be-a71453a91c0c\") " pod="openstack/dnsmasq-dns-7557f57847-t2m77" Mar 19 12:36:09.936419 master-0 kubenswrapper[31830]: I0319 12:36:09.936383 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccght\" (UniqueName: \"kubernetes.io/projected/293ebf87-213b-41aa-86be-a71453a91c0c-kube-api-access-ccght\") pod \"dnsmasq-dns-7557f57847-t2m77\" (UID: \"293ebf87-213b-41aa-86be-a71453a91c0c\") " pod="openstack/dnsmasq-dns-7557f57847-t2m77" Mar 19 12:36:09.970594 master-0 kubenswrapper[31830]: I0319 12:36:09.970522 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-cce1e-api-0"] Mar 19 12:36:09.972702 master-0 kubenswrapper[31830]: I0319 12:36:09.972652 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cce1e-api-0"] Mar 19 12:36:09.973053 master-0 kubenswrapper[31830]: I0319 12:36:09.973033 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-cce1e-api-0" Mar 19 12:36:09.977293 master-0 kubenswrapper[31830]: I0319 12:36:09.977271 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cce1e-api-config-data" Mar 19 12:36:10.012054 master-0 kubenswrapper[31830]: I0319 12:36:10.011980 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e29b9f27-2667-4fa6-9a91-91d92a7950e7-scripts\") pod \"cinder-cce1e-api-0\" (UID: \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\") " pod="openstack/cinder-cce1e-api-0" Mar 19 12:36:10.012412 master-0 kubenswrapper[31830]: I0319 12:36:10.012388 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e29b9f27-2667-4fa6-9a91-91d92a7950e7-config-data\") pod \"cinder-cce1e-api-0\" (UID: \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\") " pod="openstack/cinder-cce1e-api-0" Mar 19 12:36:10.012533 master-0 kubenswrapper[31830]: I0319 12:36:10.012516 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e29b9f27-2667-4fa6-9a91-91d92a7950e7-logs\") pod \"cinder-cce1e-api-0\" (UID: \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\") " pod="openstack/cinder-cce1e-api-0" Mar 19 12:36:10.012774 master-0 kubenswrapper[31830]: I0319 12:36:10.012754 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e29b9f27-2667-4fa6-9a91-91d92a7950e7-combined-ca-bundle\") pod \"cinder-cce1e-api-0\" (UID: \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\") " pod="openstack/cinder-cce1e-api-0" Mar 19 12:36:10.012931 master-0 kubenswrapper[31830]: I0319 12:36:10.012909 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e29b9f27-2667-4fa6-9a91-91d92a7950e7-etc-machine-id\") pod \"cinder-cce1e-api-0\" (UID: \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\") " pod="openstack/cinder-cce1e-api-0" Mar 19 12:36:10.013418 master-0 kubenswrapper[31830]: I0319 12:36:10.013395 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e29b9f27-2667-4fa6-9a91-91d92a7950e7-config-data-custom\") pod \"cinder-cce1e-api-0\" (UID: \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\") " pod="openstack/cinder-cce1e-api-0" Mar 19 12:36:10.013545 master-0 kubenswrapper[31830]: I0319 12:36:10.013524 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzsv5\" (UniqueName: \"kubernetes.io/projected/e29b9f27-2667-4fa6-9a91-91d92a7950e7-kube-api-access-qzsv5\") pod \"cinder-cce1e-api-0\" (UID: \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\") " pod="openstack/cinder-cce1e-api-0" Mar 19 12:36:10.044887 master-0 kubenswrapper[31830]: I0319 12:36:10.043434 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7557f57847-t2m77" Mar 19 12:36:10.087245 master-0 kubenswrapper[31830]: I0319 12:36:10.087180 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-64b98cb88d-7qp8f" Mar 19 12:36:10.120593 master-0 kubenswrapper[31830]: I0319 12:36:10.118288 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e29b9f27-2667-4fa6-9a91-91d92a7950e7-config-data\") pod \"cinder-cce1e-api-0\" (UID: \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\") " pod="openstack/cinder-cce1e-api-0" Mar 19 12:36:10.120593 master-0 kubenswrapper[31830]: I0319 12:36:10.118385 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e29b9f27-2667-4fa6-9a91-91d92a7950e7-logs\") pod \"cinder-cce1e-api-0\" (UID: \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\") " pod="openstack/cinder-cce1e-api-0" Mar 19 12:36:10.120593 master-0 kubenswrapper[31830]: I0319 12:36:10.118754 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e29b9f27-2667-4fa6-9a91-91d92a7950e7-combined-ca-bundle\") pod \"cinder-cce1e-api-0\" (UID: \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\") " pod="openstack/cinder-cce1e-api-0" Mar 19 12:36:10.120593 master-0 kubenswrapper[31830]: I0319 12:36:10.118827 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e29b9f27-2667-4fa6-9a91-91d92a7950e7-etc-machine-id\") pod \"cinder-cce1e-api-0\" (UID: \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\") " pod="openstack/cinder-cce1e-api-0" Mar 19 12:36:10.120593 master-0 kubenswrapper[31830]: I0319 12:36:10.118898 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e29b9f27-2667-4fa6-9a91-91d92a7950e7-config-data-custom\") pod \"cinder-cce1e-api-0\" (UID: \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\") " pod="openstack/cinder-cce1e-api-0" Mar 19 12:36:10.120593 master-0 kubenswrapper[31830]: I0319 12:36:10.118918 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzsv5\" (UniqueName: \"kubernetes.io/projected/e29b9f27-2667-4fa6-9a91-91d92a7950e7-kube-api-access-qzsv5\") pod \"cinder-cce1e-api-0\" (UID: \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\") " pod="openstack/cinder-cce1e-api-0" Mar 19 12:36:10.120593 master-0 kubenswrapper[31830]: I0319 12:36:10.119221 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e29b9f27-2667-4fa6-9a91-91d92a7950e7-scripts\") pod \"cinder-cce1e-api-0\" (UID: \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\") " pod="openstack/cinder-cce1e-api-0" Mar 19 12:36:10.124223 master-0 kubenswrapper[31830]: I0319 12:36:10.123358 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e29b9f27-2667-4fa6-9a91-91d92a7950e7-logs\") pod \"cinder-cce1e-api-0\" (UID: \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\") " pod="openstack/cinder-cce1e-api-0" Mar 19 12:36:10.124223 master-0 kubenswrapper[31830]: I0319 12:36:10.123442 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e29b9f27-2667-4fa6-9a91-91d92a7950e7-etc-machine-id\") pod \"cinder-cce1e-api-0\" (UID: \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\") " pod="openstack/cinder-cce1e-api-0" Mar 19 12:36:10.143387 master-0 kubenswrapper[31830]: I0319 12:36:10.143325 31830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e29b9f27-2667-4fa6-9a91-91d92a7950e7-config-data-custom\") pod \"cinder-cce1e-api-0\" (UID: \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\") " pod="openstack/cinder-cce1e-api-0" Mar 19 12:36:10.152051 master-0 kubenswrapper[31830]: I0319 12:36:10.151983 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e29b9f27-2667-4fa6-9a91-91d92a7950e7-config-data\") pod \"cinder-cce1e-api-0\" (UID: \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\") " pod="openstack/cinder-cce1e-api-0" Mar 19 12:36:10.152467 master-0 kubenswrapper[31830]: I0319 12:36:10.152364 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e29b9f27-2667-4fa6-9a91-91d92a7950e7-scripts\") pod \"cinder-cce1e-api-0\" (UID: \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\") " pod="openstack/cinder-cce1e-api-0" Mar 19 12:36:10.153929 master-0 kubenswrapper[31830]: I0319 12:36:10.153875 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-569d794d4c-pmgr5"] Mar 19 12:36:10.164994 master-0 kubenswrapper[31830]: I0319 12:36:10.163949 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e29b9f27-2667-4fa6-9a91-91d92a7950e7-combined-ca-bundle\") pod \"cinder-cce1e-api-0\" (UID: \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\") " pod="openstack/cinder-cce1e-api-0" Mar 19 12:36:10.167328 master-0 kubenswrapper[31830]: I0319 12:36:10.165445 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzsv5\" (UniqueName: \"kubernetes.io/projected/e29b9f27-2667-4fa6-9a91-91d92a7950e7-kube-api-access-qzsv5\") pod \"cinder-cce1e-api-0\" (UID: \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\") " pod="openstack/cinder-cce1e-api-0" Mar 19 12:36:10.210866 master-0 kubenswrapper[31830]: W0319 12:36:10.201953 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39eba887_ef1e_47a9_b6cf_6d445d0ae88b.slice/crio-b2217aa4a6d477894fc04b9a6aa30bb7d68a3cae532e5f9c88d1e4e826221b01 WatchSource:0}: Error finding container b2217aa4a6d477894fc04b9a6aa30bb7d68a3cae532e5f9c88d1e4e826221b01: Status 404 returned error can't find the container with id b2217aa4a6d477894fc04b9a6aa30bb7d68a3cae532e5f9c88d1e4e826221b01 Mar 19 12:36:10.227033 master-0 kubenswrapper[31830]: I0319 12:36:10.226994 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/edpm-a-provisionserver-openstackprovisionserver-7444d659762wbcb" Mar 19 12:36:10.274891 master-0 kubenswrapper[31830]: I0319 12:36:10.264776 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cce1e-scheduler-0"] Mar 19 12:36:10.285368 master-0 kubenswrapper[31830]: I0319 12:36:10.281542 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc8c48879-kkx7f"] Mar 19 12:36:10.325225 master-0 kubenswrapper[31830]: I0319 12:36:10.325175 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-cce1e-api-0" Mar 19 12:36:10.400061 master-0 kubenswrapper[31830]: W0319 12:36:10.396626 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98e9b667_3127_485d_8970_4debf1ca6259.slice/crio-20370796d6b7d56ba0e8733216a3c6ef3649ab8b05dac7b917adb535ee168fe7 WatchSource:0}: Error finding container 20370796d6b7d56ba0e8733216a3c6ef3649ab8b05dac7b917adb535ee168fe7: Status 404 returned error can't find the container with id 20370796d6b7d56ba0e8733216a3c6ef3649ab8b05dac7b917adb535ee168fe7 Mar 19 12:36:10.450541 master-0 kubenswrapper[31830]: I0319 12:36:10.450478 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc8c48879-kkx7f" event={"ID":"98e9b667-3127-485d-8970-4debf1ca6259","Type":"ContainerStarted","Data":"20370796d6b7d56ba0e8733216a3c6ef3649ab8b05dac7b917adb535ee168fe7"} Mar 19 12:36:10.452787 master-0 kubenswrapper[31830]: I0319 12:36:10.452742 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-569d794d4c-pmgr5" event={"ID":"39eba887-ef1e-47a9-b6cf-6d445d0ae88b","Type":"ContainerStarted","Data":"b2217aa4a6d477894fc04b9a6aa30bb7d68a3cae532e5f9c88d1e4e826221b01"} Mar 19 12:36:10.454730 master-0 kubenswrapper[31830]: I0319 12:36:10.454688 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-scheduler-0" event={"ID":"569ec673-0799-4639-80f6-44155889d03c","Type":"ContainerStarted","Data":"770f1f688bad92e37cabe794c84b8e3f66124d6941f781e4f81a701e3a5b0e20"} Mar 19 12:36:10.859409 master-0 kubenswrapper[31830]: I0319 12:36:10.856785 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/edpm-b-provisionserver-openstackprovisionserver-85bcff5d5fq8d8h" Mar 19 12:36:11.034253 master-0 kubenswrapper[31830]: I0319 12:36:11.034186 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cce1e-volume-lvm-iscsi-0"] Mar 19 12:36:11.087191 master-0 kubenswrapper[31830]: I0319 12:36:11.087111 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7557f57847-t2m77"] Mar 19 12:36:11.098077 master-0 kubenswrapper[31830]: W0319 12:36:11.097065 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod293ebf87_213b_41aa_86be_a71453a91c0c.slice/crio-ffb64820b482bc0c14792c76ba19db44193a9c2d1f67c51d8339fa14b2c69ef2 WatchSource:0}: Error finding container ffb64820b482bc0c14792c76ba19db44193a9c2d1f67c51d8339fa14b2c69ef2: Status 404 returned error can't find the container with id ffb64820b482bc0c14792c76ba19db44193a9c2d1f67c51d8339fa14b2c69ef2 Mar 19 12:36:11.254042 master-0 kubenswrapper[31830]: I0319 12:36:11.253992 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cce1e-api-0"] Mar 19 12:36:11.466738 master-0 kubenswrapper[31830]: I0319 12:36:11.466682 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7557f57847-t2m77" event={"ID":"293ebf87-213b-41aa-86be-a71453a91c0c","Type":"ContainerStarted","Data":"ffb64820b482bc0c14792c76ba19db44193a9c2d1f67c51d8339fa14b2c69ef2"} Mar 19 12:36:11.468581 master-0 kubenswrapper[31830]: I0319 12:36:11.468546 31830 generic.go:334] "Generic (PLEG): container finished" podID="98e9b667-3127-485d-8970-4debf1ca6259" containerID="b86c6f285286b0bc045feb769c3297dd58644d09fbcc96d9bd7a2df37655e905" exitCode=0 Mar 19 12:36:11.468652 master-0 
kubenswrapper[31830]: I0319 12:36:11.468599 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc8c48879-kkx7f" event={"ID":"98e9b667-3127-485d-8970-4debf1ca6259","Type":"ContainerDied","Data":"b86c6f285286b0bc045feb769c3297dd58644d09fbcc96d9bd7a2df37655e905"} Mar 19 12:36:11.470468 master-0 kubenswrapper[31830]: I0319 12:36:11.470431 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" event={"ID":"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e","Type":"ContainerStarted","Data":"9ee71ae8ceec8486323653c9fb76f98534c51fbf3817d7ece85f18f147f1dd7f"} Mar 19 12:36:11.472703 master-0 kubenswrapper[31830]: I0319 12:36:11.472663 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-569d794d4c-pmgr5" event={"ID":"39eba887-ef1e-47a9-b6cf-6d445d0ae88b","Type":"ContainerStarted","Data":"fd0e679c73990832845bde013a68eb1130ac9ab97aae2dfd245aa98f01f8b37f"} Mar 19 12:36:11.475602 master-0 kubenswrapper[31830]: I0319 12:36:11.475581 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-api-0" event={"ID":"e29b9f27-2667-4fa6-9a91-91d92a7950e7","Type":"ContainerStarted","Data":"11c3606c41ab9b005756d9bcd5e9cf21f7cef63058815801405bdf6a4051ccdf"} Mar 19 12:36:11.629558 master-0 kubenswrapper[31830]: I0319 12:36:11.629493 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cce1e-backup-0"] Mar 19 12:36:11.636050 master-0 kubenswrapper[31830]: I0319 12:36:11.635961 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-569d794d4c-pmgr5" podStartSLOduration=3.635937397 podStartE2EDuration="3.635937397s" podCreationTimestamp="2026-03-19 12:36:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:36:11.574729549 +0000 UTC m=+1310.123690253" watchObservedRunningTime="2026-03-19 12:36:11.635937397 +0000 UTC m=+1310.184898101" Mar 19 12:36:11.946748 master-0 kubenswrapper[31830]: I0319 12:36:11.946697 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dc8c48879-kkx7f" Mar 19 12:36:12.088072 master-0 kubenswrapper[31830]: I0319 12:36:12.087281 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64b98cb88d-7qp8f"] Mar 19 12:36:12.120070 master-0 kubenswrapper[31830]: I0319 12:36:12.120020 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-config\") pod \"98e9b667-3127-485d-8970-4debf1ca6259\" (UID: \"98e9b667-3127-485d-8970-4debf1ca6259\") " Mar 19 12:36:12.120366 master-0 kubenswrapper[31830]: I0319 12:36:12.120146 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px96q\" (UniqueName: \"kubernetes.io/projected/98e9b667-3127-485d-8970-4debf1ca6259-kube-api-access-px96q\") pod \"98e9b667-3127-485d-8970-4debf1ca6259\" (UID: \"98e9b667-3127-485d-8970-4debf1ca6259\") " Mar 19 12:36:12.120366 master-0 kubenswrapper[31830]: I0319 12:36:12.120250 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-ovsdbserver-nb\") pod \"98e9b667-3127-485d-8970-4debf1ca6259\" (UID: \"98e9b667-3127-485d-8970-4debf1ca6259\") " Mar 19 12:36:12.120366 master-0 kubenswrapper[31830]: I0319 12:36:12.120358 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-edpm-a\") pod \"98e9b667-3127-485d-8970-4debf1ca6259\" (UID: \"98e9b667-3127-485d-8970-4debf1ca6259\") " Mar 19 12:36:12.120500 master-0 kubenswrapper[31830]: I0319 12:36:12.120428 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-dns-svc\") pod \"98e9b667-3127-485d-8970-4debf1ca6259\" (UID: \"98e9b667-3127-485d-8970-4debf1ca6259\") " Mar 19 12:36:12.120500 master-0 kubenswrapper[31830]: I0319 12:36:12.120463 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-dns-swift-storage-0\") pod \"98e9b667-3127-485d-8970-4debf1ca6259\" (UID: \"98e9b667-3127-485d-8970-4debf1ca6259\") " Mar 19 12:36:12.120500 master-0 kubenswrapper[31830]: I0319 12:36:12.120496 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-edpm-b\") pod \"98e9b667-3127-485d-8970-4debf1ca6259\" (UID: \"98e9b667-3127-485d-8970-4debf1ca6259\") " Mar 19 12:36:12.120606 master-0 kubenswrapper[31830]: I0319 12:36:12.120518 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-ovsdbserver-sb\") pod \"98e9b667-3127-485d-8970-4debf1ca6259\" (UID: \"98e9b667-3127-485d-8970-4debf1ca6259\") " Mar 19 12:36:12.151224 master-0 kubenswrapper[31830]: I0319 12:36:12.151137 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98e9b667-3127-485d-8970-4debf1ca6259-kube-api-access-px96q" (OuterVolumeSpecName: "kube-api-access-px96q") pod "98e9b667-3127-485d-8970-4debf1ca6259" (UID: "98e9b667-3127-485d-8970-4debf1ca6259"). 
InnerVolumeSpecName "kube-api-access-px96q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:36:12.156293 master-0 kubenswrapper[31830]: I0319 12:36:12.154434 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "98e9b667-3127-485d-8970-4debf1ca6259" (UID: "98e9b667-3127-485d-8970-4debf1ca6259"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:36:12.168274 master-0 kubenswrapper[31830]: I0319 12:36:12.168209 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "98e9b667-3127-485d-8970-4debf1ca6259" (UID: "98e9b667-3127-485d-8970-4debf1ca6259"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:36:12.182598 master-0 kubenswrapper[31830]: I0319 12:36:12.182332 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-edpm-b" (OuterVolumeSpecName: "edpm-b") pod "98e9b667-3127-485d-8970-4debf1ca6259" (UID: "98e9b667-3127-485d-8970-4debf1ca6259"). InnerVolumeSpecName "edpm-b". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:36:12.183118 master-0 kubenswrapper[31830]: I0319 12:36:12.183012 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "98e9b667-3127-485d-8970-4debf1ca6259" (UID: "98e9b667-3127-485d-8970-4debf1ca6259"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:36:12.183525 master-0 kubenswrapper[31830]: I0319 12:36:12.183478 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-edpm-a" (OuterVolumeSpecName: "edpm-a") pod "98e9b667-3127-485d-8970-4debf1ca6259" (UID: "98e9b667-3127-485d-8970-4debf1ca6259"). InnerVolumeSpecName "edpm-a". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:36:12.223741 master-0 kubenswrapper[31830]: I0319 12:36:12.223599 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-config" (OuterVolumeSpecName: "config") pod "98e9b667-3127-485d-8970-4debf1ca6259" (UID: "98e9b667-3127-485d-8970-4debf1ca6259"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:36:12.224376 master-0 kubenswrapper[31830]: I0319 12:36:12.224318 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-config\") pod \"98e9b667-3127-485d-8970-4debf1ca6259\" (UID: \"98e9b667-3127-485d-8970-4debf1ca6259\") " Mar 19 12:36:12.225059 master-0 kubenswrapper[31830]: W0319 12:36:12.225036 31830 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/98e9b667-3127-485d-8970-4debf1ca6259/volumes/kubernetes.io~configmap/config Mar 19 12:36:12.225144 master-0 kubenswrapper[31830]: I0319 12:36:12.225058 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-config" (OuterVolumeSpecName: "config") pod "98e9b667-3127-485d-8970-4debf1ca6259" (UID: "98e9b667-3127-485d-8970-4debf1ca6259"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:36:12.225300 master-0 kubenswrapper[31830]: I0319 12:36:12.225278 31830 reconciler_common.go:293] "Volume detached for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-edpm-a\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:12.225365 master-0 kubenswrapper[31830]: I0319 12:36:12.225300 31830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:12.225365 master-0 kubenswrapper[31830]: I0319 12:36:12.225311 31830 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:12.225365 master-0 kubenswrapper[31830]: I0319 12:36:12.225324 31830 reconciler_common.go:293] "Volume detached for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-edpm-b\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:12.225365 master-0 kubenswrapper[31830]: I0319 12:36:12.225333 31830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:12.225365 master-0 kubenswrapper[31830]: I0319 12:36:12.225341 31830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-config\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:12.225365 master-0 kubenswrapper[31830]: I0319 12:36:12.225352 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px96q\" (UniqueName: \"kubernetes.io/projected/98e9b667-3127-485d-8970-4debf1ca6259-kube-api-access-px96q\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:12.226488 master-0 kubenswrapper[31830]: I0319 12:36:12.226455 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "98e9b667-3127-485d-8970-4debf1ca6259" (UID: "98e9b667-3127-485d-8970-4debf1ca6259"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:36:12.330630 master-0 kubenswrapper[31830]: I0319 12:36:12.330288 31830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98e9b667-3127-485d-8970-4debf1ca6259-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:12.516009 master-0 kubenswrapper[31830]: I0319 12:36:12.514707 31830 generic.go:334] "Generic (PLEG): container finished" podID="293ebf87-213b-41aa-86be-a71453a91c0c" containerID="fd8cb515e6f48fa4810d7740925785737ed4514d65dd33253f75d1d869c99d24" exitCode=0 Mar 19 12:36:12.516009 master-0 kubenswrapper[31830]: I0319 12:36:12.514840 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7557f57847-t2m77" event={"ID":"293ebf87-213b-41aa-86be-a71453a91c0c","Type":"ContainerDied","Data":"fd8cb515e6f48fa4810d7740925785737ed4514d65dd33253f75d1d869c99d24"} Mar 19 12:36:12.530926 master-0 kubenswrapper[31830]: I0319 12:36:12.530858 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc8c48879-kkx7f" event={"ID":"98e9b667-3127-485d-8970-4debf1ca6259","Type":"ContainerDied","Data":"20370796d6b7d56ba0e8733216a3c6ef3649ab8b05dac7b917adb535ee168fe7"} Mar 19 12:36:12.531127 master-0 kubenswrapper[31830]: I0319 12:36:12.530935 31830 scope.go:117] "RemoveContainer" containerID="b86c6f285286b0bc045feb769c3297dd58644d09fbcc96d9bd7a2df37655e905" Mar 19 12:36:12.531127 master-0 kubenswrapper[31830]: I0319 12:36:12.531109 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc8c48879-kkx7f" Mar 19 12:36:12.546581 master-0 kubenswrapper[31830]: I0319 12:36:12.546529 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64b98cb88d-7qp8f" event={"ID":"f63713e2-7d18-4053-b79c-86ab7b8e1e57","Type":"ContainerStarted","Data":"4c0b32a194c064efd865c0225819704a1830c5b8e49a78d509bfd0a4cc84e5b9"} Mar 19 12:36:12.554291 master-0 kubenswrapper[31830]: I0319 12:36:12.554250 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-backup-0" event={"ID":"46a3c173-3990-4ec4-9125-086b417b3b69","Type":"ContainerStarted","Data":"740da661bda19353216548d3f5edfb3f73813bbbcb6b60bcad7ef05c1964cd6b"} Mar 19 12:36:12.556342 master-0 kubenswrapper[31830]: I0319 12:36:12.556315 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-api-0" event={"ID":"e29b9f27-2667-4fa6-9a91-91d92a7950e7","Type":"ContainerStarted","Data":"a2e08da402135711e902b6cc6a76e56115e5418dbbe1e06c2d891ca1f5908d5a"} Mar 19 12:36:12.556698 master-0 kubenswrapper[31830]: I0319 12:36:12.556669 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-569d794d4c-pmgr5" Mar 19 12:36:12.651401 master-0 kubenswrapper[31830]: I0319 12:36:12.651331 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc8c48879-kkx7f"] Mar 19 12:36:12.665033 master-0 kubenswrapper[31830]: I0319 12:36:12.664993 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5dc8c48879-kkx7f"] Mar 19 12:36:13.598366 master-0 kubenswrapper[31830]: I0319 12:36:13.598313 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-api-0" event={"ID":"e29b9f27-2667-4fa6-9a91-91d92a7950e7","Type":"ContainerStarted","Data":"4e7f0677f0650349af13b07c7b8c2c3b4b8f9d6155cfe8a51a3571ff7aff3daf"} Mar 19 12:36:13.599158 master-0 kubenswrapper[31830]: I0319 
Mar 19 12:36:13.615818 master-0 kubenswrapper[31830]: I0319 12:36:13.615747 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7557f57847-t2m77" event={"ID":"293ebf87-213b-41aa-86be-a71453a91c0c","Type":"ContainerStarted","Data":"73c784ae1f1ba201279fca33a48c0e3517d76ecbe82116e10b0ecf59e8173cf5"}
Mar 19 12:36:13.619341 master-0 kubenswrapper[31830]: I0319 12:36:13.619280 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7557f57847-t2m77"
Mar 19 12:36:13.630582 master-0 kubenswrapper[31830]: I0319 12:36:13.630400 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-cce1e-api-0" podStartSLOduration=4.63038033 podStartE2EDuration="4.63038033s" podCreationTimestamp="2026-03-19 12:36:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:36:13.623306191 +0000 UTC m=+1312.172266895" watchObservedRunningTime="2026-03-19 12:36:13.63038033 +0000 UTC m=+1312.179341034"
Mar 19 12:36:13.636516 master-0 kubenswrapper[31830]: I0319 12:36:13.635730 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-scheduler-0" event={"ID":"569ec673-0799-4639-80f6-44155889d03c","Type":"ContainerStarted","Data":"ac960a84a8a49d25a9eed1f44861b73518354897fd89c2e88671a72f60eb44c3"}
Mar 19 12:36:13.644113 master-0 kubenswrapper[31830]: I0319 12:36:13.644052 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" event={"ID":"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e","Type":"ContainerStarted","Data":"d43eaee996917ceb6745545f47af700ee3a2bc4511923c9a37c04aa30afb3c04"}
Mar 19 12:36:13.651006 master-0 kubenswrapper[31830]: I0319 12:36:13.650950 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64b98cb88d-7qp8f" event={"ID":"f63713e2-7d18-4053-b79c-86ab7b8e1e57","Type":"ContainerStarted","Data":"cb6af135f4ae69eedbb4aec9e3cbe89d878ef397b2a48c0d77f21c32471ee978"}
Mar 19 12:36:13.651006 master-0 kubenswrapper[31830]: I0319 12:36:13.651013 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-64b98cb88d-7qp8f"
Mar 19 12:36:13.651006 master-0 kubenswrapper[31830]: I0319 12:36:13.651028 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64b98cb88d-7qp8f" event={"ID":"f63713e2-7d18-4053-b79c-86ab7b8e1e57","Type":"ContainerStarted","Data":"c1e486bc1b061db94e8c2a39ba8abda61e5e754c92bcec99626f94dd2915ed34"}
Mar 19 12:36:13.677846 master-0 kubenswrapper[31830]: I0319 12:36:13.677598 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7557f57847-t2m77" podStartSLOduration=4.677575343 podStartE2EDuration="4.677575343s" podCreationTimestamp="2026-03-19 12:36:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:36:13.660405692 +0000 UTC m=+1312.209366396" watchObservedRunningTime="2026-03-19 12:36:13.677575343 +0000 UTC m=+1312.226536047"
Mar 19 12:36:13.745697 master-0 kubenswrapper[31830]: I0319 12:36:13.739396 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98e9b667-3127-485d-8970-4debf1ca6259" path="/var/lib/kubelet/pods/98e9b667-3127-485d-8970-4debf1ca6259/volumes"
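
The pod_startup_latency_tracker entries carry the kubelet's startup-SLO bookkeeping: podStartE2EDuration is the gap between podCreationTimestamp and the observed running time, and pods that needed no image pull report the zero time (0001-01-01) for both pulling fields. A quick Python check of the arithmetic, with the values copied from the cinder-cce1e-api-0 entry above (sub-second precision truncated to microseconds for datetime):

    from datetime import datetime, timezone

    created = datetime(2026, 3, 19, 12, 36, 9, 0, tzinfo=timezone.utc)        # podCreationTimestamp
    running = datetime(2026, 3, 19, 12, 36, 13, 630380, tzinfo=timezone.utc)  # watchObservedRunningTime

    print((running - created).total_seconds())  # ~4.63038, matching podStartSLOduration=4.63038033
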
Mar 19 12:36:13.837084 master-0 kubenswrapper[31830]: I0319 12:36:13.834713 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-64b98cb88d-7qp8f" podStartSLOduration=4.834693976 podStartE2EDuration="4.834693976s" podCreationTimestamp="2026-03-19 12:36:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:36:13.699445792 +0000 UTC m=+1312.248406496" watchObservedRunningTime="2026-03-19 12:36:13.834693976 +0000 UTC m=+1312.383654680"
Mar 19 12:36:13.883277 master-0 kubenswrapper[31830]: I0319 12:36:13.880679 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-cce1e-api-0"]
Mar 19 12:36:14.662748 master-0 kubenswrapper[31830]: I0319 12:36:14.662685 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-backup-0" event={"ID":"46a3c173-3990-4ec4-9125-086b417b3b69","Type":"ContainerStarted","Data":"090c1faf2666575f0113c6ef10434ceb938ebe4596a3f2caeb440faac9ddb1ba"}
Mar 19 12:36:14.662748 master-0 kubenswrapper[31830]: I0319 12:36:14.662751 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-backup-0" event={"ID":"46a3c173-3990-4ec4-9125-086b417b3b69","Type":"ContainerStarted","Data":"763539fc9243733146a6553966e0a5e874325f8cf74de027693b2e894c092271"}
Mar 19 12:36:14.665527 master-0 kubenswrapper[31830]: I0319 12:36:14.665487 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-scheduler-0" event={"ID":"569ec673-0799-4639-80f6-44155889d03c","Type":"ContainerStarted","Data":"c41ba551b1eb4593e27123776d95ef1602f087c8156505e6d5c6abee484a6e21"}
Mar 19 12:36:14.668044 master-0 kubenswrapper[31830]: I0319 12:36:14.667998 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" event={"ID":"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e","Type":"ContainerStarted","Data":"4788f446ec61a1f61a6723b78d4ea832d97a6cca661d0c232e999296c0f3155b"}
Mar 19 12:36:14.746564 master-0 kubenswrapper[31830]: I0319 12:36:14.746467 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-cce1e-backup-0" podStartSLOduration=4.6787772709999995 podStartE2EDuration="5.746443052s" podCreationTimestamp="2026-03-19 12:36:09 +0000 UTC" firstStartedPulling="2026-03-19 12:36:11.876643572 +0000 UTC m=+1310.425604276" lastFinishedPulling="2026-03-19 12:36:12.944309353 +0000 UTC m=+1311.493270057" observedRunningTime="2026-03-19 12:36:14.737538626 +0000 UTC m=+1313.286499350" watchObservedRunningTime="2026-03-19 12:36:14.746443052 +0000 UTC m=+1313.295403766"
Mar 19 12:36:14.770254 master-0 kubenswrapper[31830]: I0319 12:36:14.770159 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:14.895426 master-0 kubenswrapper[31830]: I0319 12:36:14.895358 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:15.008340 master-0 kubenswrapper[31830]: I0319 12:36:15.008252 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" podStartSLOduration=4.627335024 podStartE2EDuration="6.00823065s" podCreationTimestamp="2026-03-19 12:36:09 +0000 UTC" firstStartedPulling="2026-03-19 12:36:11.054164684 +0000 UTC m=+1309.603125388" lastFinishedPulling="2026-03-19 12:36:12.43506031 +0000 UTC m=+1310.984021014" observedRunningTime="2026-03-19 12:36:15.000191311 +0000 UTC m=+1313.549152025" watchObservedRunningTime="2026-03-19 12:36:15.00823065 +0000 UTC m=+1313.557191354"
Mar 19 12:36:15.199389 master-0 kubenswrapper[31830]: I0319 12:36:15.199302 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/edpm-a-provisionserver-openstackprovisionserver-7444d659762wbcb"
Mar 19 12:36:15.202886 master-0 kubenswrapper[31830]: I0319 12:36:15.202840 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/edpm-a-provisionserver-openstackprovisionserver-7444d659762wbcb"
Mar 19 12:36:15.226913 master-0 kubenswrapper[31830]: I0319 12:36:15.226821 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-cce1e-scheduler-0" podStartSLOduration=5.614191117 podStartE2EDuration="7.226777548s" podCreationTimestamp="2026-03-19 12:36:08 +0000 UTC" firstStartedPulling="2026-03-19 12:36:10.316546609 +0000 UTC m=+1308.865507313" lastFinishedPulling="2026-03-19 12:36:11.92913304 +0000 UTC m=+1310.478093744" observedRunningTime="2026-03-19 12:36:15.212905958 +0000 UTC m=+1313.761866682" watchObservedRunningTime="2026-03-19 12:36:15.226777548 +0000 UTC m=+1313.775738252"
Mar 19 12:36:15.703564 master-0 kubenswrapper[31830]: I0319 12:36:15.700114 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-cce1e-api-0" podUID="e29b9f27-2667-4fa6-9a91-91d92a7950e7" containerName="cinder-cce1e-api-log" containerID="cri-o://a2e08da402135711e902b6cc6a76e56115e5418dbbe1e06c2d891ca1f5908d5a" gracePeriod=30
Mar 19 12:36:15.703564 master-0 kubenswrapper[31830]: I0319 12:36:15.700873 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-cce1e-api-0" podUID="e29b9f27-2667-4fa6-9a91-91d92a7950e7" containerName="cinder-api" containerID="cri-o://4e7f0677f0650349af13b07c7b8c2c3b4b8f9d6155cfe8a51a3571ff7aff3daf" gracePeriod=30
Mar 19 12:36:15.735886 master-0 kubenswrapper[31830]: I0319 12:36:15.729059 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/edpm-a-provisionserver-openstackprovisionserver-7444d659762wbcb"
Mar 19 12:36:15.860822 master-0 kubenswrapper[31830]: I0319 12:36:15.858209 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/edpm-b-provisionserver-openstackprovisionserver-85bcff5d5fq8d8h"
Mar 19 12:36:15.872900 master-0 kubenswrapper[31830]: I0319 12:36:15.870349 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/edpm-b-provisionserver-openstackprovisionserver-85bcff5d5fq8d8h"
Mar 19 12:36:16.045367 master-0 kubenswrapper[31830]: I0319 12:36:16.043910 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/edpm-a-provisionserver-checksum-discovery-g8vpc"]
Mar 19 12:36:16.045367 master-0 kubenswrapper[31830]: E0319 12:36:16.044588 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98e9b667-3127-485d-8970-4debf1ca6259" containerName="init"
Mar 19 12:36:16.045367 master-0 kubenswrapper[31830]: I0319 12:36:16.044611 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="98e9b667-3127-485d-8970-4debf1ca6259" containerName="init"
Mar 19 12:36:16.045367 master-0 kubenswrapper[31830]: I0319 12:36:16.044905 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="98e9b667-3127-485d-8970-4debf1ca6259" containerName="init"
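
The two "Killing container with a grace period" entries record the kubelet stopping both containers of cinder-cce1e-api-0 with gracePeriod=30: the runtime delivers SIGTERM at once and escalates to SIGKILL only if the container outlives the grace period. The exit codes logged about a second later (0 for cinder-api, 143 = 128+SIGTERM for the log sidecar) show both exited inside the window. The deadline arithmetic, in a Python sketch using timestamps taken from these entries:

    from datetime import datetime, timedelta, timezone

    sigterm_at = datetime(2026, 3, 19, 12, 36, 15, 700873, tzinfo=timezone.utc)  # kill issued
    deadline = sigterm_at + timedelta(seconds=30)   # SIGKILL fires here if still running
    died_at = datetime(2026, 3, 19, 12, 36, 16, 758849, tzinfo=timezone.utc)     # both ContainerDied events observed

    print(died_at < deadline)  # True: the pod terminated well inside its grace period
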
Mar 19 12:36:16.046927 master-0 kubenswrapper[31830]: I0319 12:36:16.046517 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/edpm-a-provisionserver-checksum-discovery-g8vpc"
Mar 19 12:36:16.095629 master-0 kubenswrapper[31830]: I0319 12:36:16.095553 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/edpm-a-provisionserver-checksum-discovery-g8vpc"]
Mar 19 12:36:16.160168 master-0 kubenswrapper[31830]: I0319 12:36:16.160122 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-data\" (UniqueName: \"kubernetes.io/empty-dir/f33cd3dd-af62-465e-8e5d-6a2ad7e86748-image-data\") pod \"edpm-a-provisionserver-checksum-discovery-g8vpc\" (UID: \"f33cd3dd-af62-465e-8e5d-6a2ad7e86748\") " pod="openstack/edpm-a-provisionserver-checksum-discovery-g8vpc"
Mar 19 12:36:16.160383 master-0 kubenswrapper[31830]: I0319 12:36:16.160187 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nqx2\" (UniqueName: \"kubernetes.io/projected/f33cd3dd-af62-465e-8e5d-6a2ad7e86748-kube-api-access-8nqx2\") pod \"edpm-a-provisionserver-checksum-discovery-g8vpc\" (UID: \"f33cd3dd-af62-465e-8e5d-6a2ad7e86748\") " pod="openstack/edpm-a-provisionserver-checksum-discovery-g8vpc"
Mar 19 12:36:16.281969 master-0 kubenswrapper[31830]: I0319 12:36:16.263432 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-data\" (UniqueName: \"kubernetes.io/empty-dir/f33cd3dd-af62-465e-8e5d-6a2ad7e86748-image-data\") pod \"edpm-a-provisionserver-checksum-discovery-g8vpc\" (UID: \"f33cd3dd-af62-465e-8e5d-6a2ad7e86748\") " pod="openstack/edpm-a-provisionserver-checksum-discovery-g8vpc"
Mar 19 12:36:16.281969 master-0 kubenswrapper[31830]: I0319 12:36:16.263493 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nqx2\" (UniqueName: \"kubernetes.io/projected/f33cd3dd-af62-465e-8e5d-6a2ad7e86748-kube-api-access-8nqx2\") pod \"edpm-a-provisionserver-checksum-discovery-g8vpc\" (UID: \"f33cd3dd-af62-465e-8e5d-6a2ad7e86748\") " pod="openstack/edpm-a-provisionserver-checksum-discovery-g8vpc"
Mar 19 12:36:16.281969 master-0 kubenswrapper[31830]: I0319 12:36:16.264101 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-data\" (UniqueName: \"kubernetes.io/empty-dir/f33cd3dd-af62-465e-8e5d-6a2ad7e86748-image-data\") pod \"edpm-a-provisionserver-checksum-discovery-g8vpc\" (UID: \"f33cd3dd-af62-465e-8e5d-6a2ad7e86748\") " pod="openstack/edpm-a-provisionserver-checksum-discovery-g8vpc"
Mar 19 12:36:16.305886 master-0 kubenswrapper[31830]: I0319 12:36:16.298751 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nqx2\" (UniqueName: \"kubernetes.io/projected/f33cd3dd-af62-465e-8e5d-6a2ad7e86748-kube-api-access-8nqx2\") pod \"edpm-a-provisionserver-checksum-discovery-g8vpc\" (UID: \"f33cd3dd-af62-465e-8e5d-6a2ad7e86748\") " pod="openstack/edpm-a-provisionserver-checksum-discovery-g8vpc"
Mar 19 12:36:16.414240 master-0 kubenswrapper[31830]: I0319 12:36:16.414181 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/edpm-a-provisionserver-checksum-discovery-g8vpc"
Mar 19 12:36:16.763814 master-0 kubenswrapper[31830]: I0319 12:36:16.758811 31830 generic.go:334] "Generic (PLEG): container finished" podID="e29b9f27-2667-4fa6-9a91-91d92a7950e7" containerID="4e7f0677f0650349af13b07c7b8c2c3b4b8f9d6155cfe8a51a3571ff7aff3daf" exitCode=0
Mar 19 12:36:16.763814 master-0 kubenswrapper[31830]: I0319 12:36:16.758849 31830 generic.go:334] "Generic (PLEG): container finished" podID="e29b9f27-2667-4fa6-9a91-91d92a7950e7" containerID="a2e08da402135711e902b6cc6a76e56115e5418dbbe1e06c2d891ca1f5908d5a" exitCode=143
Mar 19 12:36:16.763814 master-0 kubenswrapper[31830]: I0319 12:36:16.760024 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-api-0" event={"ID":"e29b9f27-2667-4fa6-9a91-91d92a7950e7","Type":"ContainerDied","Data":"4e7f0677f0650349af13b07c7b8c2c3b4b8f9d6155cfe8a51a3571ff7aff3daf"}
Mar 19 12:36:16.763814 master-0 kubenswrapper[31830]: I0319 12:36:16.760052 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-api-0" event={"ID":"e29b9f27-2667-4fa6-9a91-91d92a7950e7","Type":"ContainerDied","Data":"a2e08da402135711e902b6cc6a76e56115e5418dbbe1e06c2d891ca1f5908d5a"}
Mar 19 12:36:16.785821 master-0 kubenswrapper[31830]: I0319 12:36:16.780637 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/edpm-b-provisionserver-openstackprovisionserver-85bcff5d5fq8d8h"
Mar 19 12:36:16.785821 master-0 kubenswrapper[31830]: I0319 12:36:16.785142 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cce1e-api-0"
Mar 19 12:36:16.894883 master-0 kubenswrapper[31830]: I0319 12:36:16.893812 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e29b9f27-2667-4fa6-9a91-91d92a7950e7-config-data\") pod \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\" (UID: \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\") "
Mar 19 12:36:16.894883 master-0 kubenswrapper[31830]: I0319 12:36:16.893897 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e29b9f27-2667-4fa6-9a91-91d92a7950e7-etc-machine-id\") pod \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\" (UID: \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\") "
Mar 19 12:36:16.894883 master-0 kubenswrapper[31830]: I0319 12:36:16.894010 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e29b9f27-2667-4fa6-9a91-91d92a7950e7-logs\") pod \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\" (UID: \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\") "
Mar 19 12:36:16.894883 master-0 kubenswrapper[31830]: I0319 12:36:16.894076 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e29b9f27-2667-4fa6-9a91-91d92a7950e7-config-data-custom\") pod \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\" (UID: \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\") "
Mar 19 12:36:16.894883 master-0 kubenswrapper[31830]: I0319 12:36:16.894119 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e29b9f27-2667-4fa6-9a91-91d92a7950e7-combined-ca-bundle\") pod \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\" (UID: \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\") "
Mar 19 12:36:16.894883 master-0 kubenswrapper[31830]: I0319 12:36:16.894320 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzsv5\" (UniqueName: \"kubernetes.io/projected/e29b9f27-2667-4fa6-9a91-91d92a7950e7-kube-api-access-qzsv5\") pod \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\" (UID: \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\") "
Mar 19 12:36:16.894883 master-0 kubenswrapper[31830]: I0319 12:36:16.894352 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e29b9f27-2667-4fa6-9a91-91d92a7950e7-scripts\") pod \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\" (UID: \"e29b9f27-2667-4fa6-9a91-91d92a7950e7\") "
Mar 19 12:36:16.923686 master-0 kubenswrapper[31830]: I0319 12:36:16.922979 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e29b9f27-2667-4fa6-9a91-91d92a7950e7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e29b9f27-2667-4fa6-9a91-91d92a7950e7" (UID: "e29b9f27-2667-4fa6-9a91-91d92a7950e7"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 12:36:16.927831 master-0 kubenswrapper[31830]: I0319 12:36:16.924016 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e29b9f27-2667-4fa6-9a91-91d92a7950e7-logs" (OuterVolumeSpecName: "logs") pod "e29b9f27-2667-4fa6-9a91-91d92a7950e7" (UID: "e29b9f27-2667-4fa6-9a91-91d92a7950e7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 19 12:36:16.948292 master-0 kubenswrapper[31830]: I0319 12:36:16.942757 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e29b9f27-2667-4fa6-9a91-91d92a7950e7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e29b9f27-2667-4fa6-9a91-91d92a7950e7" (UID: "e29b9f27-2667-4fa6-9a91-91d92a7950e7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 12:36:16.959827 master-0 kubenswrapper[31830]: I0319 12:36:16.954576 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e29b9f27-2667-4fa6-9a91-91d92a7950e7-kube-api-access-qzsv5" (OuterVolumeSpecName: "kube-api-access-qzsv5") pod "e29b9f27-2667-4fa6-9a91-91d92a7950e7" (UID: "e29b9f27-2667-4fa6-9a91-91d92a7950e7"). InnerVolumeSpecName "kube-api-access-qzsv5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 12:36:16.983836 master-0 kubenswrapper[31830]: I0319 12:36:16.981148 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e29b9f27-2667-4fa6-9a91-91d92a7950e7-scripts" (OuterVolumeSpecName: "scripts") pod "e29b9f27-2667-4fa6-9a91-91d92a7950e7" (UID: "e29b9f27-2667-4fa6-9a91-91d92a7950e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 12:36:17.007961 master-0 kubenswrapper[31830]: I0319 12:36:17.007902 31830 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e29b9f27-2667-4fa6-9a91-91d92a7950e7-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Mar 19 12:36:17.007961 master-0 kubenswrapper[31830]: I0319 12:36:17.007956 31830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e29b9f27-2667-4fa6-9a91-91d92a7950e7-logs\") on node \"master-0\" DevicePath \"\""
Mar 19 12:36:17.007961 master-0 kubenswrapper[31830]: I0319 12:36:17.007971 31830 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e29b9f27-2667-4fa6-9a91-91d92a7950e7-config-data-custom\") on node \"master-0\" DevicePath \"\""
Mar 19 12:36:17.008697 master-0 kubenswrapper[31830]: I0319 12:36:17.007985 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzsv5\" (UniqueName: \"kubernetes.io/projected/e29b9f27-2667-4fa6-9a91-91d92a7950e7-kube-api-access-qzsv5\") on node \"master-0\" DevicePath \"\""
Mar 19 12:36:17.008697 master-0 kubenswrapper[31830]: I0319 12:36:17.007999 31830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e29b9f27-2667-4fa6-9a91-91d92a7950e7-scripts\") on node \"master-0\" DevicePath \"\""
Mar 19 12:36:17.026820 master-0 kubenswrapper[31830]: I0319 12:36:17.026332 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e29b9f27-2667-4fa6-9a91-91d92a7950e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e29b9f27-2667-4fa6-9a91-91d92a7950e7" (UID: "e29b9f27-2667-4fa6-9a91-91d92a7950e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 12:36:17.038828 master-0 kubenswrapper[31830]: I0319 12:36:17.034821 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e29b9f27-2667-4fa6-9a91-91d92a7950e7-config-data" (OuterVolumeSpecName: "config-data") pod "e29b9f27-2667-4fa6-9a91-91d92a7950e7" (UID: "e29b9f27-2667-4fa6-9a91-91d92a7950e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:36:17.112524 master-0 kubenswrapper[31830]: W0319 12:36:17.111050 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf33cd3dd_af62_465e_8e5d_6a2ad7e86748.slice/crio-ca2307f3bdb6b4b04319a9540a4149f2bae1b83c7f78569206f15e70bb15556e WatchSource:0}: Error finding container ca2307f3bdb6b4b04319a9540a4149f2bae1b83c7f78569206f15e70bb15556e: Status 404 returned error can't find the container with id ca2307f3bdb6b4b04319a9540a4149f2bae1b83c7f78569206f15e70bb15556e Mar 19 12:36:17.113509 master-0 kubenswrapper[31830]: I0319 12:36:17.113470 31830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e29b9f27-2667-4fa6-9a91-91d92a7950e7-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:17.113651 master-0 kubenswrapper[31830]: I0319 12:36:17.113518 31830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e29b9f27-2667-4fa6-9a91-91d92a7950e7-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:17.122256 master-0 kubenswrapper[31830]: I0319 12:36:17.116969 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/edpm-a-provisionserver-checksum-discovery-g8vpc"] Mar 19 12:36:17.194309 master-0 kubenswrapper[31830]: I0319 12:36:17.194155 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/edpm-b-provisionserver-checksum-discovery-m9sgp"] Mar 19 12:36:17.195015 master-0 kubenswrapper[31830]: E0319 12:36:17.194780 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e29b9f27-2667-4fa6-9a91-91d92a7950e7" containerName="cinder-cce1e-api-log" Mar 19 12:36:17.195015 master-0 kubenswrapper[31830]: I0319 12:36:17.194825 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e29b9f27-2667-4fa6-9a91-91d92a7950e7" containerName="cinder-cce1e-api-log" Mar 19 12:36:17.195015 master-0 kubenswrapper[31830]: E0319 12:36:17.194843 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e29b9f27-2667-4fa6-9a91-91d92a7950e7" containerName="cinder-api" Mar 19 12:36:17.195015 master-0 kubenswrapper[31830]: I0319 12:36:17.194852 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e29b9f27-2667-4fa6-9a91-91d92a7950e7" containerName="cinder-api" Mar 19 12:36:17.195253 master-0 kubenswrapper[31830]: I0319 12:36:17.195102 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="e29b9f27-2667-4fa6-9a91-91d92a7950e7" containerName="cinder-api" Mar 19 12:36:17.195253 master-0 kubenswrapper[31830]: I0319 12:36:17.195140 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="e29b9f27-2667-4fa6-9a91-91d92a7950e7" containerName="cinder-cce1e-api-log" Mar 19 12:36:17.208919 master-0 kubenswrapper[31830]: I0319 12:36:17.202709 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/edpm-b-provisionserver-checksum-discovery-m9sgp" Mar 19 12:36:17.249183 master-0 kubenswrapper[31830]: I0319 12:36:17.244253 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/edpm-b-provisionserver-checksum-discovery-m9sgp"] Mar 19 12:36:17.320832 master-0 kubenswrapper[31830]: I0319 12:36:17.317267 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57rf7\" (UniqueName: \"kubernetes.io/projected/35f4844e-6e9b-4f93-a711-1e673e39add8-kube-api-access-57rf7\") pod \"edpm-b-provisionserver-checksum-discovery-m9sgp\" (UID: \"35f4844e-6e9b-4f93-a711-1e673e39add8\") " pod="openstack/edpm-b-provisionserver-checksum-discovery-m9sgp" Mar 19 12:36:17.320832 master-0 kubenswrapper[31830]: I0319 12:36:17.317584 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-data\" (UniqueName: \"kubernetes.io/empty-dir/35f4844e-6e9b-4f93-a711-1e673e39add8-image-data\") pod \"edpm-b-provisionserver-checksum-discovery-m9sgp\" (UID: \"35f4844e-6e9b-4f93-a711-1e673e39add8\") " pod="openstack/edpm-b-provisionserver-checksum-discovery-m9sgp" Mar 19 12:36:17.420535 master-0 kubenswrapper[31830]: I0319 12:36:17.420460 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57rf7\" (UniqueName: \"kubernetes.io/projected/35f4844e-6e9b-4f93-a711-1e673e39add8-kube-api-access-57rf7\") pod \"edpm-b-provisionserver-checksum-discovery-m9sgp\" (UID: \"35f4844e-6e9b-4f93-a711-1e673e39add8\") " pod="openstack/edpm-b-provisionserver-checksum-discovery-m9sgp" Mar 19 12:36:17.421011 master-0 kubenswrapper[31830]: I0319 12:36:17.420974 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-data\" (UniqueName: \"kubernetes.io/empty-dir/35f4844e-6e9b-4f93-a711-1e673e39add8-image-data\") pod \"edpm-b-provisionserver-checksum-discovery-m9sgp\" (UID: \"35f4844e-6e9b-4f93-a711-1e673e39add8\") " pod="openstack/edpm-b-provisionserver-checksum-discovery-m9sgp" Mar 19 12:36:17.421564 master-0 kubenswrapper[31830]: I0319 12:36:17.421518 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-data\" (UniqueName: \"kubernetes.io/empty-dir/35f4844e-6e9b-4f93-a711-1e673e39add8-image-data\") pod \"edpm-b-provisionserver-checksum-discovery-m9sgp\" (UID: \"35f4844e-6e9b-4f93-a711-1e673e39add8\") " pod="openstack/edpm-b-provisionserver-checksum-discovery-m9sgp" Mar 19 12:36:17.452194 master-0 kubenswrapper[31830]: I0319 12:36:17.452048 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57rf7\" (UniqueName: \"kubernetes.io/projected/35f4844e-6e9b-4f93-a711-1e673e39add8-kube-api-access-57rf7\") pod \"edpm-b-provisionserver-checksum-discovery-m9sgp\" (UID: \"35f4844e-6e9b-4f93-a711-1e673e39add8\") " pod="openstack/edpm-b-provisionserver-checksum-discovery-m9sgp" Mar 19 12:36:17.560511 master-0 kubenswrapper[31830]: I0319 12:36:17.560452 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/edpm-b-provisionserver-checksum-discovery-m9sgp" Mar 19 12:36:17.770908 master-0 kubenswrapper[31830]: I0319 12:36:17.770852 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/edpm-a-provisionserver-checksum-discovery-g8vpc" event={"ID":"f33cd3dd-af62-465e-8e5d-6a2ad7e86748","Type":"ContainerStarted","Data":"dd0b18994ca266cf34b06540b08eedc5c31e1c3a1dcc6208aa709913cac07117"} Mar 19 12:36:17.770908 master-0 kubenswrapper[31830]: I0319 12:36:17.770902 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/edpm-a-provisionserver-checksum-discovery-g8vpc" event={"ID":"f33cd3dd-af62-465e-8e5d-6a2ad7e86748","Type":"ContainerStarted","Data":"ca2307f3bdb6b4b04319a9540a4149f2bae1b83c7f78569206f15e70bb15556e"} Mar 19 12:36:17.775785 master-0 kubenswrapper[31830]: I0319 12:36:17.775732 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cce1e-api-0" Mar 19 12:36:17.776295 master-0 kubenswrapper[31830]: I0319 12:36:17.776270 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-api-0" event={"ID":"e29b9f27-2667-4fa6-9a91-91d92a7950e7","Type":"ContainerDied","Data":"11c3606c41ab9b005756d9bcd5e9cf21f7cef63058815801405bdf6a4051ccdf"} Mar 19 12:36:17.776341 master-0 kubenswrapper[31830]: I0319 12:36:17.776304 31830 scope.go:117] "RemoveContainer" containerID="4e7f0677f0650349af13b07c7b8c2c3b4b8f9d6155cfe8a51a3571ff7aff3daf" Mar 19 12:36:17.817640 master-0 kubenswrapper[31830]: I0319 12:36:17.817606 31830 scope.go:117] "RemoveContainer" containerID="a2e08da402135711e902b6cc6a76e56115e5418dbbe1e06c2d891ca1f5908d5a" Mar 19 12:36:19.380044 master-0 kubenswrapper[31830]: I0319 12:36:19.379214 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-cce1e-api-0"] Mar 19 12:36:19.390049 master-0 kubenswrapper[31830]: I0319 12:36:19.389850 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/edpm-b-provisionserver-checksum-discovery-m9sgp"] Mar 19 12:36:19.390956 master-0 kubenswrapper[31830]: W0319 12:36:19.390919 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35f4844e_6e9b_4f93_a711_1e673e39add8.slice/crio-7817689c1558fce7d9388fadfc8374ac6460faeb8b9811f3e745003951e303b3 WatchSource:0}: Error finding container 7817689c1558fce7d9388fadfc8374ac6460faeb8b9811f3e745003951e303b3: Status 404 returned error can't find the container with id 7817689c1558fce7d9388fadfc8374ac6460faeb8b9811f3e745003951e303b3 Mar 19 12:36:19.463909 master-0 kubenswrapper[31830]: I0319 12:36:19.463841 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-cce1e-scheduler-0" Mar 19 12:36:19.798448 master-0 kubenswrapper[31830]: I0319 12:36:19.798368 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/edpm-b-provisionserver-checksum-discovery-m9sgp" event={"ID":"35f4844e-6e9b-4f93-a711-1e673e39add8","Type":"ContainerStarted","Data":"7817689c1558fce7d9388fadfc8374ac6460faeb8b9811f3e745003951e303b3"} Mar 19 12:36:19.967625 master-0 kubenswrapper[31830]: I0319 12:36:19.967548 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-cce1e-scheduler-0" podUID="569ec673-0799-4639-80f6-44155889d03c" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 12:36:19.979784 master-0 kubenswrapper[31830]: I0319 12:36:19.979703 31830 
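
The prober entries give the concrete failure ("HTTP probe failed with statuscode: 500") while the SyncLoop (probe) entries track the resulting transitions (startup unhealthy, then started once a probe passes). A Python sketch (hypothetical helper) for tallying failures per pod and container across a capture, so repeated startup-probe 500s like those from cinder-cce1e-scheduler-0 and cinder-cce1e-volume-lvm-iscsi-0 stand out:

    # Sketch: count "Probe failed" entries per (pod, container, probe type).
    import re
    from collections import Counter

    PROBE_RE = re.compile(
        r'"Probe failed" probeType="(?P<t>[^"]+)" pod="(?P<pod>[^"]+)" '
        r'podUID="[^"]+" containerName="(?P<c>[^"]+)"'
    )

    def probe_failures(lines):
        tally = Counter()
        for line in lines:
            m = PROBE_RE.search(line)
            if m:
                tally[(m.group("pod"), m.group("c"), m.group("t"))] += 1
        return tally
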
Mar 19 12:36:20.045076 master-0 kubenswrapper[31830]: I0319 12:36:20.045010 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7557f57847-t2m77"
Mar 19 12:36:20.101693 master-0 kubenswrapper[31830]: I0319 12:36:20.101572 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" podUID="1176695c-7f77-4a4c-91ef-86fb8eeaaf7e" containerName="cinder-volume" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 12:36:20.172512 master-0 kubenswrapper[31830]: I0319 12:36:20.172432 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-cce1e-api-0"]
Mar 19 12:36:20.218124 master-0 kubenswrapper[31830]: I0319 12:36:20.217292 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-cce1e-api-0"]
Mar 19 12:36:20.223008 master-0 kubenswrapper[31830]: I0319 12:36:20.219220 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cce1e-api-0"
Mar 19 12:36:20.231951 master-0 kubenswrapper[31830]: I0319 12:36:20.231774 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Mar 19 12:36:20.232178 master-0 kubenswrapper[31830]: I0319 12:36:20.232030 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cce1e-api-config-data"
Mar 19 12:36:20.232178 master-0 kubenswrapper[31830]: I0319 12:36:20.232172 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Mar 19 12:36:20.337911 master-0 kubenswrapper[31830]: I0319 12:36:20.329600 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d02c596d-10f7-46cc-baef-11d61e942bb3-config-data\") pod \"cinder-cce1e-api-0\" (UID: \"d02c596d-10f7-46cc-baef-11d61e942bb3\") " pod="openstack/cinder-cce1e-api-0"
Mar 19 12:36:20.337911 master-0 kubenswrapper[31830]: I0319 12:36:20.329766 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d02c596d-10f7-46cc-baef-11d61e942bb3-scripts\") pod \"cinder-cce1e-api-0\" (UID: \"d02c596d-10f7-46cc-baef-11d61e942bb3\") " pod="openstack/cinder-cce1e-api-0"
Mar 19 12:36:20.337911 master-0 kubenswrapper[31830]: I0319 12:36:20.329849 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d02c596d-10f7-46cc-baef-11d61e942bb3-etc-machine-id\") pod \"cinder-cce1e-api-0\" (UID: \"d02c596d-10f7-46cc-baef-11d61e942bb3\") " pod="openstack/cinder-cce1e-api-0"
Mar 19 12:36:20.337911 master-0 kubenswrapper[31830]: I0319 12:36:20.329956 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d02c596d-10f7-46cc-baef-11d61e942bb3-logs\") pod \"cinder-cce1e-api-0\" (UID: \"d02c596d-10f7-46cc-baef-11d61e942bb3\") " pod="openstack/cinder-cce1e-api-0"
Mar 19 12:36:20.337911 master-0 kubenswrapper[31830]: I0319 12:36:20.330172 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d02c596d-10f7-46cc-baef-11d61e942bb3-config-data-custom\") pod \"cinder-cce1e-api-0\" (UID: \"d02c596d-10f7-46cc-baef-11d61e942bb3\") " pod="openstack/cinder-cce1e-api-0"
Mar 19 12:36:20.337911 master-0 kubenswrapper[31830]: I0319 12:36:20.330244 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb4qt\" (UniqueName: \"kubernetes.io/projected/d02c596d-10f7-46cc-baef-11d61e942bb3-kube-api-access-bb4qt\") pod \"cinder-cce1e-api-0\" (UID: \"d02c596d-10f7-46cc-baef-11d61e942bb3\") " pod="openstack/cinder-cce1e-api-0"
Mar 19 12:36:20.337911 master-0 kubenswrapper[31830]: I0319 12:36:20.330295 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d02c596d-10f7-46cc-baef-11d61e942bb3-internal-tls-certs\") pod \"cinder-cce1e-api-0\" (UID: \"d02c596d-10f7-46cc-baef-11d61e942bb3\") " pod="openstack/cinder-cce1e-api-0"
Mar 19 12:36:20.337911 master-0 kubenswrapper[31830]: I0319 12:36:20.330399 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d02c596d-10f7-46cc-baef-11d61e942bb3-public-tls-certs\") pod \"cinder-cce1e-api-0\" (UID: \"d02c596d-10f7-46cc-baef-11d61e942bb3\") " pod="openstack/cinder-cce1e-api-0"
Mar 19 12:36:20.337911 master-0 kubenswrapper[31830]: I0319 12:36:20.330442 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d02c596d-10f7-46cc-baef-11d61e942bb3-combined-ca-bundle\") pod \"cinder-cce1e-api-0\" (UID: \"d02c596d-10f7-46cc-baef-11d61e942bb3\") " pod="openstack/cinder-cce1e-api-0"
Mar 19 12:36:20.433635 master-0 kubenswrapper[31830]: I0319 12:36:20.433304 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d02c596d-10f7-46cc-baef-11d61e942bb3-config-data-custom\") pod \"cinder-cce1e-api-0\" (UID: \"d02c596d-10f7-46cc-baef-11d61e942bb3\") " pod="openstack/cinder-cce1e-api-0"
Mar 19 12:36:20.433635 master-0 kubenswrapper[31830]: I0319 12:36:20.433395 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb4qt\" (UniqueName: \"kubernetes.io/projected/d02c596d-10f7-46cc-baef-11d61e942bb3-kube-api-access-bb4qt\") pod \"cinder-cce1e-api-0\" (UID: \"d02c596d-10f7-46cc-baef-11d61e942bb3\") " pod="openstack/cinder-cce1e-api-0"
Mar 19 12:36:20.433635 master-0 kubenswrapper[31830]: I0319 12:36:20.433431 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d02c596d-10f7-46cc-baef-11d61e942bb3-internal-tls-certs\") pod \"cinder-cce1e-api-0\" (UID: \"d02c596d-10f7-46cc-baef-11d61e942bb3\") " pod="openstack/cinder-cce1e-api-0"
Mar 19 12:36:20.433635 master-0 kubenswrapper[31830]: I0319 12:36:20.433491 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d02c596d-10f7-46cc-baef-11d61e942bb3-public-tls-certs\") pod \"cinder-cce1e-api-0\" (UID: \"d02c596d-10f7-46cc-baef-11d61e942bb3\") " pod="openstack/cinder-cce1e-api-0"
Mar 19 12:36:20.433635 master-0 kubenswrapper[31830]: I0319 12:36:20.433523 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d02c596d-10f7-46cc-baef-11d61e942bb3-combined-ca-bundle\") pod \"cinder-cce1e-api-0\" (UID: \"d02c596d-10f7-46cc-baef-11d61e942bb3\") " pod="openstack/cinder-cce1e-api-0"
Mar 19 12:36:20.433635 master-0 kubenswrapper[31830]: I0319 12:36:20.433575 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d02c596d-10f7-46cc-baef-11d61e942bb3-config-data\") pod \"cinder-cce1e-api-0\" (UID: \"d02c596d-10f7-46cc-baef-11d61e942bb3\") " pod="openstack/cinder-cce1e-api-0"
Mar 19 12:36:20.433635 master-0 kubenswrapper[31830]: I0319 12:36:20.433621 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d02c596d-10f7-46cc-baef-11d61e942bb3-scripts\") pod \"cinder-cce1e-api-0\" (UID: \"d02c596d-10f7-46cc-baef-11d61e942bb3\") " pod="openstack/cinder-cce1e-api-0"
Mar 19 12:36:20.433635 master-0 kubenswrapper[31830]: I0319 12:36:20.433649 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d02c596d-10f7-46cc-baef-11d61e942bb3-etc-machine-id\") pod \"cinder-cce1e-api-0\" (UID: \"d02c596d-10f7-46cc-baef-11d61e942bb3\") " pod="openstack/cinder-cce1e-api-0"
Mar 19 12:36:20.435079 master-0 kubenswrapper[31830]: I0319 12:36:20.433685 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d02c596d-10f7-46cc-baef-11d61e942bb3-logs\") pod \"cinder-cce1e-api-0\" (UID: \"d02c596d-10f7-46cc-baef-11d61e942bb3\") " pod="openstack/cinder-cce1e-api-0"
Mar 19 12:36:20.472116 master-0 kubenswrapper[31830]: I0319 12:36:20.447512 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cce1e-api-0"]
Mar 19 12:36:20.472116 master-0 kubenswrapper[31830]: I0319 12:36:20.448374 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d02c596d-10f7-46cc-baef-11d61e942bb3-etc-machine-id\") pod \"cinder-cce1e-api-0\" (UID: \"d02c596d-10f7-46cc-baef-11d61e942bb3\") " pod="openstack/cinder-cce1e-api-0"
Mar 19 12:36:20.472116 master-0 kubenswrapper[31830]: I0319 12:36:20.457161 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d02c596d-10f7-46cc-baef-11d61e942bb3-config-data\") pod \"cinder-cce1e-api-0\" (UID: \"d02c596d-10f7-46cc-baef-11d61e942bb3\") " pod="openstack/cinder-cce1e-api-0"
Mar 19 12:36:20.474373 master-0 kubenswrapper[31830]: I0319 12:36:20.474331 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d02c596d-10f7-46cc-baef-11d61e942bb3-public-tls-certs\") pod \"cinder-cce1e-api-0\" (UID: \"d02c596d-10f7-46cc-baef-11d61e942bb3\") " pod="openstack/cinder-cce1e-api-0"
Mar 19 12:36:20.474620 master-0 kubenswrapper[31830]: I0319 12:36:20.474569 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d02c596d-10f7-46cc-baef-11d61e942bb3-logs\") pod \"cinder-cce1e-api-0\" (UID: \"d02c596d-10f7-46cc-baef-11d61e942bb3\") " pod="openstack/cinder-cce1e-api-0"
Mar 19 12:36:20.479225 master-0 kubenswrapper[31830]: I0319 12:36:20.479175 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77777b4857-hrt6t"]
Mar 19 12:36:20.480349 master-0 kubenswrapper[31830]: I0319 12:36:20.480125 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d02c596d-10f7-46cc-baef-11d61e942bb3-internal-tls-certs\") pod \"cinder-cce1e-api-0\" (UID: \"d02c596d-10f7-46cc-baef-11d61e942bb3\") " pod="openstack/cinder-cce1e-api-0"
Mar 19 12:36:20.483722 master-0 kubenswrapper[31830]: I0319 12:36:20.483673 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d02c596d-10f7-46cc-baef-11d61e942bb3-config-data-custom\") pod \"cinder-cce1e-api-0\" (UID: \"d02c596d-10f7-46cc-baef-11d61e942bb3\") " pod="openstack/cinder-cce1e-api-0"
Mar 19 12:36:20.491554 master-0 kubenswrapper[31830]: I0319 12:36:20.484684 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d02c596d-10f7-46cc-baef-11d61e942bb3-scripts\") pod \"cinder-cce1e-api-0\" (UID: \"d02c596d-10f7-46cc-baef-11d61e942bb3\") " pod="openstack/cinder-cce1e-api-0"
Mar 19 12:36:20.491554 master-0 kubenswrapper[31830]: I0319 12:36:20.487843 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77777b4857-hrt6t" podUID="a3dcd24b-6811-4650-adb2-352c99e50b99" containerName="dnsmasq-dns" containerID="cri-o://fac1dfd8fe49e8139b79a255b8309437775b2298893d36f2236b561952a3d8e9" gracePeriod=10
Mar 19 12:36:20.491554 master-0 kubenswrapper[31830]: I0319 12:36:20.490138 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d02c596d-10f7-46cc-baef-11d61e942bb3-combined-ca-bundle\") pod \"cinder-cce1e-api-0\" (UID: \"d02c596d-10f7-46cc-baef-11d61e942bb3\") " pod="openstack/cinder-cce1e-api-0"
Mar 19 12:36:20.517179 master-0 kubenswrapper[31830]: I0319 12:36:20.516890 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-cce1e-backup-0"]
Mar 19 12:36:20.547628 master-0 kubenswrapper[31830]: I0319 12:36:20.547514 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-949dd44b5-vklms"]
Mar 19 12:36:20.551234 master-0 kubenswrapper[31830]: I0319 12:36:20.551159 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-949dd44b5-vklms"
Mar 19 12:36:20.554614 master-0 kubenswrapper[31830]: I0319 12:36:20.554563 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Mar 19 12:36:20.555092 master-0 kubenswrapper[31830]: I0319 12:36:20.554840 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Mar 19 12:36:20.573191 master-0 kubenswrapper[31830]: I0319 12:36:20.573139 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-949dd44b5-vklms"]
Mar 19 12:36:20.655790 master-0 kubenswrapper[31830]: I0319 12:36:20.655730 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5x9n\" (UniqueName: \"kubernetes.io/projected/3514735a-13b6-4fed-a4e7-377a12bbc374-kube-api-access-f5x9n\") pod \"neutron-949dd44b5-vklms\" (UID: \"3514735a-13b6-4fed-a4e7-377a12bbc374\") " pod="openstack/neutron-949dd44b5-vklms"
Mar 19 12:36:20.655790 master-0 kubenswrapper[31830]: I0319 12:36:20.655825 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3514735a-13b6-4fed-a4e7-377a12bbc374-public-tls-certs\") pod \"neutron-949dd44b5-vklms\" (UID: \"3514735a-13b6-4fed-a4e7-377a12bbc374\") " pod="openstack/neutron-949dd44b5-vklms"
Mar 19 12:36:20.656086 master-0 kubenswrapper[31830]: I0319 12:36:20.655858 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3514735a-13b6-4fed-a4e7-377a12bbc374-ovndb-tls-certs\") pod \"neutron-949dd44b5-vklms\" (UID: \"3514735a-13b6-4fed-a4e7-377a12bbc374\") " pod="openstack/neutron-949dd44b5-vklms"
Mar 19 12:36:20.656086 master-0 kubenswrapper[31830]: I0319 12:36:20.655907 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3514735a-13b6-4fed-a4e7-377a12bbc374-httpd-config\") pod \"neutron-949dd44b5-vklms\" (UID: \"3514735a-13b6-4fed-a4e7-377a12bbc374\") " pod="openstack/neutron-949dd44b5-vklms"
Mar 19 12:36:20.656086 master-0 kubenswrapper[31830]: I0319 12:36:20.655933 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3514735a-13b6-4fed-a4e7-377a12bbc374-config\") pod \"neutron-949dd44b5-vklms\" (UID: \"3514735a-13b6-4fed-a4e7-377a12bbc374\") " pod="openstack/neutron-949dd44b5-vklms"
Mar 19 12:36:20.656086 master-0 kubenswrapper[31830]: I0319 12:36:20.655973 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3514735a-13b6-4fed-a4e7-377a12bbc374-internal-tls-certs\") pod \"neutron-949dd44b5-vklms\" (UID: \"3514735a-13b6-4fed-a4e7-377a12bbc374\") " pod="openstack/neutron-949dd44b5-vklms"
Mar 19 12:36:20.656086 master-0 kubenswrapper[31830]: I0319 12:36:20.656027 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3514735a-13b6-4fed-a4e7-377a12bbc374-combined-ca-bundle\") pod \"neutron-949dd44b5-vklms\" (UID: \"3514735a-13b6-4fed-a4e7-377a12bbc374\") " pod="openstack/neutron-949dd44b5-vklms"
Mar 19 12:36:20.687293 master-0 kubenswrapper[31830]: I0319 12:36:20.687099 31830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77777b4857-hrt6t" podUID="a3dcd24b-6811-4650-adb2-352c99e50b99" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.211:5353: connect: connection refused"
Mar 19 12:36:20.758869 master-0 kubenswrapper[31830]: I0319 12:36:20.758513 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3514735a-13b6-4fed-a4e7-377a12bbc374-ovndb-tls-certs\") pod \"neutron-949dd44b5-vklms\" (UID: \"3514735a-13b6-4fed-a4e7-377a12bbc374\") " pod="openstack/neutron-949dd44b5-vklms"
Mar 19 12:36:20.758869 master-0 kubenswrapper[31830]: I0319 12:36:20.758643 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3514735a-13b6-4fed-a4e7-377a12bbc374-httpd-config\") pod \"neutron-949dd44b5-vklms\" (UID: \"3514735a-13b6-4fed-a4e7-377a12bbc374\") " pod="openstack/neutron-949dd44b5-vklms"
Mar 19 12:36:20.758869 master-0 kubenswrapper[31830]: I0319 12:36:20.758684 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3514735a-13b6-4fed-a4e7-377a12bbc374-config\") pod \"neutron-949dd44b5-vklms\" (UID: \"3514735a-13b6-4fed-a4e7-377a12bbc374\") " pod="openstack/neutron-949dd44b5-vklms"
Mar 19 12:36:20.758869 master-0 kubenswrapper[31830]: I0319 12:36:20.758741 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3514735a-13b6-4fed-a4e7-377a12bbc374-internal-tls-certs\") pod \"neutron-949dd44b5-vklms\" (UID: \"3514735a-13b6-4fed-a4e7-377a12bbc374\") " pod="openstack/neutron-949dd44b5-vklms"
Mar 19 12:36:20.759613 master-0 kubenswrapper[31830]: I0319 12:36:20.759553 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3514735a-13b6-4fed-a4e7-377a12bbc374-combined-ca-bundle\") pod \"neutron-949dd44b5-vklms\" (UID: \"3514735a-13b6-4fed-a4e7-377a12bbc374\") " pod="openstack/neutron-949dd44b5-vklms"
Mar 19 12:36:20.759910 master-0 kubenswrapper[31830]: I0319 12:36:20.759875 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5x9n\" (UniqueName: \"kubernetes.io/projected/3514735a-13b6-4fed-a4e7-377a12bbc374-kube-api-access-f5x9n\") pod \"neutron-949dd44b5-vklms\" (UID: \"3514735a-13b6-4fed-a4e7-377a12bbc374\") " pod="openstack/neutron-949dd44b5-vklms"
Mar 19 12:36:20.760027 master-0 kubenswrapper[31830]: I0319 12:36:20.760001 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3514735a-13b6-4fed-a4e7-377a12bbc374-public-tls-certs\") pod \"neutron-949dd44b5-vklms\" (UID: \"3514735a-13b6-4fed-a4e7-377a12bbc374\") " pod="openstack/neutron-949dd44b5-vklms"
Mar 19 12:36:20.762727 master-0 kubenswrapper[31830]: I0319 12:36:20.762675 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3514735a-13b6-4fed-a4e7-377a12bbc374-ovndb-tls-certs\") pod \"neutron-949dd44b5-vklms\" (UID: \"3514735a-13b6-4fed-a4e7-377a12bbc374\") " pod="openstack/neutron-949dd44b5-vklms"
Mar 19 12:36:20.764460 master-0 kubenswrapper[31830]: I0319 12:36:20.764429 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3514735a-13b6-4fed-a4e7-377a12bbc374-public-tls-certs\") pod \"neutron-949dd44b5-vklms\" (UID: \"3514735a-13b6-4fed-a4e7-377a12bbc374\") " pod="openstack/neutron-949dd44b5-vklms"
Mar 19 12:36:20.764582 master-0 kubenswrapper[31830]: I0319 12:36:20.764555 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3514735a-13b6-4fed-a4e7-377a12bbc374-config\") pod \"neutron-949dd44b5-vklms\" (UID: \"3514735a-13b6-4fed-a4e7-377a12bbc374\") " pod="openstack/neutron-949dd44b5-vklms"
Mar 19 12:36:20.764834 master-0 kubenswrapper[31830]: I0319 12:36:20.764775 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3514735a-13b6-4fed-a4e7-377a12bbc374-combined-ca-bundle\") pod \"neutron-949dd44b5-vklms\" (UID: \"3514735a-13b6-4fed-a4e7-377a12bbc374\") " pod="openstack/neutron-949dd44b5-vklms"
Mar 19 12:36:20.772252 master-0 kubenswrapper[31830]: I0319 12:36:20.772183 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3514735a-13b6-4fed-a4e7-377a12bbc374-internal-tls-certs\") pod \"neutron-949dd44b5-vklms\" (UID: \"3514735a-13b6-4fed-a4e7-377a12bbc374\") " pod="openstack/neutron-949dd44b5-vklms"
Mar 19 12:36:20.780007 master-0 kubenswrapper[31830]: I0319 12:36:20.779948 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3514735a-13b6-4fed-a4e7-377a12bbc374-httpd-config\") pod \"neutron-949dd44b5-vklms\" (UID: \"3514735a-13b6-4fed-a4e7-377a12bbc374\") " pod="openstack/neutron-949dd44b5-vklms"
Mar 19 12:36:20.813167 master-0 kubenswrapper[31830]: I0319 12:36:20.813107 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/edpm-b-provisionserver-checksum-discovery-m9sgp" event={"ID":"35f4844e-6e9b-4f93-a711-1e673e39add8","Type":"ContainerStarted","Data":"94708eec96ba6c731b90eff4b60c1dd5d1133f800120f84a99a86a59922034b4"}
Mar 19 12:36:20.815277 master-0 kubenswrapper[31830]: I0319 12:36:20.815231 31830 generic.go:334] "Generic (PLEG): container finished" podID="a3dcd24b-6811-4650-adb2-352c99e50b99" containerID="fac1dfd8fe49e8139b79a255b8309437775b2298893d36f2236b561952a3d8e9" exitCode=0
Mar 19 12:36:20.815359 master-0 kubenswrapper[31830]: I0319 12:36:20.815300 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77777b4857-hrt6t" event={"ID":"a3dcd24b-6811-4650-adb2-352c99e50b99","Type":"ContainerDied","Data":"fac1dfd8fe49e8139b79a255b8309437775b2298893d36f2236b561952a3d8e9"}
Mar 19 12:36:20.815541 master-0 kubenswrapper[31830]: I0319 12:36:20.815504 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-cce1e-backup-0" podUID="46a3c173-3990-4ec4-9125-086b417b3b69" containerName="cinder-backup" containerID="cri-o://763539fc9243733146a6553966e0a5e874325f8cf74de027693b2e894c092271" gracePeriod=30
Mar 19 12:36:20.815604 master-0 kubenswrapper[31830]: I0319 12:36:20.815531 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-cce1e-backup-0" podUID="46a3c173-3990-4ec4-9125-086b417b3b69" containerName="probe" containerID="cri-o://090c1faf2666575f0113c6ef10434ceb938ebe4596a3f2caeb440faac9ddb1ba" gracePeriod=30
Mar 19 12:36:20.897819 master-0 kubenswrapper[31830]: I0319 12:36:20.897042 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb4qt\" (UniqueName: \"kubernetes.io/projected/d02c596d-10f7-46cc-baef-11d61e942bb3-kube-api-access-bb4qt\") pod \"cinder-cce1e-api-0\" (UID: \"d02c596d-10f7-46cc-baef-11d61e942bb3\") " pod="openstack/cinder-cce1e-api-0"
Mar 19 12:36:20.897819 master-0 kubenswrapper[31830]: I0319 12:36:20.897227 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5x9n\" (UniqueName: \"kubernetes.io/projected/3514735a-13b6-4fed-a4e7-377a12bbc374-kube-api-access-f5x9n\") pod \"neutron-949dd44b5-vklms\" (UID: \"3514735a-13b6-4fed-a4e7-377a12bbc374\") " pod="openstack/neutron-949dd44b5-vklms"
Mar 19 12:36:20.938902 master-0 kubenswrapper[31830]: I0319 12:36:20.938230 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cce1e-api-0"
Mar 19 12:36:21.175844 master-0 kubenswrapper[31830]: I0319 12:36:21.174678 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-949dd44b5-vklms"
Mar 19 12:36:21.696397 master-0 kubenswrapper[31830]: I0319 12:36:21.695895 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e29b9f27-2667-4fa6-9a91-91d92a7950e7" path="/var/lib/kubelet/pods/e29b9f27-2667-4fa6-9a91-91d92a7950e7/volumes"
Mar 19 12:36:21.830832 master-0 kubenswrapper[31830]: I0319 12:36:21.828397 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77777b4857-hrt6t" event={"ID":"a3dcd24b-6811-4650-adb2-352c99e50b99","Type":"ContainerDied","Data":"113f26a344afc36dfef635a282617bb09b51c191b4c1d3a109a14bf7007e4b37"}
Mar 19 12:36:21.830832 master-0 kubenswrapper[31830]: I0319 12:36:21.828441 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="113f26a344afc36dfef635a282617bb09b51c191b4c1d3a109a14bf7007e4b37"
Mar 19 12:36:21.915936 master-0 kubenswrapper[31830]: I0319 12:36:21.915890 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77777b4857-hrt6t"
Mar 19 12:36:21.992505 master-0 kubenswrapper[31830]: I0319 12:36:21.992452 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-ovsdbserver-sb\") pod \"a3dcd24b-6811-4650-adb2-352c99e50b99\" (UID: \"a3dcd24b-6811-4650-adb2-352c99e50b99\") "
Mar 19 12:36:21.992505 master-0 kubenswrapper[31830]: I0319 12:36:21.992505 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-config\") pod \"a3dcd24b-6811-4650-adb2-352c99e50b99\" (UID: \"a3dcd24b-6811-4650-adb2-352c99e50b99\") "
Mar 19 12:36:21.992732 master-0 kubenswrapper[31830]: I0319 12:36:21.992524 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-dns-swift-storage-0\") pod \"a3dcd24b-6811-4650-adb2-352c99e50b99\" (UID: \"a3dcd24b-6811-4650-adb2-352c99e50b99\") "
Mar 19 12:36:21.992732 master-0 kubenswrapper[31830]: I0319 12:36:21.992608 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-edpm-b\") pod \"a3dcd24b-6811-4650-adb2-352c99e50b99\" (UID: \"a3dcd24b-6811-4650-adb2-352c99e50b99\") "
Mar 19 12:36:21.992732 master-0 kubenswrapper[31830]: I0319 12:36:21.992675 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnclb\" (UniqueName: \"kubernetes.io/projected/a3dcd24b-6811-4650-adb2-352c99e50b99-kube-api-access-fnclb\") pod \"a3dcd24b-6811-4650-adb2-352c99e50b99\" (UID: \"a3dcd24b-6811-4650-adb2-352c99e50b99\") "
Mar 19 12:36:21.992732 master-0 kubenswrapper[31830]: I0319 12:36:21.992698 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-dns-svc\") pod \"a3dcd24b-6811-4650-adb2-352c99e50b99\" (UID: \"a3dcd24b-6811-4650-adb2-352c99e50b99\") "
Mar 19 12:36:21.992891 master-0 kubenswrapper[31830]: I0319 12:36:21.992771 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-ovsdbserver-nb\") pod \"a3dcd24b-6811-4650-adb2-352c99e50b99\" (UID: \"a3dcd24b-6811-4650-adb2-352c99e50b99\") "
Mar 19 12:36:21.992891 master-0 kubenswrapper[31830]: I0319 12:36:21.992816 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-edpm-a\") pod \"a3dcd24b-6811-4650-adb2-352c99e50b99\" (UID: \"a3dcd24b-6811-4650-adb2-352c99e50b99\") "
Mar 19 12:36:21.998087 master-0 kubenswrapper[31830]: I0319 12:36:21.997629 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3dcd24b-6811-4650-adb2-352c99e50b99-kube-api-access-fnclb" (OuterVolumeSpecName: "kube-api-access-fnclb") pod "a3dcd24b-6811-4650-adb2-352c99e50b99" (UID: "a3dcd24b-6811-4650-adb2-352c99e50b99"). InnerVolumeSpecName "kube-api-access-fnclb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 12:36:22.043310 master-0 kubenswrapper[31830]: I0319 12:36:22.043234 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a3dcd24b-6811-4650-adb2-352c99e50b99" (UID: "a3dcd24b-6811-4650-adb2-352c99e50b99"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 12:36:22.044318 master-0 kubenswrapper[31830]: I0319 12:36:22.044269 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-edpm-a" (OuterVolumeSpecName: "edpm-a") pod "a3dcd24b-6811-4650-adb2-352c99e50b99" (UID: "a3dcd24b-6811-4650-adb2-352c99e50b99"). InnerVolumeSpecName "edpm-a". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 12:36:22.049965 master-0 kubenswrapper[31830]: I0319 12:36:22.049892 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-config" (OuterVolumeSpecName: "config") pod "a3dcd24b-6811-4650-adb2-352c99e50b99" (UID: "a3dcd24b-6811-4650-adb2-352c99e50b99"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 12:36:22.051046 master-0 kubenswrapper[31830]: I0319 12:36:22.050941 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a3dcd24b-6811-4650-adb2-352c99e50b99" (UID: "a3dcd24b-6811-4650-adb2-352c99e50b99"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 12:36:22.057360 master-0 kubenswrapper[31830]: I0319 12:36:22.057294 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a3dcd24b-6811-4650-adb2-352c99e50b99" (UID: "a3dcd24b-6811-4650-adb2-352c99e50b99"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 12:36:22.059623 master-0 kubenswrapper[31830]: I0319 12:36:22.059576 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-edpm-b" (OuterVolumeSpecName: "edpm-b") pod "a3dcd24b-6811-4650-adb2-352c99e50b99" (UID: "a3dcd24b-6811-4650-adb2-352c99e50b99"). InnerVolumeSpecName "edpm-b". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 12:36:22.067424 master-0 kubenswrapper[31830]: I0319 12:36:22.067307 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a3dcd24b-6811-4650-adb2-352c99e50b99" (UID: "a3dcd24b-6811-4650-adb2-352c99e50b99"). InnerVolumeSpecName "dns-svc".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:36:22.096125 master-0 kubenswrapper[31830]: I0319 12:36:22.096033 31830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:22.096125 master-0 kubenswrapper[31830]: I0319 12:36:22.096113 31830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-config\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:22.096403 master-0 kubenswrapper[31830]: I0319 12:36:22.096156 31830 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:22.096403 master-0 kubenswrapper[31830]: I0319 12:36:22.096168 31830 reconciler_common.go:293] "Volume detached for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-edpm-b\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:22.096403 master-0 kubenswrapper[31830]: I0319 12:36:22.096177 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnclb\" (UniqueName: \"kubernetes.io/projected/a3dcd24b-6811-4650-adb2-352c99e50b99-kube-api-access-fnclb\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:22.096403 master-0 kubenswrapper[31830]: I0319 12:36:22.096187 31830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:22.096403 master-0 kubenswrapper[31830]: I0319 12:36:22.096196 31830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:22.096403 master-0 kubenswrapper[31830]: I0319 12:36:22.096224 31830 reconciler_common.go:293] "Volume detached for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/a3dcd24b-6811-4650-adb2-352c99e50b99-edpm-a\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:22.839567 master-0 kubenswrapper[31830]: I0319 12:36:22.839426 31830 generic.go:334] "Generic (PLEG): container finished" podID="46a3c173-3990-4ec4-9125-086b417b3b69" containerID="090c1faf2666575f0113c6ef10434ceb938ebe4596a3f2caeb440faac9ddb1ba" exitCode=0 Mar 19 12:36:22.839567 master-0 kubenswrapper[31830]: I0319 12:36:22.839472 31830 generic.go:334] "Generic (PLEG): container finished" podID="46a3c173-3990-4ec4-9125-086b417b3b69" containerID="763539fc9243733146a6553966e0a5e874325f8cf74de027693b2e894c092271" exitCode=0 Mar 19 12:36:22.839567 master-0 kubenswrapper[31830]: I0319 12:36:22.839534 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77777b4857-hrt6t" Mar 19 12:36:22.840329 master-0 kubenswrapper[31830]: I0319 12:36:22.840289 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-backup-0" event={"ID":"46a3c173-3990-4ec4-9125-086b417b3b69","Type":"ContainerDied","Data":"090c1faf2666575f0113c6ef10434ceb938ebe4596a3f2caeb440faac9ddb1ba"} Mar 19 12:36:22.840422 master-0 kubenswrapper[31830]: I0319 12:36:22.840408 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-backup-0" event={"ID":"46a3c173-3990-4ec4-9125-086b417b3b69","Type":"ContainerDied","Data":"763539fc9243733146a6553966e0a5e874325f8cf74de027693b2e894c092271"} Mar 19 12:36:23.274901 master-0 kubenswrapper[31830]: I0319 12:36:23.274784 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cce1e-api-0"] Mar 19 12:36:23.298914 master-0 kubenswrapper[31830]: W0319 12:36:23.296058 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd02c596d_10f7_46cc_baef_11d61e942bb3.slice/crio-13ab74a114a49323aaa172d2c71826cc2ffe6f691e00317032d596e9263351c2 WatchSource:0}: Error finding container 13ab74a114a49323aaa172d2c71826cc2ffe6f691e00317032d596e9263351c2: Status 404 returned error can't find the container with id 13ab74a114a49323aaa172d2c71826cc2ffe6f691e00317032d596e9263351c2 Mar 19 12:36:23.845273 master-0 kubenswrapper[31830]: I0319 12:36:23.841944 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77777b4857-hrt6t"] Mar 19 12:36:23.860997 master-0 kubenswrapper[31830]: I0319 12:36:23.860912 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77777b4857-hrt6t"] Mar 19 12:36:23.906868 master-0 kubenswrapper[31830]: I0319 12:36:23.898010 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-api-0" event={"ID":"d02c596d-10f7-46cc-baef-11d61e942bb3","Type":"ContainerStarted","Data":"13ab74a114a49323aaa172d2c71826cc2ffe6f691e00317032d596e9263351c2"} Mar 19 12:36:24.232066 master-0 kubenswrapper[31830]: I0319 12:36:24.230073 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-949dd44b5-vklms"] Mar 19 12:36:24.491202 master-0 kubenswrapper[31830]: I0319 12:36:24.491153 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-cce1e-scheduler-0" Mar 19 12:36:24.514685 master-0 kubenswrapper[31830]: I0319 12:36:24.514650 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:24.539100 master-0 kubenswrapper[31830]: I0319 12:36:24.539057 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-var-locks-brick\") pod \"46a3c173-3990-4ec4-9125-086b417b3b69\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " Mar 19 12:36:24.539524 master-0 kubenswrapper[31830]: I0319 12:36:24.539503 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46a3c173-3990-4ec4-9125-086b417b3b69-combined-ca-bundle\") pod \"46a3c173-3990-4ec4-9125-086b417b3b69\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " Mar 19 12:36:24.539659 master-0 kubenswrapper[31830]: I0319 12:36:24.539645 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-etc-iscsi\") pod \"46a3c173-3990-4ec4-9125-086b417b3b69\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " Mar 19 12:36:24.539767 master-0 kubenswrapper[31830]: I0319 12:36:24.539755 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-sys\") pod \"46a3c173-3990-4ec4-9125-086b417b3b69\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " Mar 19 12:36:24.539897 master-0 kubenswrapper[31830]: I0319 12:36:24.539862 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46a3c173-3990-4ec4-9125-086b417b3b69-scripts\") pod \"46a3c173-3990-4ec4-9125-086b417b3b69\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " Mar 19 12:36:24.539994 master-0 kubenswrapper[31830]: I0319 12:36:24.539981 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46a3c173-3990-4ec4-9125-086b417b3b69-config-data\") pod \"46a3c173-3990-4ec4-9125-086b417b3b69\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " Mar 19 12:36:24.540131 master-0 kubenswrapper[31830]: I0319 12:36:24.540120 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-etc-nvme\") pod \"46a3c173-3990-4ec4-9125-086b417b3b69\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " Mar 19 12:36:24.540209 master-0 kubenswrapper[31830]: I0319 12:36:24.540195 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-etc-machine-id\") pod \"46a3c173-3990-4ec4-9125-086b417b3b69\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " Mar 19 12:36:24.540289 master-0 kubenswrapper[31830]: I0319 12:36:24.540277 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-lib-modules\") pod \"46a3c173-3990-4ec4-9125-086b417b3b69\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " Mar 19 12:36:24.540389 master-0 kubenswrapper[31830]: I0319 12:36:24.540377 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-dev\") pod \"46a3c173-3990-4ec4-9125-086b417b3b69\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " Mar 19 12:36:24.540818 master-0 kubenswrapper[31830]: I0319 12:36:24.539387 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "46a3c173-3990-4ec4-9125-086b417b3b69" (UID: "46a3c173-3990-4ec4-9125-086b417b3b69"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:36:24.540818 master-0 kubenswrapper[31830]: I0319 12:36:24.540425 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-sys" (OuterVolumeSpecName: "sys") pod "46a3c173-3990-4ec4-9125-086b417b3b69" (UID: "46a3c173-3990-4ec4-9125-086b417b3b69"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:36:24.540818 master-0 kubenswrapper[31830]: I0319 12:36:24.540491 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "46a3c173-3990-4ec4-9125-086b417b3b69" (UID: "46a3c173-3990-4ec4-9125-086b417b3b69"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:36:24.540818 master-0 kubenswrapper[31830]: I0319 12:36:24.540437 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "46a3c173-3990-4ec4-9125-086b417b3b69" (UID: "46a3c173-3990-4ec4-9125-086b417b3b69"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:36:24.540818 master-0 kubenswrapper[31830]: I0319 12:36:24.540450 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "46a3c173-3990-4ec4-9125-086b417b3b69" (UID: "46a3c173-3990-4ec4-9125-086b417b3b69"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:36:24.540818 master-0 kubenswrapper[31830]: I0319 12:36:24.540599 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "46a3c173-3990-4ec4-9125-086b417b3b69" (UID: "46a3c173-3990-4ec4-9125-086b417b3b69"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:36:24.540818 master-0 kubenswrapper[31830]: I0319 12:36:24.540648 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-dev" (OuterVolumeSpecName: "dev") pod "46a3c173-3990-4ec4-9125-086b417b3b69" (UID: "46a3c173-3990-4ec4-9125-086b417b3b69"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:36:24.540818 master-0 kubenswrapper[31830]: I0319 12:36:24.540758 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "46a3c173-3990-4ec4-9125-086b417b3b69" (UID: "46a3c173-3990-4ec4-9125-086b417b3b69"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:36:24.541173 master-0 kubenswrapper[31830]: I0319 12:36:24.541160 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-var-lib-cinder\") pod \"46a3c173-3990-4ec4-9125-086b417b3b69\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " Mar 19 12:36:24.541276 master-0 kubenswrapper[31830]: I0319 12:36:24.541263 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46a3c173-3990-4ec4-9125-086b417b3b69-config-data-custom\") pod \"46a3c173-3990-4ec4-9125-086b417b3b69\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " Mar 19 12:36:24.541639 master-0 kubenswrapper[31830]: I0319 12:36:24.541627 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-run\") pod \"46a3c173-3990-4ec4-9125-086b417b3b69\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " Mar 19 12:36:24.541815 master-0 kubenswrapper[31830]: I0319 12:36:24.541684 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-run" (OuterVolumeSpecName: "run") pod "46a3c173-3990-4ec4-9125-086b417b3b69" (UID: "46a3c173-3990-4ec4-9125-086b417b3b69"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:36:24.541930 master-0 kubenswrapper[31830]: I0319 12:36:24.541917 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-var-locks-cinder\") pod \"46a3c173-3990-4ec4-9125-086b417b3b69\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " Mar 19 12:36:24.542059 master-0 kubenswrapper[31830]: I0319 12:36:24.542048 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kckcx\" (UniqueName: \"kubernetes.io/projected/46a3c173-3990-4ec4-9125-086b417b3b69-kube-api-access-kckcx\") pod \"46a3c173-3990-4ec4-9125-086b417b3b69\" (UID: \"46a3c173-3990-4ec4-9125-086b417b3b69\") " Mar 19 12:36:24.542642 master-0 kubenswrapper[31830]: I0319 12:36:24.542013 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "46a3c173-3990-4ec4-9125-086b417b3b69" (UID: "46a3c173-3990-4ec4-9125-086b417b3b69"). InnerVolumeSpecName "var-locks-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:36:24.548634 master-0 kubenswrapper[31830]: I0319 12:36:24.548595 31830 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-sys\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:24.548863 master-0 kubenswrapper[31830]: I0319 12:36:24.548850 31830 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-etc-nvme\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:24.550271 master-0 kubenswrapper[31830]: I0319 12:36:24.550256 31830 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:24.550363 master-0 kubenswrapper[31830]: I0319 12:36:24.550352 31830 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-lib-modules\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:24.550426 master-0 kubenswrapper[31830]: I0319 12:36:24.550416 31830 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-dev\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:24.550491 master-0 kubenswrapper[31830]: I0319 12:36:24.550481 31830 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-var-lib-cinder\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:24.550560 master-0 kubenswrapper[31830]: I0319 12:36:24.550550 31830 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-run\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:24.550626 master-0 kubenswrapper[31830]: I0319 12:36:24.550616 31830 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-var-locks-cinder\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:24.550697 master-0 kubenswrapper[31830]: I0319 12:36:24.550686 31830 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-var-locks-brick\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:24.550759 master-0 kubenswrapper[31830]: I0319 12:36:24.550749 31830 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/46a3c173-3990-4ec4-9125-086b417b3b69-etc-iscsi\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:24.560049 master-0 kubenswrapper[31830]: I0319 12:36:24.559997 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46a3c173-3990-4ec4-9125-086b417b3b69-scripts" (OuterVolumeSpecName: "scripts") pod "46a3c173-3990-4ec4-9125-086b417b3b69" (UID: "46a3c173-3990-4ec4-9125-086b417b3b69"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:36:24.578257 master-0 kubenswrapper[31830]: I0319 12:36:24.578210 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46a3c173-3990-4ec4-9125-086b417b3b69-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "46a3c173-3990-4ec4-9125-086b417b3b69" (UID: "46a3c173-3990-4ec4-9125-086b417b3b69"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:36:24.586101 master-0 kubenswrapper[31830]: I0319 12:36:24.582993 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46a3c173-3990-4ec4-9125-086b417b3b69-kube-api-access-kckcx" (OuterVolumeSpecName: "kube-api-access-kckcx") pod "46a3c173-3990-4ec4-9125-086b417b3b69" (UID: "46a3c173-3990-4ec4-9125-086b417b3b69"). InnerVolumeSpecName "kube-api-access-kckcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:36:24.586101 master-0 kubenswrapper[31830]: I0319 12:36:24.584869 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-cce1e-scheduler-0"] Mar 19 12:36:24.654879 master-0 kubenswrapper[31830]: I0319 12:36:24.652595 31830 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46a3c173-3990-4ec4-9125-086b417b3b69-config-data-custom\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:24.654879 master-0 kubenswrapper[31830]: I0319 12:36:24.654233 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kckcx\" (UniqueName: \"kubernetes.io/projected/46a3c173-3990-4ec4-9125-086b417b3b69-kube-api-access-kckcx\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:24.654879 master-0 kubenswrapper[31830]: I0319 12:36:24.654247 31830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46a3c173-3990-4ec4-9125-086b417b3b69-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:24.691817 master-0 kubenswrapper[31830]: I0319 12:36:24.691391 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46a3c173-3990-4ec4-9125-086b417b3b69-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46a3c173-3990-4ec4-9125-086b417b3b69" (UID: "46a3c173-3990-4ec4-9125-086b417b3b69"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:36:24.766252 master-0 kubenswrapper[31830]: I0319 12:36:24.763298 31830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46a3c173-3990-4ec4-9125-086b417b3b69-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:24.832020 master-0 kubenswrapper[31830]: I0319 12:36:24.831947 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46a3c173-3990-4ec4-9125-086b417b3b69-config-data" (OuterVolumeSpecName: "config-data") pod "46a3c173-3990-4ec4-9125-086b417b3b69" (UID: "46a3c173-3990-4ec4-9125-086b417b3b69"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:36:24.865508 master-0 kubenswrapper[31830]: I0319 12:36:24.865313 31830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46a3c173-3990-4ec4-9125-086b417b3b69-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:24.900538 master-0 kubenswrapper[31830]: I0319 12:36:24.900485 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" Mar 19 12:36:24.939110 master-0 kubenswrapper[31830]: I0319 12:36:24.939029 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-949dd44b5-vklms" event={"ID":"3514735a-13b6-4fed-a4e7-377a12bbc374","Type":"ContainerStarted","Data":"5895abb8ddaabeb7a61d6b07beb44c1ff1a6f3f26c9f82783d66cc2b40c12860"} Mar 19 12:36:24.939110 master-0 kubenswrapper[31830]: I0319 12:36:24.939083 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-949dd44b5-vklms" event={"ID":"3514735a-13b6-4fed-a4e7-377a12bbc374","Type":"ContainerStarted","Data":"352777e774d1fc65f3b5e88bf353c5c96c12a14c380c4db70dedfb7d54b8cdd5"} Mar 19 12:36:24.943444 master-0 kubenswrapper[31830]: I0319 12:36:24.943397 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-api-0" event={"ID":"d02c596d-10f7-46cc-baef-11d61e942bb3","Type":"ContainerStarted","Data":"fe1243957481da50e8ae24668f3337b1d377681246e6b3d27e5ba945c469fe30"} Mar 19 12:36:24.949869 master-0 kubenswrapper[31830]: I0319 12:36:24.947776 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-cce1e-scheduler-0" podUID="569ec673-0799-4639-80f6-44155889d03c" containerName="cinder-scheduler" containerID="cri-o://ac960a84a8a49d25a9eed1f44861b73518354897fd89c2e88671a72f60eb44c3" gracePeriod=30 Mar 19 12:36:24.949869 master-0 kubenswrapper[31830]: I0319 12:36:24.948151 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:24.949869 master-0 kubenswrapper[31830]: I0319 12:36:24.949003 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-backup-0" event={"ID":"46a3c173-3990-4ec4-9125-086b417b3b69","Type":"ContainerDied","Data":"740da661bda19353216548d3f5edfb3f73813bbbcb6b60bcad7ef05c1964cd6b"} Mar 19 12:36:24.949869 master-0 kubenswrapper[31830]: I0319 12:36:24.949056 31830 scope.go:117] "RemoveContainer" containerID="090c1faf2666575f0113c6ef10434ceb938ebe4596a3f2caeb440faac9ddb1ba" Mar 19 12:36:24.949869 master-0 kubenswrapper[31830]: I0319 12:36:24.949094 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-cce1e-scheduler-0" podUID="569ec673-0799-4639-80f6-44155889d03c" containerName="probe" containerID="cri-o://c41ba551b1eb4593e27123776d95ef1602f087c8156505e6d5c6abee484a6e21" gracePeriod=30 Mar 19 12:36:24.990126 master-0 kubenswrapper[31830]: I0319 12:36:24.990047 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-cce1e-volume-lvm-iscsi-0"] Mar 19 12:36:25.033000 master-0 kubenswrapper[31830]: I0319 12:36:25.031394 31830 scope.go:117] "RemoveContainer" containerID="763539fc9243733146a6553966e0a5e874325f8cf74de027693b2e894c092271" Mar 19 12:36:25.093975 master-0 kubenswrapper[31830]: I0319 12:36:25.093874 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-cce1e-backup-0"] Mar 19 12:36:25.103865 master-0 kubenswrapper[31830]: I0319 12:36:25.103791 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-cce1e-backup-0"] Mar 19 12:36:25.170934 master-0 kubenswrapper[31830]: I0319 12:36:25.170866 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-cce1e-backup-0"] Mar 19 12:36:25.173195 master-0 kubenswrapper[31830]: E0319 12:36:25.171472 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3dcd24b-6811-4650-adb2-352c99e50b99" containerName="dnsmasq-dns" Mar 19 12:36:25.173195 master-0 kubenswrapper[31830]: I0319 12:36:25.171495 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3dcd24b-6811-4650-adb2-352c99e50b99" containerName="dnsmasq-dns" Mar 19 12:36:25.173195 master-0 kubenswrapper[31830]: E0319 12:36:25.171529 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46a3c173-3990-4ec4-9125-086b417b3b69" containerName="probe" Mar 19 12:36:25.173195 master-0 kubenswrapper[31830]: I0319 12:36:25.171538 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="46a3c173-3990-4ec4-9125-086b417b3b69" containerName="probe" Mar 19 12:36:25.173195 master-0 kubenswrapper[31830]: E0319 12:36:25.171555 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46a3c173-3990-4ec4-9125-086b417b3b69" containerName="cinder-backup" Mar 19 12:36:25.173195 master-0 kubenswrapper[31830]: I0319 12:36:25.171565 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="46a3c173-3990-4ec4-9125-086b417b3b69" containerName="cinder-backup" Mar 19 12:36:25.173195 master-0 kubenswrapper[31830]: E0319 12:36:25.171585 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3dcd24b-6811-4650-adb2-352c99e50b99" containerName="init" Mar 19 12:36:25.173195 master-0 kubenswrapper[31830]: I0319 12:36:25.171593 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3dcd24b-6811-4650-adb2-352c99e50b99" containerName="init" Mar 19 12:36:25.173195 master-0 kubenswrapper[31830]: I0319 12:36:25.171911 31830 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a3dcd24b-6811-4650-adb2-352c99e50b99" containerName="dnsmasq-dns" Mar 19 12:36:25.173195 master-0 kubenswrapper[31830]: I0319 12:36:25.171934 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="46a3c173-3990-4ec4-9125-086b417b3b69" containerName="cinder-backup" Mar 19 12:36:25.173195 master-0 kubenswrapper[31830]: I0319 12:36:25.171951 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="46a3c173-3990-4ec4-9125-086b417b3b69" containerName="probe" Mar 19 12:36:25.173625 master-0 kubenswrapper[31830]: I0319 12:36:25.173471 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.199857 master-0 kubenswrapper[31830]: I0319 12:36:25.198590 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cce1e-backup-0"] Mar 19 12:36:25.210863 master-0 kubenswrapper[31830]: I0319 12:36:25.204934 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cce1e-backup-config-data" Mar 19 12:36:25.276748 master-0 kubenswrapper[31830]: I0319 12:36:25.276680 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/416ec9bb-4708-40d4-84c4-b5aec90024b6-etc-iscsi\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.276967 master-0 kubenswrapper[31830]: I0319 12:36:25.276865 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/416ec9bb-4708-40d4-84c4-b5aec90024b6-var-locks-brick\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.276967 master-0 kubenswrapper[31830]: I0319 12:36:25.276900 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/416ec9bb-4708-40d4-84c4-b5aec90024b6-config-data-custom\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.276967 master-0 kubenswrapper[31830]: I0319 12:36:25.276940 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/416ec9bb-4708-40d4-84c4-b5aec90024b6-combined-ca-bundle\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.277067 master-0 kubenswrapper[31830]: I0319 12:36:25.276968 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/416ec9bb-4708-40d4-84c4-b5aec90024b6-var-locks-cinder\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.277067 master-0 kubenswrapper[31830]: I0319 12:36:25.276993 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/416ec9bb-4708-40d4-84c4-b5aec90024b6-etc-nvme\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 
12:36:25.277067 master-0 kubenswrapper[31830]: I0319 12:36:25.277014 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/416ec9bb-4708-40d4-84c4-b5aec90024b6-run\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.277160 master-0 kubenswrapper[31830]: I0319 12:36:25.277064 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/416ec9bb-4708-40d4-84c4-b5aec90024b6-scripts\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.277160 master-0 kubenswrapper[31830]: I0319 12:36:25.277121 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/416ec9bb-4708-40d4-84c4-b5aec90024b6-dev\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.277160 master-0 kubenswrapper[31830]: I0319 12:36:25.277154 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/416ec9bb-4708-40d4-84c4-b5aec90024b6-var-lib-cinder\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.277328 master-0 kubenswrapper[31830]: I0319 12:36:25.277300 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/416ec9bb-4708-40d4-84c4-b5aec90024b6-config-data\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.277377 master-0 kubenswrapper[31830]: I0319 12:36:25.277353 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhh7t\" (UniqueName: \"kubernetes.io/projected/416ec9bb-4708-40d4-84c4-b5aec90024b6-kube-api-access-qhh7t\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.277412 master-0 kubenswrapper[31830]: I0319 12:36:25.277385 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/416ec9bb-4708-40d4-84c4-b5aec90024b6-lib-modules\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.277486 master-0 kubenswrapper[31830]: I0319 12:36:25.277456 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/416ec9bb-4708-40d4-84c4-b5aec90024b6-sys\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.277532 master-0 kubenswrapper[31830]: I0319 12:36:25.277512 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/416ec9bb-4708-40d4-84c4-b5aec90024b6-etc-machine-id\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " 
pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.379454 master-0 kubenswrapper[31830]: I0319 12:36:25.379389 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/416ec9bb-4708-40d4-84c4-b5aec90024b6-config-data-custom\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.379662 master-0 kubenswrapper[31830]: I0319 12:36:25.379469 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/416ec9bb-4708-40d4-84c4-b5aec90024b6-combined-ca-bundle\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.379662 master-0 kubenswrapper[31830]: I0319 12:36:25.379497 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/416ec9bb-4708-40d4-84c4-b5aec90024b6-var-locks-cinder\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.379662 master-0 kubenswrapper[31830]: I0319 12:36:25.379516 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/416ec9bb-4708-40d4-84c4-b5aec90024b6-etc-nvme\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.379662 master-0 kubenswrapper[31830]: I0319 12:36:25.379541 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/416ec9bb-4708-40d4-84c4-b5aec90024b6-run\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.379662 master-0 kubenswrapper[31830]: I0319 12:36:25.379598 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/416ec9bb-4708-40d4-84c4-b5aec90024b6-scripts\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.379662 master-0 kubenswrapper[31830]: I0319 12:36:25.379639 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/416ec9bb-4708-40d4-84c4-b5aec90024b6-dev\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.379662 master-0 kubenswrapper[31830]: I0319 12:36:25.379661 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/416ec9bb-4708-40d4-84c4-b5aec90024b6-var-lib-cinder\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.379977 master-0 kubenswrapper[31830]: I0319 12:36:25.379705 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/416ec9bb-4708-40d4-84c4-b5aec90024b6-config-data\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.379977 master-0 kubenswrapper[31830]: I0319 12:36:25.379734 31830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/416ec9bb-4708-40d4-84c4-b5aec90024b6-lib-modules\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.379977 master-0 kubenswrapper[31830]: I0319 12:36:25.379750 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhh7t\" (UniqueName: \"kubernetes.io/projected/416ec9bb-4708-40d4-84c4-b5aec90024b6-kube-api-access-qhh7t\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.379977 master-0 kubenswrapper[31830]: I0319 12:36:25.379813 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/416ec9bb-4708-40d4-84c4-b5aec90024b6-sys\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.379977 master-0 kubenswrapper[31830]: I0319 12:36:25.379846 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/416ec9bb-4708-40d4-84c4-b5aec90024b6-etc-machine-id\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.379977 master-0 kubenswrapper[31830]: I0319 12:36:25.379872 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/416ec9bb-4708-40d4-84c4-b5aec90024b6-etc-iscsi\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.379977 master-0 kubenswrapper[31830]: I0319 12:36:25.379957 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/416ec9bb-4708-40d4-84c4-b5aec90024b6-var-locks-brick\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.380197 master-0 kubenswrapper[31830]: I0319 12:36:25.380062 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/416ec9bb-4708-40d4-84c4-b5aec90024b6-var-locks-brick\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.390665 master-0 kubenswrapper[31830]: I0319 12:36:25.386706 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/416ec9bb-4708-40d4-84c4-b5aec90024b6-var-lib-cinder\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.390665 master-0 kubenswrapper[31830]: I0319 12:36:25.390056 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/416ec9bb-4708-40d4-84c4-b5aec90024b6-sys\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.390665 master-0 kubenswrapper[31830]: I0319 12:36:25.390155 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/416ec9bb-4708-40d4-84c4-b5aec90024b6-etc-machine-id\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.390665 master-0 kubenswrapper[31830]: I0319 12:36:25.390206 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/416ec9bb-4708-40d4-84c4-b5aec90024b6-etc-iscsi\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.390665 master-0 kubenswrapper[31830]: I0319 12:36:25.390297 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/416ec9bb-4708-40d4-84c4-b5aec90024b6-run\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.390665 master-0 kubenswrapper[31830]: I0319 12:36:25.390359 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/416ec9bb-4708-40d4-84c4-b5aec90024b6-var-locks-cinder\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.390665 master-0 kubenswrapper[31830]: I0319 12:36:25.390402 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/416ec9bb-4708-40d4-84c4-b5aec90024b6-etc-nvme\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.391586 master-0 kubenswrapper[31830]: I0319 12:36:25.391531 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/416ec9bb-4708-40d4-84c4-b5aec90024b6-combined-ca-bundle\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.394406 master-0 kubenswrapper[31830]: I0319 12:36:25.394172 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/416ec9bb-4708-40d4-84c4-b5aec90024b6-config-data-custom\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.394406 master-0 kubenswrapper[31830]: I0319 12:36:25.394214 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/416ec9bb-4708-40d4-84c4-b5aec90024b6-lib-modules\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.394406 master-0 kubenswrapper[31830]: I0319 12:36:25.394343 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/416ec9bb-4708-40d4-84c4-b5aec90024b6-dev\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.398187 master-0 kubenswrapper[31830]: I0319 12:36:25.398032 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/416ec9bb-4708-40d4-84c4-b5aec90024b6-config-data\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " 
pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.412880 master-0 kubenswrapper[31830]: I0319 12:36:25.412758 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/416ec9bb-4708-40d4-84c4-b5aec90024b6-scripts\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.413139 master-0 kubenswrapper[31830]: I0319 12:36:25.412900 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhh7t\" (UniqueName: \"kubernetes.io/projected/416ec9bb-4708-40d4-84c4-b5aec90024b6-kube-api-access-qhh7t\") pod \"cinder-cce1e-backup-0\" (UID: \"416ec9bb-4708-40d4-84c4-b5aec90024b6\") " pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.510227 master-0 kubenswrapper[31830]: I0319 12:36:25.510162 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:25.718578 master-0 kubenswrapper[31830]: I0319 12:36:25.718545 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46a3c173-3990-4ec4-9125-086b417b3b69" path="/var/lib/kubelet/pods/46a3c173-3990-4ec4-9125-086b417b3b69/volumes" Mar 19 12:36:25.719383 master-0 kubenswrapper[31830]: I0319 12:36:25.719367 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3dcd24b-6811-4650-adb2-352c99e50b99" path="/var/lib/kubelet/pods/a3dcd24b-6811-4650-adb2-352c99e50b99/volumes" Mar 19 12:36:25.988521 master-0 kubenswrapper[31830]: I0319 12:36:25.983047 31830 generic.go:334] "Generic (PLEG): container finished" podID="f33cd3dd-af62-465e-8e5d-6a2ad7e86748" containerID="dd0b18994ca266cf34b06540b08eedc5c31e1c3a1dcc6208aa709913cac07117" exitCode=0 Mar 19 12:36:25.988521 master-0 kubenswrapper[31830]: I0319 12:36:25.983183 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/edpm-a-provisionserver-checksum-discovery-g8vpc" event={"ID":"f33cd3dd-af62-465e-8e5d-6a2ad7e86748","Type":"ContainerDied","Data":"dd0b18994ca266cf34b06540b08eedc5c31e1c3a1dcc6208aa709913cac07117"} Mar 19 12:36:25.997205 master-0 kubenswrapper[31830]: I0319 12:36:25.995640 31830 generic.go:334] "Generic (PLEG): container finished" podID="569ec673-0799-4639-80f6-44155889d03c" containerID="c41ba551b1eb4593e27123776d95ef1602f087c8156505e6d5c6abee484a6e21" exitCode=0 Mar 19 12:36:25.997205 master-0 kubenswrapper[31830]: I0319 12:36:25.995744 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-scheduler-0" event={"ID":"569ec673-0799-4639-80f6-44155889d03c","Type":"ContainerDied","Data":"c41ba551b1eb4593e27123776d95ef1602f087c8156505e6d5c6abee484a6e21"} Mar 19 12:36:26.000075 master-0 kubenswrapper[31830]: I0319 12:36:25.999297 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-949dd44b5-vklms" event={"ID":"3514735a-13b6-4fed-a4e7-377a12bbc374","Type":"ContainerStarted","Data":"f965376b951e205a03faa063e116300bef165a80865e8809870f00c48f6167f3"} Mar 19 12:36:26.000075 master-0 kubenswrapper[31830]: I0319 12:36:25.999871 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-949dd44b5-vklms" Mar 19 12:36:26.014903 master-0 kubenswrapper[31830]: I0319 12:36:26.011173 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-api-0" event={"ID":"d02c596d-10f7-46cc-baef-11d61e942bb3","Type":"ContainerStarted","Data":"fa0cd30e2ad21b941a95ee58c418821d41c6bd8c1ce737d0155832571f874531"} 
Mar 19 12:36:26.014903 master-0 kubenswrapper[31830]: I0319 12:36:26.012216 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-cce1e-api-0"
Mar 19 12:36:26.017028 master-0 kubenswrapper[31830]: I0319 12:36:26.016564 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" podUID="1176695c-7f77-4a4c-91ef-86fb8eeaaf7e" containerName="cinder-volume" containerID="cri-o://d43eaee996917ceb6745545f47af700ee3a2bc4511923c9a37c04aa30afb3c04" gracePeriod=30
Mar 19 12:36:26.017028 master-0 kubenswrapper[31830]: I0319 12:36:26.016681 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" podUID="1176695c-7f77-4a4c-91ef-86fb8eeaaf7e" containerName="probe" containerID="cri-o://4788f446ec61a1f61a6723b78d4ea832d97a6cca661d0c232e999296c0f3155b" gracePeriod=30
Mar 19 12:36:26.066571 master-0 kubenswrapper[31830]: I0319 12:36:26.066290 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-949dd44b5-vklms" podStartSLOduration=6.066259429 podStartE2EDuration="6.066259429s" podCreationTimestamp="2026-03-19 12:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:36:26.047123126 +0000 UTC m=+1324.596083830" watchObservedRunningTime="2026-03-19 12:36:26.066259429 +0000 UTC m=+1324.615220153"
Mar 19 12:36:26.166209 master-0 kubenswrapper[31830]: I0319 12:36:26.164711 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-cce1e-api-0" podStartSLOduration=6.164675452 podStartE2EDuration="6.164675452s" podCreationTimestamp="2026-03-19 12:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:36:26.084485585 +0000 UTC m=+1324.633446289" watchObservedRunningTime="2026-03-19 12:36:26.164675452 +0000 UTC m=+1324.713636156"
Mar 19 12:36:26.268840 master-0 kubenswrapper[31830]: I0319 12:36:26.268046 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cce1e-backup-0"]
Mar 19 12:36:27.126207 master-0 kubenswrapper[31830]: I0319 12:36:27.126085 31830 generic.go:334] "Generic (PLEG): container finished" podID="569ec673-0799-4639-80f6-44155889d03c" containerID="ac960a84a8a49d25a9eed1f44861b73518354897fd89c2e88671a72f60eb44c3" exitCode=0
Mar 19 12:36:27.127022 master-0 kubenswrapper[31830]: I0319 12:36:27.126203 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-scheduler-0" event={"ID":"569ec673-0799-4639-80f6-44155889d03c","Type":"ContainerDied","Data":"ac960a84a8a49d25a9eed1f44861b73518354897fd89c2e88671a72f60eb44c3"}
Mar 19 12:36:27.134663 master-0 kubenswrapper[31830]: I0319 12:36:27.133660 31830 generic.go:334] "Generic (PLEG): container finished" podID="1176695c-7f77-4a4c-91ef-86fb8eeaaf7e" containerID="d43eaee996917ceb6745545f47af700ee3a2bc4511923c9a37c04aa30afb3c04" exitCode=0
Mar 19 12:36:27.134663 master-0 kubenswrapper[31830]: I0319 12:36:27.133761 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" event={"ID":"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e","Type":"ContainerDied","Data":"d43eaee996917ceb6745545f47af700ee3a2bc4511923c9a37c04aa30afb3c04"}
Mar 19 12:36:27.152997 master-0 kubenswrapper[31830]: I0319 12:36:27.152163 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-backup-0" event={"ID":"416ec9bb-4708-40d4-84c4-b5aec90024b6","Type":"ContainerStarted","Data":"0e3c2dda9dc624ba390ce5efbde2931db336f4e20b7451b0d12d8dbdb8303638"}
Mar 19 12:36:27.152997 master-0 kubenswrapper[31830]: I0319 12:36:27.152210 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-backup-0" event={"ID":"416ec9bb-4708-40d4-84c4-b5aec90024b6","Type":"ContainerStarted","Data":"90d9cda034330764ae8949cefdde3e6b7291ee2f814ede54e5771cd5b266d86b"}
Mar 19 12:36:27.705393 master-0 kubenswrapper[31830]: I0319 12:36:27.704703 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:27.708818 master-0 kubenswrapper[31830]: I0319 12:36:27.707693 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:27.781867 master-0 kubenswrapper[31830]: I0319 12:36:27.781775 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/569ec673-0799-4639-80f6-44155889d03c-scripts\") pod \"569ec673-0799-4639-80f6-44155889d03c\" (UID: \"569ec673-0799-4639-80f6-44155889d03c\") "
Mar 19 12:36:27.782097 master-0 kubenswrapper[31830]: I0319 12:36:27.781960 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrqxt\" (UniqueName: \"kubernetes.io/projected/569ec673-0799-4639-80f6-44155889d03c-kube-api-access-qrqxt\") pod \"569ec673-0799-4639-80f6-44155889d03c\" (UID: \"569ec673-0799-4639-80f6-44155889d03c\") "
Mar 19 12:36:27.782097 master-0 kubenswrapper[31830]: I0319 12:36:27.782090 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/569ec673-0799-4639-80f6-44155889d03c-etc-machine-id\") pod \"569ec673-0799-4639-80f6-44155889d03c\" (UID: \"569ec673-0799-4639-80f6-44155889d03c\") "
Mar 19 12:36:27.782266 master-0 kubenswrapper[31830]: I0319 12:36:27.782113 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/569ec673-0799-4639-80f6-44155889d03c-config-data\") pod \"569ec673-0799-4639-80f6-44155889d03c\" (UID: \"569ec673-0799-4639-80f6-44155889d03c\") "
Mar 19 12:36:27.782266 master-0 kubenswrapper[31830]: I0319 12:36:27.782194 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/569ec673-0799-4639-80f6-44155889d03c-combined-ca-bundle\") pod \"569ec673-0799-4639-80f6-44155889d03c\" (UID: \"569ec673-0799-4639-80f6-44155889d03c\") "
Mar 19 12:36:27.782266 master-0 kubenswrapper[31830]: I0319 12:36:27.782239 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/569ec673-0799-4639-80f6-44155889d03c-config-data-custom\") pod \"569ec673-0799-4639-80f6-44155889d03c\" (UID: \"569ec673-0799-4639-80f6-44155889d03c\") "
Mar 19 12:36:27.782431 master-0 kubenswrapper[31830]: I0319 12:36:27.782415 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/569ec673-0799-4639-80f6-44155889d03c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "569ec673-0799-4639-80f6-44155889d03c" (UID: "569ec673-0799-4639-80f6-44155889d03c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 12:36:27.786021 master-0 kubenswrapper[31830]: I0319 12:36:27.782952 31830 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/569ec673-0799-4639-80f6-44155889d03c-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Mar 19 12:36:27.816889 master-0 kubenswrapper[31830]: I0319 12:36:27.811833 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/569ec673-0799-4639-80f6-44155889d03c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "569ec673-0799-4639-80f6-44155889d03c" (UID: "569ec673-0799-4639-80f6-44155889d03c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 12:36:27.816889 master-0 kubenswrapper[31830]: I0319 12:36:27.812028 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/569ec673-0799-4639-80f6-44155889d03c-kube-api-access-qrqxt" (OuterVolumeSpecName: "kube-api-access-qrqxt") pod "569ec673-0799-4639-80f6-44155889d03c" (UID: "569ec673-0799-4639-80f6-44155889d03c"). InnerVolumeSpecName "kube-api-access-qrqxt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 12:36:27.846605 master-0 kubenswrapper[31830]: I0319 12:36:27.846562 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/569ec673-0799-4639-80f6-44155889d03c-scripts" (OuterVolumeSpecName: "scripts") pod "569ec673-0799-4639-80f6-44155889d03c" (UID: "569ec673-0799-4639-80f6-44155889d03c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 12:36:27.884474 master-0 kubenswrapper[31830]: I0319 12:36:27.884416 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-etc-nvme\") pod \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") "
Mar 19 12:36:27.884474 master-0 kubenswrapper[31830]: I0319 12:36:27.884469 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-etc-machine-id\") pod \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") "
Mar 19 12:36:27.885001 master-0 kubenswrapper[31830]: I0319 12:36:27.884559 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-dev\") pod \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") "
Mar 19 12:36:27.885001 master-0 kubenswrapper[31830]: I0319 12:36:27.884584 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-combined-ca-bundle\") pod \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") "
Mar 19 12:36:27.885001 master-0 kubenswrapper[31830]: I0319 12:36:27.884633 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-lib-modules\") pod \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") "
Mar 19 12:36:27.885001 master-0 kubenswrapper[31830]: I0319 12:36:27.884648 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-var-lib-cinder\") pod \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") "
Mar 19 12:36:27.885001 master-0 kubenswrapper[31830]: I0319 12:36:27.884708 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-var-locks-cinder\") pod \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") "
Mar 19 12:36:27.885001 master-0 kubenswrapper[31830]: I0319 12:36:27.884737 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-config-data\") pod \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") "
Mar 19 12:36:27.885001 master-0 kubenswrapper[31830]: I0319 12:36:27.884833 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-config-data-custom\") pod \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") "
Mar 19 12:36:27.885001 master-0 kubenswrapper[31830]: I0319 12:36:27.884902 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-run\") pod \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") "
Mar 19 12:36:27.885485 master-0 kubenswrapper[31830]: I0319 12:36:27.885088 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1176695c-7f77-4a4c-91ef-86fb8eeaaf7e" (UID: "1176695c-7f77-4a4c-91ef-86fb8eeaaf7e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 12:36:27.885485 master-0 kubenswrapper[31830]: I0319 12:36:27.885171 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "1176695c-7f77-4a4c-91ef-86fb8eeaaf7e" (UID: "1176695c-7f77-4a4c-91ef-86fb8eeaaf7e"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 12:36:27.885485 master-0 kubenswrapper[31830]: I0319 12:36:27.885200 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1176695c-7f77-4a4c-91ef-86fb8eeaaf7e" (UID: "1176695c-7f77-4a4c-91ef-86fb8eeaaf7e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 12:36:27.885485 master-0 kubenswrapper[31830]: I0319 12:36:27.885223 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-dev" (OuterVolumeSpecName: "dev") pod "1176695c-7f77-4a4c-91ef-86fb8eeaaf7e" (UID: "1176695c-7f77-4a4c-91ef-86fb8eeaaf7e"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 12:36:27.886383 master-0 kubenswrapper[31830]: I0319 12:36:27.885757 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "1176695c-7f77-4a4c-91ef-86fb8eeaaf7e" (UID: "1176695c-7f77-4a4c-91ef-86fb8eeaaf7e"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 12:36:27.886383 master-0 kubenswrapper[31830]: I0319 12:36:27.885851 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "1176695c-7f77-4a4c-91ef-86fb8eeaaf7e" (UID: "1176695c-7f77-4a4c-91ef-86fb8eeaaf7e"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 12:36:27.886383 master-0 kubenswrapper[31830]: I0319 12:36:27.886234 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-scripts\") pod \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") "
Mar 19 12:36:27.886383 master-0 kubenswrapper[31830]: I0319 12:36:27.886283 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-etc-iscsi\") pod \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") "
Mar 19 12:36:27.886383 master-0 kubenswrapper[31830]: I0319 12:36:27.886346 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htlnm\" (UniqueName: \"kubernetes.io/projected/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-kube-api-access-htlnm\") pod \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") "
Mar 19 12:36:27.886619 master-0 kubenswrapper[31830]: I0319 12:36:27.886392 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-var-locks-brick\") pod \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") "
Mar 19 12:36:27.886619 master-0 kubenswrapper[31830]: I0319 12:36:27.886408 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-sys\") pod \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\" (UID: \"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e\") "
Mar 19 12:36:27.887165 master-0 kubenswrapper[31830]: I0319 12:36:27.886876 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "1176695c-7f77-4a4c-91ef-86fb8eeaaf7e" (UID: "1176695c-7f77-4a4c-91ef-86fb8eeaaf7e"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 12:36:27.887165 master-0 kubenswrapper[31830]: I0319 12:36:27.886916 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "1176695c-7f77-4a4c-91ef-86fb8eeaaf7e" (UID: "1176695c-7f77-4a4c-91ef-86fb8eeaaf7e"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 12:36:27.887165 master-0 kubenswrapper[31830]: I0319 12:36:27.886943 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-run" (OuterVolumeSpecName: "run") pod "1176695c-7f77-4a4c-91ef-86fb8eeaaf7e" (UID: "1176695c-7f77-4a4c-91ef-86fb8eeaaf7e"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 12:36:27.887165 master-0 kubenswrapper[31830]: I0319 12:36:27.886965 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-sys" (OuterVolumeSpecName: "sys") pod "1176695c-7f77-4a4c-91ef-86fb8eeaaf7e" (UID: "1176695c-7f77-4a4c-91ef-86fb8eeaaf7e"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 12:36:27.887165 master-0 kubenswrapper[31830]: I0319 12:36:27.887145 31830 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-run\") on node \"master-0\" DevicePath \"\""
Mar 19 12:36:27.887165 master-0 kubenswrapper[31830]: I0319 12:36:27.887161 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrqxt\" (UniqueName: \"kubernetes.io/projected/569ec673-0799-4639-80f6-44155889d03c-kube-api-access-qrqxt\") on node \"master-0\" DevicePath \"\""
Mar 19 12:36:27.887165 master-0 kubenswrapper[31830]: I0319 12:36:27.887172 31830 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-etc-iscsi\") on node \"master-0\" DevicePath \"\""
Mar 19 12:36:27.887525 master-0 kubenswrapper[31830]: I0319 12:36:27.887181 31830 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-var-locks-brick\") on node \"master-0\" DevicePath \"\""
Mar 19 12:36:27.887525 master-0 kubenswrapper[31830]: I0319 12:36:27.887190 31830 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-sys\") on node \"master-0\" DevicePath \"\""
Mar 19 12:36:27.887525 master-0 kubenswrapper[31830]: I0319 12:36:27.887198 31830 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-etc-nvme\") on node \"master-0\" DevicePath \"\""
Mar 19 12:36:27.887525 master-0 kubenswrapper[31830]: I0319 12:36:27.887206 31830 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Mar 19 12:36:27.887525 master-0 kubenswrapper[31830]: I0319 12:36:27.887215 31830 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-dev\") on node \"master-0\" DevicePath \"\""
Mar 19 12:36:27.887525 master-0 kubenswrapper[31830]: I0319 12:36:27.887223 31830 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-lib-modules\") on node \"master-0\" DevicePath \"\""
Mar 19 12:36:27.887525 master-0 kubenswrapper[31830]: I0319 12:36:27.887231 31830 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-var-lib-cinder\") on node \"master-0\" DevicePath \"\""
Mar 19 12:36:27.887525 master-0 kubenswrapper[31830]: I0319 12:36:27.887240 31830 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/569ec673-0799-4639-80f6-44155889d03c-config-data-custom\") on node \"master-0\" DevicePath \"\""
Mar 19 12:36:27.887525 master-0 kubenswrapper[31830]: I0319 12:36:27.887249 31830 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-var-locks-cinder\") on node \"master-0\" DevicePath \"\""
Mar 19 12:36:27.887525 master-0 kubenswrapper[31830]: I0319 12:36:27.887278 31830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/569ec673-0799-4639-80f6-44155889d03c-scripts\") on node \"master-0\" DevicePath \"\""
Mar 19 12:36:27.893984 master-0 kubenswrapper[31830]: I0319 12:36:27.893873 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-scripts" (OuterVolumeSpecName: "scripts") pod "1176695c-7f77-4a4c-91ef-86fb8eeaaf7e" (UID: "1176695c-7f77-4a4c-91ef-86fb8eeaaf7e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 12:36:27.902400 master-0 kubenswrapper[31830]: I0319 12:36:27.898782 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1176695c-7f77-4a4c-91ef-86fb8eeaaf7e" (UID: "1176695c-7f77-4a4c-91ef-86fb8eeaaf7e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 12:36:27.904476 master-0 kubenswrapper[31830]: I0319 12:36:27.904411 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-kube-api-access-htlnm" (OuterVolumeSpecName: "kube-api-access-htlnm") pod "1176695c-7f77-4a4c-91ef-86fb8eeaaf7e" (UID: "1176695c-7f77-4a4c-91ef-86fb8eeaaf7e"). InnerVolumeSpecName "kube-api-access-htlnm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 12:36:27.928053 master-0 kubenswrapper[31830]: I0319 12:36:27.927985 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/569ec673-0799-4639-80f6-44155889d03c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "569ec673-0799-4639-80f6-44155889d03c" (UID: "569ec673-0799-4639-80f6-44155889d03c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 12:36:27.951614 master-0 kubenswrapper[31830]: I0319 12:36:27.951413 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/569ec673-0799-4639-80f6-44155889d03c-config-data" (OuterVolumeSpecName: "config-data") pod "569ec673-0799-4639-80f6-44155889d03c" (UID: "569ec673-0799-4639-80f6-44155889d03c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 12:36:27.973417 master-0 kubenswrapper[31830]: I0319 12:36:27.973336 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1176695c-7f77-4a4c-91ef-86fb8eeaaf7e" (UID: "1176695c-7f77-4a4c-91ef-86fb8eeaaf7e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 12:36:27.989264 master-0 kubenswrapper[31830]: I0319 12:36:27.989186 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htlnm\" (UniqueName: \"kubernetes.io/projected/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-kube-api-access-htlnm\") on node \"master-0\" DevicePath \"\""
Mar 19 12:36:27.989264 master-0 kubenswrapper[31830]: I0319 12:36:27.989243 31830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/569ec673-0799-4639-80f6-44155889d03c-config-data\") on node \"master-0\" DevicePath \"\""
Mar 19 12:36:27.989264 master-0 kubenswrapper[31830]: I0319 12:36:27.989257 31830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 19 12:36:27.989264 master-0 kubenswrapper[31830]: I0319 12:36:27.989270 31830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/569ec673-0799-4639-80f6-44155889d03c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 19 12:36:27.989567 master-0 kubenswrapper[31830]: I0319 12:36:27.989285 31830 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-config-data-custom\") on node \"master-0\" DevicePath \"\""
Mar 19 12:36:27.989567 master-0 kubenswrapper[31830]: I0319 12:36:27.989298 31830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-scripts\") on node \"master-0\" DevicePath \"\""
Mar 19 12:36:28.086093 master-0 kubenswrapper[31830]: I0319 12:36:28.086022 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-config-data" (OuterVolumeSpecName: "config-data") pod "1176695c-7f77-4a4c-91ef-86fb8eeaaf7e" (UID: "1176695c-7f77-4a4c-91ef-86fb8eeaaf7e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 12:36:28.090947 master-0 kubenswrapper[31830]: I0319 12:36:28.090831 31830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e-config-data\") on node \"master-0\" DevicePath \"\""
Mar 19 12:36:28.213461 master-0 kubenswrapper[31830]: I0319 12:36:28.209918 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:28.216577 master-0 kubenswrapper[31830]: I0319 12:36:28.215070 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-scheduler-0" event={"ID":"569ec673-0799-4639-80f6-44155889d03c","Type":"ContainerDied","Data":"770f1f688bad92e37cabe794c84b8e3f66124d6941f781e4f81a701e3a5b0e20"}
Mar 19 12:36:28.216577 master-0 kubenswrapper[31830]: I0319 12:36:28.215160 31830 scope.go:117] "RemoveContainer" containerID="c41ba551b1eb4593e27123776d95ef1602f087c8156505e6d5c6abee484a6e21"
Mar 19 12:36:28.227105 master-0 kubenswrapper[31830]: I0319 12:36:28.227023 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" event={"ID":"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e","Type":"ContainerDied","Data":"4788f446ec61a1f61a6723b78d4ea832d97a6cca661d0c232e999296c0f3155b"}
Mar 19 12:36:28.227327 master-0 kubenswrapper[31830]: I0319 12:36:28.227246 31830 generic.go:334] "Generic (PLEG): container finished" podID="1176695c-7f77-4a4c-91ef-86fb8eeaaf7e" containerID="4788f446ec61a1f61a6723b78d4ea832d97a6cca661d0c232e999296c0f3155b" exitCode=0
Mar 19 12:36:28.227327 master-0 kubenswrapper[31830]: I0319 12:36:28.227347 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" event={"ID":"1176695c-7f77-4a4c-91ef-86fb8eeaaf7e","Type":"ContainerDied","Data":"9ee71ae8ceec8486323653c9fb76f98534c51fbf3817d7ece85f18f147f1dd7f"}
Mar 19 12:36:28.227327 master-0 kubenswrapper[31830]: I0319 12:36:28.228221 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.230994 master-0 kubenswrapper[31830]: I0319 12:36:28.230933 31830 generic.go:334] "Generic (PLEG): container finished" podID="35f4844e-6e9b-4f93-a711-1e673e39add8" containerID="94708eec96ba6c731b90eff4b60c1dd5d1133f800120f84a99a86a59922034b4" exitCode=0
Mar 19 12:36:28.231242 master-0 kubenswrapper[31830]: I0319 12:36:28.231028 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/edpm-b-provisionserver-checksum-discovery-m9sgp" event={"ID":"35f4844e-6e9b-4f93-a711-1e673e39add8","Type":"ContainerDied","Data":"94708eec96ba6c731b90eff4b60c1dd5d1133f800120f84a99a86a59922034b4"}
Mar 19 12:36:28.238125 master-0 kubenswrapper[31830]: I0319 12:36:28.237956 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-backup-0" event={"ID":"416ec9bb-4708-40d4-84c4-b5aec90024b6","Type":"ContainerStarted","Data":"543bf43a475b07e74f9432ad97139c4ccdb66ee7bd7e0b6009ae9870dcca8ee7"}
Mar 19 12:36:28.348127 master-0 kubenswrapper[31830]: I0319 12:36:28.347654 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-cce1e-backup-0" podStartSLOduration=3.347628171 podStartE2EDuration="3.347628171s" podCreationTimestamp="2026-03-19 12:36:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:36:28.319143687 +0000 UTC m=+1326.868104391" watchObservedRunningTime="2026-03-19 12:36:28.347628171 +0000 UTC m=+1326.896588885"
Mar 19 12:36:28.439856 master-0 kubenswrapper[31830]: I0319 12:36:28.436312 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-cce1e-volume-lvm-iscsi-0"]
Mar 19 12:36:28.481180 master-0 kubenswrapper[31830]: I0319 12:36:28.481084 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-cce1e-volume-lvm-iscsi-0"]
Mar 19 12:36:28.533845 master-0 kubenswrapper[31830]: I0319 12:36:28.529877 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-cce1e-scheduler-0"]
Mar 19 12:36:28.560862 master-0 kubenswrapper[31830]: I0319 12:36:28.560694 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-cce1e-scheduler-0"]
Mar 19 12:36:28.571322 master-0 kubenswrapper[31830]: I0319 12:36:28.570960 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-cce1e-volume-lvm-iscsi-0"]
Mar 19 12:36:28.573153 master-0 kubenswrapper[31830]: E0319 12:36:28.571573 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="569ec673-0799-4639-80f6-44155889d03c" containerName="cinder-scheduler"
Mar 19 12:36:28.573153 master-0 kubenswrapper[31830]: I0319 12:36:28.571598 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="569ec673-0799-4639-80f6-44155889d03c" containerName="cinder-scheduler"
Mar 19 12:36:28.573153 master-0 kubenswrapper[31830]: E0319 12:36:28.571664 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1176695c-7f77-4a4c-91ef-86fb8eeaaf7e" containerName="cinder-volume"
Mar 19 12:36:28.573153 master-0 kubenswrapper[31830]: I0319 12:36:28.571673 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="1176695c-7f77-4a4c-91ef-86fb8eeaaf7e" containerName="cinder-volume"
Mar 19 12:36:28.573153 master-0 kubenswrapper[31830]: E0319 12:36:28.571740 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="569ec673-0799-4639-80f6-44155889d03c" containerName="probe"
Mar 19 12:36:28.573153 master-0 kubenswrapper[31830]: I0319 12:36:28.571751 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="569ec673-0799-4639-80f6-44155889d03c" containerName="probe"
Mar 19 12:36:28.573153 master-0 kubenswrapper[31830]: E0319 12:36:28.571780 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1176695c-7f77-4a4c-91ef-86fb8eeaaf7e" containerName="probe"
Mar 19 12:36:28.573153 master-0 kubenswrapper[31830]: I0319 12:36:28.571788 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="1176695c-7f77-4a4c-91ef-86fb8eeaaf7e" containerName="probe"
Mar 19 12:36:28.573153 master-0 kubenswrapper[31830]: I0319 12:36:28.572104 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="569ec673-0799-4639-80f6-44155889d03c" containerName="cinder-scheduler"
Mar 19 12:36:28.573153 master-0 kubenswrapper[31830]: I0319 12:36:28.572152 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="1176695c-7f77-4a4c-91ef-86fb8eeaaf7e" containerName="probe"
Mar 19 12:36:28.573153 master-0 kubenswrapper[31830]: I0319 12:36:28.572177 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="569ec673-0799-4639-80f6-44155889d03c" containerName="probe"
Mar 19 12:36:28.573153 master-0 kubenswrapper[31830]: I0319 12:36:28.572196 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="1176695c-7f77-4a4c-91ef-86fb8eeaaf7e" containerName="cinder-volume"
Mar 19 12:36:28.573773 master-0 kubenswrapper[31830]: I0319 12:36:28.573625 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.585214 master-0 kubenswrapper[31830]: I0319 12:36:28.583335 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cce1e-volume-lvm-iscsi-config-data"
Mar 19 12:36:28.589747 master-0 kubenswrapper[31830]: I0319 12:36:28.589637 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cce1e-volume-lvm-iscsi-0"]
Mar 19 12:36:28.602590 master-0 kubenswrapper[31830]: I0319 12:36:28.600396 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-cce1e-scheduler-0"]
Mar 19 12:36:28.603070 master-0 kubenswrapper[31830]: I0319 12:36:28.603014 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:28.607135 master-0 kubenswrapper[31830]: I0319 12:36:28.604864 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cce1e-scheduler-config-data"
Mar 19 12:36:28.607676 master-0 kubenswrapper[31830]: I0319 12:36:28.607612 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cce1e-scheduler-0"]
Mar 19 12:36:28.729469 master-0 kubenswrapper[31830]: I0319 12:36:28.729393 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3949bf7f-94ca-404b-ab0a-37fbed571a00-config-data-custom\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.729469 master-0 kubenswrapper[31830]: I0319 12:36:28.729479 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44170cd5-1ea2-462a-bfff-dc6f881e6138-combined-ca-bundle\") pod \"cinder-cce1e-scheduler-0\" (UID: \"44170cd5-1ea2-462a-bfff-dc6f881e6138\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:28.729738 master-0 kubenswrapper[31830]: I0319 12:36:28.729506 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/44170cd5-1ea2-462a-bfff-dc6f881e6138-config-data-custom\") pod \"cinder-cce1e-scheduler-0\" (UID: \"44170cd5-1ea2-462a-bfff-dc6f881e6138\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:28.729738 master-0 kubenswrapper[31830]: I0319 12:36:28.729537 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3949bf7f-94ca-404b-ab0a-37fbed571a00-etc-machine-id\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.729738 master-0 kubenswrapper[31830]: I0319 12:36:28.729559 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3949bf7f-94ca-404b-ab0a-37fbed571a00-etc-nvme\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.729738 master-0 kubenswrapper[31830]: I0319 12:36:28.729600 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3949bf7f-94ca-404b-ab0a-37fbed571a00-combined-ca-bundle\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.729738 master-0 kubenswrapper[31830]: I0319 12:36:28.729650 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3949bf7f-94ca-404b-ab0a-37fbed571a00-run\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.729738 master-0 kubenswrapper[31830]: I0319 12:36:28.729684 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3949bf7f-94ca-404b-ab0a-37fbed571a00-var-lib-cinder\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.729738 master-0 kubenswrapper[31830]: I0319 12:36:28.729708 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3949bf7f-94ca-404b-ab0a-37fbed571a00-sys\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.730104 master-0 kubenswrapper[31830]: I0319 12:36:28.729744 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44170cd5-1ea2-462a-bfff-dc6f881e6138-config-data\") pod \"cinder-cce1e-scheduler-0\" (UID: \"44170cd5-1ea2-462a-bfff-dc6f881e6138\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:28.730104 master-0 kubenswrapper[31830]: I0319 12:36:28.729817 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3949bf7f-94ca-404b-ab0a-37fbed571a00-dev\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.730104 master-0 kubenswrapper[31830]: I0319 12:36:28.729844 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3949bf7f-94ca-404b-ab0a-37fbed571a00-lib-modules\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.730104 master-0 kubenswrapper[31830]: I0319 12:36:28.729990 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44170cd5-1ea2-462a-bfff-dc6f881e6138-scripts\") pod \"cinder-cce1e-scheduler-0\" (UID: \"44170cd5-1ea2-462a-bfff-dc6f881e6138\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:28.730104 master-0 kubenswrapper[31830]: I0319 12:36:28.730041 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4sdg\" (UniqueName: \"kubernetes.io/projected/44170cd5-1ea2-462a-bfff-dc6f881e6138-kube-api-access-x4sdg\") pod \"cinder-cce1e-scheduler-0\" (UID: \"44170cd5-1ea2-462a-bfff-dc6f881e6138\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:28.730260 master-0 kubenswrapper[31830]: I0319 12:36:28.730163 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3949bf7f-94ca-404b-ab0a-37fbed571a00-scripts\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.730260 master-0 kubenswrapper[31830]: I0319 12:36:28.730219 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3949bf7f-94ca-404b-ab0a-37fbed571a00-etc-iscsi\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.730260 master-0 kubenswrapper[31830]: I0319 12:36:28.730254 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3949bf7f-94ca-404b-ab0a-37fbed571a00-config-data\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.730346 master-0 kubenswrapper[31830]: I0319 12:36:28.730317 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/44170cd5-1ea2-462a-bfff-dc6f881e6138-etc-machine-id\") pod \"cinder-cce1e-scheduler-0\" (UID: \"44170cd5-1ea2-462a-bfff-dc6f881e6138\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:28.730385 master-0 kubenswrapper[31830]: I0319 12:36:28.730356 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3949bf7f-94ca-404b-ab0a-37fbed571a00-var-locks-brick\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.730385 master-0 kubenswrapper[31830]: I0319 12:36:28.730372 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3949bf7f-94ca-404b-ab0a-37fbed571a00-var-locks-cinder\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.730446 master-0 kubenswrapper[31830]: I0319 12:36:28.730390 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7fgp\" (UniqueName: \"kubernetes.io/projected/3949bf7f-94ca-404b-ab0a-37fbed571a00-kube-api-access-j7fgp\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.832735 master-0 kubenswrapper[31830]: I0319 12:36:28.832594 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3949bf7f-94ca-404b-ab0a-37fbed571a00-combined-ca-bundle\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.832735 master-0 kubenswrapper[31830]: I0319 12:36:28.832709 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3949bf7f-94ca-404b-ab0a-37fbed571a00-run\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.833018 master-0 kubenswrapper[31830]: I0319 12:36:28.832761 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3949bf7f-94ca-404b-ab0a-37fbed571a00-var-lib-cinder\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.833018 master-0 kubenswrapper[31830]: I0319 12:36:28.832789 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3949bf7f-94ca-404b-ab0a-37fbed571a00-sys\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.833018 master-0 kubenswrapper[31830]: I0319 12:36:28.832863 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44170cd5-1ea2-462a-bfff-dc6f881e6138-config-data\") pod \"cinder-cce1e-scheduler-0\" (UID: \"44170cd5-1ea2-462a-bfff-dc6f881e6138\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:28.833018 master-0 kubenswrapper[31830]: I0319 12:36:28.832893 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3949bf7f-94ca-404b-ab0a-37fbed571a00-dev\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.833018 master-0 kubenswrapper[31830]: I0319 12:36:28.832919 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3949bf7f-94ca-404b-ab0a-37fbed571a00-lib-modules\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.833018 master-0 kubenswrapper[31830]: I0319 12:36:28.832962 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44170cd5-1ea2-462a-bfff-dc6f881e6138-scripts\") pod \"cinder-cce1e-scheduler-0\" (UID: \"44170cd5-1ea2-462a-bfff-dc6f881e6138\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:28.833018 master-0 kubenswrapper[31830]: I0319 12:36:28.832989 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4sdg\" (UniqueName: \"kubernetes.io/projected/44170cd5-1ea2-462a-bfff-dc6f881e6138-kube-api-access-x4sdg\") pod \"cinder-cce1e-scheduler-0\" (UID: \"44170cd5-1ea2-462a-bfff-dc6f881e6138\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:28.833313 master-0 kubenswrapper[31830]: I0319 12:36:28.833036 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3949bf7f-94ca-404b-ab0a-37fbed571a00-scripts\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.833313 master-0 kubenswrapper[31830]: I0319 12:36:28.833068 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3949bf7f-94ca-404b-ab0a-37fbed571a00-etc-iscsi\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.833313 master-0 kubenswrapper[31830]: I0319 12:36:28.833100 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3949bf7f-94ca-404b-ab0a-37fbed571a00-config-data\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.833313 master-0 kubenswrapper[31830]: I0319 12:36:28.833143 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/44170cd5-1ea2-462a-bfff-dc6f881e6138-etc-machine-id\") pod \"cinder-cce1e-scheduler-0\" (UID: \"44170cd5-1ea2-462a-bfff-dc6f881e6138\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:28.833313 master-0 kubenswrapper[31830]: I0319 12:36:28.833192 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3949bf7f-94ca-404b-ab0a-37fbed571a00-var-locks-brick\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.833313 master-0 kubenswrapper[31830]: I0319 12:36:28.833214 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3949bf7f-94ca-404b-ab0a-37fbed571a00-var-locks-cinder\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.833313 master-0 kubenswrapper[31830]: I0319 12:36:28.833240 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7fgp\" (UniqueName: \"kubernetes.io/projected/3949bf7f-94ca-404b-ab0a-37fbed571a00-kube-api-access-j7fgp\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.833598 master-0 kubenswrapper[31830]: I0319 12:36:28.833331 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3949bf7f-94ca-404b-ab0a-37fbed571a00-config-data-custom\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.833598 master-0 kubenswrapper[31830]: I0319 12:36:28.833387 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44170cd5-1ea2-462a-bfff-dc6f881e6138-combined-ca-bundle\") pod \"cinder-cce1e-scheduler-0\" (UID: \"44170cd5-1ea2-462a-bfff-dc6f881e6138\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:28.833598 master-0 kubenswrapper[31830]: I0319 12:36:28.833417 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/44170cd5-1ea2-462a-bfff-dc6f881e6138-config-data-custom\") pod \"cinder-cce1e-scheduler-0\" (UID: \"44170cd5-1ea2-462a-bfff-dc6f881e6138\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:28.833598 master-0 kubenswrapper[31830]: I0319 12:36:28.833455 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3949bf7f-94ca-404b-ab0a-37fbed571a00-etc-machine-id\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.833598 master-0 kubenswrapper[31830]: I0319 12:36:28.833482 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3949bf7f-94ca-404b-ab0a-37fbed571a00-etc-nvme\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.834370 master-0 kubenswrapper[31830]: I0319 12:36:28.834328 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/44170cd5-1ea2-462a-bfff-dc6f881e6138-etc-machine-id\") pod \"cinder-cce1e-scheduler-0\" (UID: \"44170cd5-1ea2-462a-bfff-dc6f881e6138\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:28.834755 master-0 kubenswrapper[31830]: I0319 12:36:28.834718 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3949bf7f-94ca-404b-ab0a-37fbed571a00-var-locks-brick\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.834854 master-0 kubenswrapper[31830]: I0319 12:36:28.834775 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3949bf7f-94ca-404b-ab0a-37fbed571a00-var-locks-cinder\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.835389 master-0 kubenswrapper[31830]: I0319 12:36:28.835280 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3949bf7f-94ca-404b-ab0a-37fbed571a00-run\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.835953 master-0 kubenswrapper[31830]: I0319 12:36:28.835898 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3949bf7f-94ca-404b-ab0a-37fbed571a00-etc-machine-id\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.835953 master-0 kubenswrapper[31830]: I0319 12:36:28.835921 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3949bf7f-94ca-404b-ab0a-37fbed571a00-etc-nvme\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.836302 master-0 kubenswrapper[31830]: I0319 12:36:28.836264 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3949bf7f-94ca-404b-ab0a-37fbed571a00-dev\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.836374 master-0 kubenswrapper[31830]: I0319 12:36:28.836315 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3949bf7f-94ca-404b-ab0a-37fbed571a00-lib-modules\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.837256 master-0 kubenswrapper[31830]: I0319 12:36:28.837223 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3949bf7f-94ca-404b-ab0a-37fbed571a00-combined-ca-bundle\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.837337 master-0 kubenswrapper[31830]: I0319 12:36:28.837248 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3949bf7f-94ca-404b-ab0a-37fbed571a00-sys\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.837337 master-0 kubenswrapper[31830]: I0319 12:36:28.837294 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3949bf7f-94ca-404b-ab0a-37fbed571a00-var-lib-cinder\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.837337 master-0 kubenswrapper[31830]: I0319 12:36:28.837215 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3949bf7f-94ca-404b-ab0a-37fbed571a00-etc-iscsi\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.839635 master-0 kubenswrapper[31830]: I0319 12:36:28.839589 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3949bf7f-94ca-404b-ab0a-37fbed571a00-config-data-custom\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.840680 master-0 kubenswrapper[31830]: I0319 12:36:28.840637 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/44170cd5-1ea2-462a-bfff-dc6f881e6138-config-data-custom\") pod \"cinder-cce1e-scheduler-0\" (UID: \"44170cd5-1ea2-462a-bfff-dc6f881e6138\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:28.841304 master-0 kubenswrapper[31830]: I0319 12:36:28.841257 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44170cd5-1ea2-462a-bfff-dc6f881e6138-scripts\") pod \"cinder-cce1e-scheduler-0\" (UID: \"44170cd5-1ea2-462a-bfff-dc6f881e6138\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:28.842987 master-0 kubenswrapper[31830]: I0319 12:36:28.842316 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44170cd5-1ea2-462a-bfff-dc6f881e6138-combined-ca-bundle\") pod \"cinder-cce1e-scheduler-0\" (UID: \"44170cd5-1ea2-462a-bfff-dc6f881e6138\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:28.845074 master-0 kubenswrapper[31830]: I0319 12:36:28.844330 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3949bf7f-94ca-404b-ab0a-37fbed571a00-config-data\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.849193 master-0 kubenswrapper[31830]: I0319 12:36:28.849124 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3949bf7f-94ca-404b-ab0a-37fbed571a00-scripts\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:28.856108 master-0 kubenswrapper[31830]: I0319 12:36:28.856058 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44170cd5-1ea2-462a-bfff-dc6f881e6138-config-data\") pod \"cinder-cce1e-scheduler-0\" (UID: \"44170cd5-1ea2-462a-bfff-dc6f881e6138\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:29.692898 master-0 kubenswrapper[31830]: I0319 12:36:29.692778 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1176695c-7f77-4a4c-91ef-86fb8eeaaf7e" path="/var/lib/kubelet/pods/1176695c-7f77-4a4c-91ef-86fb8eeaaf7e/volumes"
Mar 19 12:36:29.693729 master-0 kubenswrapper[31830]: I0319 12:36:29.693695 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="569ec673-0799-4639-80f6-44155889d03c" path="/var/lib/kubelet/pods/569ec673-0799-4639-80f6-44155889d03c/volumes"
Mar 19 12:36:30.175641 master-0 kubenswrapper[31830]: I0319 12:36:30.175567 31830 scope.go:117] "RemoveContainer" containerID="ac960a84a8a49d25a9eed1f44861b73518354897fd89c2e88671a72f60eb44c3"
Mar 19 12:36:30.214810 master-0 kubenswrapper[31830]: I0319 12:36:30.214710 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7fgp\" (UniqueName: \"kubernetes.io/projected/3949bf7f-94ca-404b-ab0a-37fbed571a00-kube-api-access-j7fgp\") pod \"cinder-cce1e-volume-lvm-iscsi-0\" (UID: \"3949bf7f-94ca-404b-ab0a-37fbed571a00\") " pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:30.221558 master-0 kubenswrapper[31830]: I0319 12:36:30.221495 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4sdg\" (UniqueName: \"kubernetes.io/projected/44170cd5-1ea2-462a-bfff-dc6f881e6138-kube-api-access-x4sdg\") pod \"cinder-cce1e-scheduler-0\" (UID: \"44170cd5-1ea2-462a-bfff-dc6f881e6138\") " pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:30.236709 master-0 kubenswrapper[31830]: I0319 12:36:30.236624 31830 scope.go:117] "RemoveContainer" containerID="4788f446ec61a1f61a6723b78d4ea832d97a6cca661d0c232e999296c0f3155b"
Mar 19 12:36:30.392517 master-0 kubenswrapper[31830]: I0319 12:36:30.392469 31830 scope.go:117] "RemoveContainer" containerID="d43eaee996917ceb6745545f47af700ee3a2bc4511923c9a37c04aa30afb3c04"
Mar 19 12:36:30.432741 master-0 kubenswrapper[31830]: I0319 12:36:30.432669 31830 scope.go:117] "RemoveContainer" containerID="4788f446ec61a1f61a6723b78d4ea832d97a6cca661d0c232e999296c0f3155b"
Mar 19 12:36:30.433429 master-0 kubenswrapper[31830]: E0319 12:36:30.433396 31830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4788f446ec61a1f61a6723b78d4ea832d97a6cca661d0c232e999296c0f3155b\": container with ID starting with 4788f446ec61a1f61a6723b78d4ea832d97a6cca661d0c232e999296c0f3155b not found: ID does not exist" containerID="4788f446ec61a1f61a6723b78d4ea832d97a6cca661d0c232e999296c0f3155b"
Mar 19 12:36:30.433500 master-0 kubenswrapper[31830]: I0319 12:36:30.433435 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4788f446ec61a1f61a6723b78d4ea832d97a6cca661d0c232e999296c0f3155b"} err="failed to get container status \"4788f446ec61a1f61a6723b78d4ea832d97a6cca661d0c232e999296c0f3155b\": rpc error: code = NotFound desc = could not find container \"4788f446ec61a1f61a6723b78d4ea832d97a6cca661d0c232e999296c0f3155b\": container with ID starting with 4788f446ec61a1f61a6723b78d4ea832d97a6cca661d0c232e999296c0f3155b not found: ID does not exist"
Mar 19 12:36:30.433500 master-0 kubenswrapper[31830]: I0319 12:36:30.433460 31830 scope.go:117] "RemoveContainer" containerID="d43eaee996917ceb6745545f47af700ee3a2bc4511923c9a37c04aa30afb3c04"
Mar 19 12:36:30.435341 master-0 kubenswrapper[31830]: E0319 12:36:30.435305 31830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d43eaee996917ceb6745545f47af700ee3a2bc4511923c9a37c04aa30afb3c04\": container with ID starting with d43eaee996917ceb6745545f47af700ee3a2bc4511923c9a37c04aa30afb3c04 not found: ID does not exist" containerID="d43eaee996917ceb6745545f47af700ee3a2bc4511923c9a37c04aa30afb3c04"
Mar 19 12:36:30.435416 master-0 kubenswrapper[31830]: I0319 12:36:30.435344 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d43eaee996917ceb6745545f47af700ee3a2bc4511923c9a37c04aa30afb3c04"} err="failed to get container status \"d43eaee996917ceb6745545f47af700ee3a2bc4511923c9a37c04aa30afb3c04\": rpc error: code = NotFound desc = could not find container \"d43eaee996917ceb6745545f47af700ee3a2bc4511923c9a37c04aa30afb3c04\": container with ID starting with d43eaee996917ceb6745545f47af700ee3a2bc4511923c9a37c04aa30afb3c04 not found: ID does not exist"
Mar 19 12:36:30.436932 master-0 kubenswrapper[31830]: I0319 12:36:30.436850 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cce1e-volume-lvm-iscsi-0"
Mar 19 12:36:30.447960 master-0 kubenswrapper[31830]: I0319 12:36:30.447900 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cce1e-scheduler-0"
Mar 19 12:36:30.514281 master-0 kubenswrapper[31830]: I0319 12:36:30.513994 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-cce1e-backup-0"
Mar 19 12:36:31.923379 master-0 kubenswrapper[31830]: I0319 12:36:31.908828 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cce1e-volume-lvm-iscsi-0"]
Mar 19 12:36:31.940483 master-0 kubenswrapper[31830]: W0319 12:36:31.940418 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3949bf7f_94ca_404b_ab0a_37fbed571a00.slice/crio-33eeb4ad5a56d309afd5df314411f7a6844c8ada8947f2ce20be8de986ae807b WatchSource:0}: Error finding container 33eeb4ad5a56d309afd5df314411f7a6844c8ada8947f2ce20be8de986ae807b: Status 404 returned error can't find the container with id 33eeb4ad5a56d309afd5df314411f7a6844c8ada8947f2ce20be8de986ae807b
Mar 19 12:36:32.064097 master-0 kubenswrapper[31830]: I0319 12:36:32.055676 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cce1e-scheduler-0"]
Mar 19 12:36:32.335457 master-0 kubenswrapper[31830]: I0319 12:36:32.335388 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" event={"ID":"3949bf7f-94ca-404b-ab0a-37fbed571a00","Type":"ContainerStarted","Data":"33eeb4ad5a56d309afd5df314411f7a6844c8ada8947f2ce20be8de986ae807b"}
Mar 19 12:36:32.338052 master-0 kubenswrapper[31830]: I0319 12:36:32.338003 31830 generic.go:334] "Generic (PLEG): container finished" podID="35f4844e-6e9b-4f93-a711-1e673e39add8" containerID="5d5134864377ff6ef0c678afb80694340e7ba20296330cbdce0ce81558d2ee8d" exitCode=0
Mar 19 12:36:32.338116 master-0 kubenswrapper[31830]: I0319 12:36:32.338073 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/edpm-b-provisionserver-checksum-discovery-m9sgp" event={"ID":"35f4844e-6e9b-4f93-a711-1e673e39add8","Type":"ContainerDied","Data":"5d5134864377ff6ef0c678afb80694340e7ba20296330cbdce0ce81558d2ee8d"}
Mar 19 12:36:32.342193 master-0 kubenswrapper[31830]: I0319 12:36:32.342119 31830 generic.go:334] "Generic (PLEG): container finished" podID="f33cd3dd-af62-465e-8e5d-6a2ad7e86748" containerID="a89e35e083fda0f7b8496b39ebf3184418372ea80a01637cafd88d8463a543c1" exitCode=0
Mar 19 12:36:32.342295 master-0 kubenswrapper[31830]: I0319 12:36:32.342183 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/edpm-a-provisionserver-checksum-discovery-g8vpc" event={"ID":"f33cd3dd-af62-465e-8e5d-6a2ad7e86748","Type":"ContainerDied","Data":"a89e35e083fda0f7b8496b39ebf3184418372ea80a01637cafd88d8463a543c1"}
Mar 19 12:36:32.344306 master-0 kubenswrapper[31830]: I0319 12:36:32.344269 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-scheduler-0" event={"ID":"44170cd5-1ea2-462a-bfff-dc6f881e6138","Type":"ContainerStarted","Data":"22456e3760fc851b72fba2483f7d8cae2648ea5d01005ee93e8e319f31b35c8c"}
Mar 19 12:36:33.464563 master-0 kubenswrapper[31830]: I0319 12:36:33.457644 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-scheduler-0" event={"ID":"44170cd5-1ea2-462a-bfff-dc6f881e6138","Type":"ContainerStarted","Data":"246ab852b29701c218acb97e6e72038d62d0e3929291b5ee052dfaceb89c08e5"}
Mar 19 12:36:33.485835 master-0 kubenswrapper[31830]: I0319 12:36:33.476652 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" event={"ID":"3949bf7f-94ca-404b-ab0a-37fbed571a00","Type":"ContainerStarted","Data":"da80bf35768f81538c1f370142375791017d59bacd1f2d2795f564b60b383bbe"} Mar 19 12:36:34.400825 master-0 kubenswrapper[31830]: I0319 12:36:34.400095 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/edpm-a-provisionserver-checksum-discovery-g8vpc" Mar 19 12:36:34.424540 master-0 kubenswrapper[31830]: I0319 12:36:34.424280 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/edpm-b-provisionserver-checksum-discovery-m9sgp" Mar 19 12:36:34.549343 master-0 kubenswrapper[31830]: I0319 12:36:34.537028 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-data\" (UniqueName: \"kubernetes.io/empty-dir/35f4844e-6e9b-4f93-a711-1e673e39add8-image-data\") pod \"35f4844e-6e9b-4f93-a711-1e673e39add8\" (UID: \"35f4844e-6e9b-4f93-a711-1e673e39add8\") " Mar 19 12:36:34.549343 master-0 kubenswrapper[31830]: I0319 12:36:34.537101 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-data\" (UniqueName: \"kubernetes.io/empty-dir/f33cd3dd-af62-465e-8e5d-6a2ad7e86748-image-data\") pod \"f33cd3dd-af62-465e-8e5d-6a2ad7e86748\" (UID: \"f33cd3dd-af62-465e-8e5d-6a2ad7e86748\") " Mar 19 12:36:34.549343 master-0 kubenswrapper[31830]: I0319 12:36:34.537159 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57rf7\" (UniqueName: \"kubernetes.io/projected/35f4844e-6e9b-4f93-a711-1e673e39add8-kube-api-access-57rf7\") pod \"35f4844e-6e9b-4f93-a711-1e673e39add8\" (UID: \"35f4844e-6e9b-4f93-a711-1e673e39add8\") " Mar 19 12:36:34.549343 master-0 kubenswrapper[31830]: I0319 12:36:34.537492 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nqx2\" (UniqueName: \"kubernetes.io/projected/f33cd3dd-af62-465e-8e5d-6a2ad7e86748-kube-api-access-8nqx2\") pod \"f33cd3dd-af62-465e-8e5d-6a2ad7e86748\" (UID: \"f33cd3dd-af62-465e-8e5d-6a2ad7e86748\") " Mar 19 12:36:34.552722 master-0 kubenswrapper[31830]: I0319 12:36:34.552321 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" event={"ID":"3949bf7f-94ca-404b-ab0a-37fbed571a00","Type":"ContainerStarted","Data":"30baef178ddc586781afc45cbf4d399c638dc3d18e00588b7883f12af4929c98"} Mar 19 12:36:34.557963 master-0 kubenswrapper[31830]: I0319 12:36:34.557912 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/edpm-b-provisionserver-checksum-discovery-m9sgp" event={"ID":"35f4844e-6e9b-4f93-a711-1e673e39add8","Type":"ContainerDied","Data":"7817689c1558fce7d9388fadfc8374ac6460faeb8b9811f3e745003951e303b3"} Mar 19 12:36:34.558166 master-0 kubenswrapper[31830]: I0319 12:36:34.557976 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7817689c1558fce7d9388fadfc8374ac6460faeb8b9811f3e745003951e303b3" Mar 19 12:36:34.558166 master-0 kubenswrapper[31830]: I0319 12:36:34.558058 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/edpm-b-provisionserver-checksum-discovery-m9sgp" Mar 19 12:36:34.562872 master-0 kubenswrapper[31830]: I0319 12:36:34.559726 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35f4844e-6e9b-4f93-a711-1e673e39add8-kube-api-access-57rf7" (OuterVolumeSpecName: "kube-api-access-57rf7") pod "35f4844e-6e9b-4f93-a711-1e673e39add8" (UID: "35f4844e-6e9b-4f93-a711-1e673e39add8"). InnerVolumeSpecName "kube-api-access-57rf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:36:34.573263 master-0 kubenswrapper[31830]: I0319 12:36:34.573070 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f33cd3dd-af62-465e-8e5d-6a2ad7e86748-kube-api-access-8nqx2" (OuterVolumeSpecName: "kube-api-access-8nqx2") pod "f33cd3dd-af62-465e-8e5d-6a2ad7e86748" (UID: "f33cd3dd-af62-465e-8e5d-6a2ad7e86748"). InnerVolumeSpecName "kube-api-access-8nqx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:36:34.593399 master-0 kubenswrapper[31830]: I0319 12:36:34.592255 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/edpm-a-provisionserver-checksum-discovery-g8vpc" event={"ID":"f33cd3dd-af62-465e-8e5d-6a2ad7e86748","Type":"ContainerDied","Data":"ca2307f3bdb6b4b04319a9540a4149f2bae1b83c7f78569206f15e70bb15556e"} Mar 19 12:36:34.593399 master-0 kubenswrapper[31830]: I0319 12:36:34.592326 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca2307f3bdb6b4b04319a9540a4149f2bae1b83c7f78569206f15e70bb15556e" Mar 19 12:36:34.593399 master-0 kubenswrapper[31830]: I0319 12:36:34.592411 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/edpm-a-provisionserver-checksum-discovery-g8vpc" Mar 19 12:36:34.669841 master-0 kubenswrapper[31830]: I0319 12:36:34.666687 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57rf7\" (UniqueName: \"kubernetes.io/projected/35f4844e-6e9b-4f93-a711-1e673e39add8-kube-api-access-57rf7\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:34.669841 master-0 kubenswrapper[31830]: I0319 12:36:34.666737 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nqx2\" (UniqueName: \"kubernetes.io/projected/f33cd3dd-af62-465e-8e5d-6a2ad7e86748-kube-api-access-8nqx2\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:34.674930 master-0 kubenswrapper[31830]: I0319 12:36:34.673892 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" podStartSLOduration=6.673869815 podStartE2EDuration="6.673869815s" podCreationTimestamp="2026-03-19 12:36:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:36:34.593684498 +0000 UTC m=+1333.142645202" watchObservedRunningTime="2026-03-19 12:36:34.673869815 +0000 UTC m=+1333.222830519" Mar 19 12:36:34.756846 master-0 kubenswrapper[31830]: I0319 12:36:34.756466 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-cce1e-api-0" Mar 19 12:36:34.860942 master-0 kubenswrapper[31830]: I0319 12:36:34.856230 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f33cd3dd-af62-465e-8e5d-6a2ad7e86748-image-data" (OuterVolumeSpecName: "image-data") pod "f33cd3dd-af62-465e-8e5d-6a2ad7e86748" (UID: 
"f33cd3dd-af62-465e-8e5d-6a2ad7e86748"). InnerVolumeSpecName "image-data". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 12:36:34.860942 master-0 kubenswrapper[31830]: I0319 12:36:34.859446 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35f4844e-6e9b-4f93-a711-1e673e39add8-image-data" (OuterVolumeSpecName: "image-data") pod "35f4844e-6e9b-4f93-a711-1e673e39add8" (UID: "35f4844e-6e9b-4f93-a711-1e673e39add8"). InnerVolumeSpecName "image-data". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 12:36:34.871875 master-0 kubenswrapper[31830]: I0319 12:36:34.871518 31830 reconciler_common.go:293] "Volume detached for volume \"image-data\" (UniqueName: \"kubernetes.io/empty-dir/35f4844e-6e9b-4f93-a711-1e673e39add8-image-data\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:34.871875 master-0 kubenswrapper[31830]: I0319 12:36:34.871573 31830 reconciler_common.go:293] "Volume detached for volume \"image-data\" (UniqueName: \"kubernetes.io/empty-dir/f33cd3dd-af62-465e-8e5d-6a2ad7e86748-image-data\") on node \"master-0\" DevicePath \"\"" Mar 19 12:36:35.154413 master-0 kubenswrapper[31830]: I0319 12:36:35.154374 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5f878994d6-brrf9" Mar 19 12:36:35.208947 master-0 kubenswrapper[31830]: I0319 12:36:35.208859 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5f878994d6-brrf9" Mar 19 12:36:35.444047 master-0 kubenswrapper[31830]: I0319 12:36:35.437420 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" Mar 19 12:36:35.630810 master-0 kubenswrapper[31830]: I0319 12:36:35.630743 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cce1e-scheduler-0" event={"ID":"44170cd5-1ea2-462a-bfff-dc6f881e6138","Type":"ContainerStarted","Data":"c93bf8a02eec212adef48cb58efb7b1d1975e82986cd0eb5dd092fcaae4892f3"} Mar 19 12:36:35.656573 master-0 kubenswrapper[31830]: I0319 12:36:35.655250 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7d99c66444-6vrxg"] Mar 19 12:36:35.656573 master-0 kubenswrapper[31830]: E0319 12:36:35.655738 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f4844e-6e9b-4f93-a711-1e673e39add8" containerName="init" Mar 19 12:36:35.656573 master-0 kubenswrapper[31830]: I0319 12:36:35.655753 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f4844e-6e9b-4f93-a711-1e673e39add8" containerName="init" Mar 19 12:36:35.656573 master-0 kubenswrapper[31830]: E0319 12:36:35.655782 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f33cd3dd-af62-465e-8e5d-6a2ad7e86748" containerName="init" Mar 19 12:36:35.656573 master-0 kubenswrapper[31830]: I0319 12:36:35.655788 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f33cd3dd-af62-465e-8e5d-6a2ad7e86748" containerName="init" Mar 19 12:36:35.656573 master-0 kubenswrapper[31830]: E0319 12:36:35.655826 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f33cd3dd-af62-465e-8e5d-6a2ad7e86748" containerName="edpm-a-provisionserver-checksum-discovery" Mar 19 12:36:35.656573 master-0 kubenswrapper[31830]: I0319 12:36:35.655833 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f33cd3dd-af62-465e-8e5d-6a2ad7e86748" containerName="edpm-a-provisionserver-checksum-discovery" Mar 19 12:36:35.656573 master-0 
kubenswrapper[31830]: E0319 12:36:35.655867 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f4844e-6e9b-4f93-a711-1e673e39add8" containerName="edpm-b-provisionserver-checksum-discovery" Mar 19 12:36:35.656573 master-0 kubenswrapper[31830]: I0319 12:36:35.655875 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f4844e-6e9b-4f93-a711-1e673e39add8" containerName="edpm-b-provisionserver-checksum-discovery" Mar 19 12:36:35.673452 master-0 kubenswrapper[31830]: I0319 12:36:35.662161 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="35f4844e-6e9b-4f93-a711-1e673e39add8" containerName="edpm-b-provisionserver-checksum-discovery" Mar 19 12:36:35.673452 master-0 kubenswrapper[31830]: I0319 12:36:35.662226 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f33cd3dd-af62-465e-8e5d-6a2ad7e86748" containerName="edpm-a-provisionserver-checksum-discovery" Mar 19 12:36:35.673452 master-0 kubenswrapper[31830]: I0319 12:36:35.663595 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7d99c66444-6vrxg" Mar 19 12:36:35.715828 master-0 kubenswrapper[31830]: I0319 12:36:35.707466 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-cce1e-scheduler-0" podStartSLOduration=7.707445218 podStartE2EDuration="7.707445218s" podCreationTimestamp="2026-03-19 12:36:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:36:35.691439202 +0000 UTC m=+1334.240399906" watchObservedRunningTime="2026-03-19 12:36:35.707445218 +0000 UTC m=+1334.256405922" Mar 19 12:36:35.740847 master-0 kubenswrapper[31830]: I0319 12:36:35.721555 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7d99c66444-6vrxg"] Mar 19 12:36:35.830908 master-0 kubenswrapper[31830]: I0319 12:36:35.827634 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29793e73-ea31-4460-9aa6-85235971e586-logs\") pod \"placement-7d99c66444-6vrxg\" (UID: \"29793e73-ea31-4460-9aa6-85235971e586\") " pod="openstack/placement-7d99c66444-6vrxg" Mar 19 12:36:35.830908 master-0 kubenswrapper[31830]: I0319 12:36:35.827719 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/29793e73-ea31-4460-9aa6-85235971e586-public-tls-certs\") pod \"placement-7d99c66444-6vrxg\" (UID: \"29793e73-ea31-4460-9aa6-85235971e586\") " pod="openstack/placement-7d99c66444-6vrxg" Mar 19 12:36:35.830908 master-0 kubenswrapper[31830]: I0319 12:36:35.827967 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29793e73-ea31-4460-9aa6-85235971e586-combined-ca-bundle\") pod \"placement-7d99c66444-6vrxg\" (UID: \"29793e73-ea31-4460-9aa6-85235971e586\") " pod="openstack/placement-7d99c66444-6vrxg" Mar 19 12:36:35.830908 master-0 kubenswrapper[31830]: I0319 12:36:35.828004 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74l6r\" (UniqueName: \"kubernetes.io/projected/29793e73-ea31-4460-9aa6-85235971e586-kube-api-access-74l6r\") pod \"placement-7d99c66444-6vrxg\" (UID: \"29793e73-ea31-4460-9aa6-85235971e586\") " pod="openstack/placement-7d99c66444-6vrxg" 
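
[Editor's note] The "Observed pod startup duration" entries above come from the kubelet's pod_startup_latency_tracker. In each record here, podStartSLOduration equals watchObservedRunningTime minus podCreationTimestamp: for cinder-cce1e-scheduler-0, 12:36:35.707445218 - 12:36:28 = 7.707445218 s, and for cinder-cce1e-volume-lvm-iscsi-0, 12:36:34.673869815 - 12:36:28 = 6.673869815 s (firstStartedPulling/lastFinishedPulling are the zero time, i.e. no image pull contributed). Below is a minimal sketch for pulling these durations out of a journal dump like this one; the script name startup_slo.py is hypothetical, not part of any tool, and it assumes lines in exactly the format shown above.

# startup_slo.py -- hypothetical helper, a minimal sketch; assumes journal
# lines in the exact format emitted above by pod_startup_latency_tracker.
import re
import sys

# Matches, e.g.:
#   "Observed pod startup duration" pod="openstack/cinder-cce1e-scheduler-0"
#   podStartSLOduration=7.707445218 ...
PATTERN = re.compile(
    r'"Observed pod startup duration" pod="(?P<pod>[^"]+)"'
    r' podStartSLOduration=(?P<slo>[0-9.]+)'
)

def main() -> None:
    for line in sys.stdin:
        m = PATTERN.search(line)
        if m:
            # One tab-separated row per pod: name, then seconds to Running.
            print(f"{m.group('pod')}\t{float(m.group('slo')):.3f}s")

if __name__ == "__main__":
    main()

Fed this section, it would print the three pods observed so far (cinder volume 6.674 s, cinder scheduler 7.707 s, and placement 2.711 s a few entries below). On a live node you might pipe in something like journalctl -u kubelet | python3 startup_slo.py, though the exact unit name carrying these kubenswrapper messages can differ by host.
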
Mar 19 12:36:35.830908 master-0 kubenswrapper[31830]: I0319 12:36:35.828046 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/29793e73-ea31-4460-9aa6-85235971e586-internal-tls-certs\") pod \"placement-7d99c66444-6vrxg\" (UID: \"29793e73-ea31-4460-9aa6-85235971e586\") " pod="openstack/placement-7d99c66444-6vrxg" Mar 19 12:36:35.830908 master-0 kubenswrapper[31830]: I0319 12:36:35.828094 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29793e73-ea31-4460-9aa6-85235971e586-scripts\") pod \"placement-7d99c66444-6vrxg\" (UID: \"29793e73-ea31-4460-9aa6-85235971e586\") " pod="openstack/placement-7d99c66444-6vrxg" Mar 19 12:36:35.830908 master-0 kubenswrapper[31830]: I0319 12:36:35.828115 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29793e73-ea31-4460-9aa6-85235971e586-config-data\") pod \"placement-7d99c66444-6vrxg\" (UID: \"29793e73-ea31-4460-9aa6-85235971e586\") " pod="openstack/placement-7d99c66444-6vrxg" Mar 19 12:36:35.932334 master-0 kubenswrapper[31830]: I0319 12:36:35.930471 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/29793e73-ea31-4460-9aa6-85235971e586-internal-tls-certs\") pod \"placement-7d99c66444-6vrxg\" (UID: \"29793e73-ea31-4460-9aa6-85235971e586\") " pod="openstack/placement-7d99c66444-6vrxg" Mar 19 12:36:35.932334 master-0 kubenswrapper[31830]: I0319 12:36:35.930540 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29793e73-ea31-4460-9aa6-85235971e586-scripts\") pod \"placement-7d99c66444-6vrxg\" (UID: \"29793e73-ea31-4460-9aa6-85235971e586\") " pod="openstack/placement-7d99c66444-6vrxg" Mar 19 12:36:35.932334 master-0 kubenswrapper[31830]: I0319 12:36:35.930568 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29793e73-ea31-4460-9aa6-85235971e586-config-data\") pod \"placement-7d99c66444-6vrxg\" (UID: \"29793e73-ea31-4460-9aa6-85235971e586\") " pod="openstack/placement-7d99c66444-6vrxg" Mar 19 12:36:35.932334 master-0 kubenswrapper[31830]: I0319 12:36:35.930617 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29793e73-ea31-4460-9aa6-85235971e586-logs\") pod \"placement-7d99c66444-6vrxg\" (UID: \"29793e73-ea31-4460-9aa6-85235971e586\") " pod="openstack/placement-7d99c66444-6vrxg" Mar 19 12:36:35.932334 master-0 kubenswrapper[31830]: I0319 12:36:35.930642 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/29793e73-ea31-4460-9aa6-85235971e586-public-tls-certs\") pod \"placement-7d99c66444-6vrxg\" (UID: \"29793e73-ea31-4460-9aa6-85235971e586\") " pod="openstack/placement-7d99c66444-6vrxg" Mar 19 12:36:35.932334 master-0 kubenswrapper[31830]: I0319 12:36:35.930760 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29793e73-ea31-4460-9aa6-85235971e586-combined-ca-bundle\") pod \"placement-7d99c66444-6vrxg\" (UID: \"29793e73-ea31-4460-9aa6-85235971e586\") " 
pod="openstack/placement-7d99c66444-6vrxg" Mar 19 12:36:35.932334 master-0 kubenswrapper[31830]: I0319 12:36:35.930784 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74l6r\" (UniqueName: \"kubernetes.io/projected/29793e73-ea31-4460-9aa6-85235971e586-kube-api-access-74l6r\") pod \"placement-7d99c66444-6vrxg\" (UID: \"29793e73-ea31-4460-9aa6-85235971e586\") " pod="openstack/placement-7d99c66444-6vrxg" Mar 19 12:36:35.932334 master-0 kubenswrapper[31830]: I0319 12:36:35.931248 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29793e73-ea31-4460-9aa6-85235971e586-logs\") pod \"placement-7d99c66444-6vrxg\" (UID: \"29793e73-ea31-4460-9aa6-85235971e586\") " pod="openstack/placement-7d99c66444-6vrxg" Mar 19 12:36:35.940830 master-0 kubenswrapper[31830]: I0319 12:36:35.938744 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29793e73-ea31-4460-9aa6-85235971e586-scripts\") pod \"placement-7d99c66444-6vrxg\" (UID: \"29793e73-ea31-4460-9aa6-85235971e586\") " pod="openstack/placement-7d99c66444-6vrxg" Mar 19 12:36:35.940830 master-0 kubenswrapper[31830]: I0319 12:36:35.938980 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29793e73-ea31-4460-9aa6-85235971e586-combined-ca-bundle\") pod \"placement-7d99c66444-6vrxg\" (UID: \"29793e73-ea31-4460-9aa6-85235971e586\") " pod="openstack/placement-7d99c66444-6vrxg" Mar 19 12:36:35.940830 master-0 kubenswrapper[31830]: I0319 12:36:35.940602 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/29793e73-ea31-4460-9aa6-85235971e586-internal-tls-certs\") pod \"placement-7d99c66444-6vrxg\" (UID: \"29793e73-ea31-4460-9aa6-85235971e586\") " pod="openstack/placement-7d99c66444-6vrxg" Mar 19 12:36:35.944277 master-0 kubenswrapper[31830]: I0319 12:36:35.941216 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/29793e73-ea31-4460-9aa6-85235971e586-public-tls-certs\") pod \"placement-7d99c66444-6vrxg\" (UID: \"29793e73-ea31-4460-9aa6-85235971e586\") " pod="openstack/placement-7d99c66444-6vrxg" Mar 19 12:36:35.956825 master-0 kubenswrapper[31830]: I0319 12:36:35.956416 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74l6r\" (UniqueName: \"kubernetes.io/projected/29793e73-ea31-4460-9aa6-85235971e586-kube-api-access-74l6r\") pod \"placement-7d99c66444-6vrxg\" (UID: \"29793e73-ea31-4460-9aa6-85235971e586\") " pod="openstack/placement-7d99c66444-6vrxg" Mar 19 12:36:35.956825 master-0 kubenswrapper[31830]: I0319 12:36:35.956470 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29793e73-ea31-4460-9aa6-85235971e586-config-data\") pod \"placement-7d99c66444-6vrxg\" (UID: \"29793e73-ea31-4460-9aa6-85235971e586\") " pod="openstack/placement-7d99c66444-6vrxg" Mar 19 12:36:36.055821 master-0 kubenswrapper[31830]: I0319 12:36:36.054323 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-cce1e-backup-0" Mar 19 12:36:36.055821 master-0 kubenswrapper[31830]: I0319 12:36:36.054993 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7d99c66444-6vrxg" Mar 19 12:36:36.659817 master-0 kubenswrapper[31830]: I0319 12:36:36.653544 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7d99c66444-6vrxg"] Mar 19 12:36:37.668574 master-0 kubenswrapper[31830]: I0319 12:36:37.668500 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d99c66444-6vrxg" event={"ID":"29793e73-ea31-4460-9aa6-85235971e586","Type":"ContainerStarted","Data":"eb8a0d79fcdd0e76c91e945b093c4222252a7e7ab808b9b84dd5ba0904e8eb35"} Mar 19 12:36:37.669170 master-0 kubenswrapper[31830]: I0319 12:36:37.668606 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d99c66444-6vrxg" event={"ID":"29793e73-ea31-4460-9aa6-85235971e586","Type":"ContainerStarted","Data":"48f157023f775e9d9565561c6615fb523eff4eb289f8dbbed6163c82d8b89ce3"} Mar 19 12:36:37.669170 master-0 kubenswrapper[31830]: I0319 12:36:37.668627 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d99c66444-6vrxg" event={"ID":"29793e73-ea31-4460-9aa6-85235971e586","Type":"ContainerStarted","Data":"7551b2ae5d9e9b979422dfc9c97626674f7bc12e309ba9d96561e3fc44fc1eef"} Mar 19 12:36:37.711025 master-0 kubenswrapper[31830]: I0319 12:36:37.710577 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7d99c66444-6vrxg" podStartSLOduration=2.71055378 podStartE2EDuration="2.71055378s" podCreationTimestamp="2026-03-19 12:36:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:36:37.699373164 +0000 UTC m=+1336.248333868" watchObservedRunningTime="2026-03-19 12:36:37.71055378 +0000 UTC m=+1336.259514474" Mar 19 12:36:38.682354 master-0 kubenswrapper[31830]: I0319 12:36:38.682288 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7d99c66444-6vrxg" Mar 19 12:36:38.682354 master-0 kubenswrapper[31830]: I0319 12:36:38.682331 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7d99c66444-6vrxg" Mar 19 12:36:40.103296 master-0 kubenswrapper[31830]: I0319 12:36:40.103228 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-64b98cb88d-7qp8f" Mar 19 12:36:40.455889 master-0 kubenswrapper[31830]: I0319 12:36:40.450185 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-cce1e-scheduler-0" Mar 19 12:36:40.891610 master-0 kubenswrapper[31830]: I0319 12:36:40.891551 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-cce1e-scheduler-0" Mar 19 12:36:40.988240 master-0 kubenswrapper[31830]: I0319 12:36:40.988169 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-cce1e-volume-lvm-iscsi-0" Mar 19 12:36:42.283338 master-0 kubenswrapper[31830]: I0319 12:36:42.283275 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-569d794d4c-pmgr5" Mar 19 12:36:43.726410 master-0 kubenswrapper[31830]: I0319 12:36:43.726285 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 19 12:36:43.729313 master-0 kubenswrapper[31830]: I0319 12:36:43.727877 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 19 12:36:43.735106 master-0 kubenswrapper[31830]: I0319 12:36:43.734686 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Mar 19 12:36:43.735106 master-0 kubenswrapper[31830]: I0319 12:36:43.734889 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Mar 19 12:36:43.766019 master-0 kubenswrapper[31830]: I0319 12:36:43.755508 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 19 12:36:43.915911 master-0 kubenswrapper[31830]: I0319 12:36:43.915678 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c928f8ae-cc84-4887-9b3b-dc1900338aab-openstack-config\") pod \"openstackclient\" (UID: \"c928f8ae-cc84-4887-9b3b-dc1900338aab\") " pod="openstack/openstackclient" Mar 19 12:36:43.915911 master-0 kubenswrapper[31830]: I0319 12:36:43.915748 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcq4s\" (UniqueName: \"kubernetes.io/projected/c928f8ae-cc84-4887-9b3b-dc1900338aab-kube-api-access-rcq4s\") pod \"openstackclient\" (UID: \"c928f8ae-cc84-4887-9b3b-dc1900338aab\") " pod="openstack/openstackclient" Mar 19 12:36:43.915911 master-0 kubenswrapper[31830]: I0319 12:36:43.915788 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c928f8ae-cc84-4887-9b3b-dc1900338aab-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c928f8ae-cc84-4887-9b3b-dc1900338aab\") " pod="openstack/openstackclient" Mar 19 12:36:43.915911 master-0 kubenswrapper[31830]: I0319 12:36:43.915852 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c928f8ae-cc84-4887-9b3b-dc1900338aab-openstack-config-secret\") pod \"openstackclient\" (UID: \"c928f8ae-cc84-4887-9b3b-dc1900338aab\") " pod="openstack/openstackclient" Mar 19 12:36:44.017838 master-0 kubenswrapper[31830]: I0319 12:36:44.017771 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c928f8ae-cc84-4887-9b3b-dc1900338aab-openstack-config\") pod \"openstackclient\" (UID: \"c928f8ae-cc84-4887-9b3b-dc1900338aab\") " pod="openstack/openstackclient" Mar 19 12:36:44.018071 master-0 kubenswrapper[31830]: I0319 12:36:44.017855 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcq4s\" (UniqueName: \"kubernetes.io/projected/c928f8ae-cc84-4887-9b3b-dc1900338aab-kube-api-access-rcq4s\") pod \"openstackclient\" (UID: \"c928f8ae-cc84-4887-9b3b-dc1900338aab\") " pod="openstack/openstackclient" Mar 19 12:36:44.018071 master-0 kubenswrapper[31830]: I0319 12:36:44.017897 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c928f8ae-cc84-4887-9b3b-dc1900338aab-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c928f8ae-cc84-4887-9b3b-dc1900338aab\") " pod="openstack/openstackclient" Mar 19 12:36:44.018071 master-0 kubenswrapper[31830]: I0319 12:36:44.017924 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" 
(UniqueName: \"kubernetes.io/secret/c928f8ae-cc84-4887-9b3b-dc1900338aab-openstack-config-secret\") pod \"openstackclient\" (UID: \"c928f8ae-cc84-4887-9b3b-dc1900338aab\") " pod="openstack/openstackclient" Mar 19 12:36:44.019173 master-0 kubenswrapper[31830]: I0319 12:36:44.019149 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c928f8ae-cc84-4887-9b3b-dc1900338aab-openstack-config\") pod \"openstackclient\" (UID: \"c928f8ae-cc84-4887-9b3b-dc1900338aab\") " pod="openstack/openstackclient" Mar 19 12:36:44.021937 master-0 kubenswrapper[31830]: I0319 12:36:44.021820 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c928f8ae-cc84-4887-9b3b-dc1900338aab-openstack-config-secret\") pod \"openstackclient\" (UID: \"c928f8ae-cc84-4887-9b3b-dc1900338aab\") " pod="openstack/openstackclient" Mar 19 12:36:44.034857 master-0 kubenswrapper[31830]: I0319 12:36:44.034792 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c928f8ae-cc84-4887-9b3b-dc1900338aab-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c928f8ae-cc84-4887-9b3b-dc1900338aab\") " pod="openstack/openstackclient" Mar 19 12:36:44.047103 master-0 kubenswrapper[31830]: I0319 12:36:44.047043 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcq4s\" (UniqueName: \"kubernetes.io/projected/c928f8ae-cc84-4887-9b3b-dc1900338aab-kube-api-access-rcq4s\") pod \"openstackclient\" (UID: \"c928f8ae-cc84-4887-9b3b-dc1900338aab\") " pod="openstack/openstackclient" Mar 19 12:36:44.064960 master-0 kubenswrapper[31830]: I0319 12:36:44.064919 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 19 12:36:44.599821 master-0 kubenswrapper[31830]: W0319 12:36:44.597334 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc928f8ae_cc84_4887_9b3b_dc1900338aab.slice/crio-225446c2983945630a33658fec72330a7199a4a1f587eed33faec504bc941b74 WatchSource:0}: Error finding container 225446c2983945630a33658fec72330a7199a4a1f587eed33faec504bc941b74: Status 404 returned error can't find the container with id 225446c2983945630a33658fec72330a7199a4a1f587eed33faec504bc941b74 Mar 19 12:36:44.602172 master-0 kubenswrapper[31830]: I0319 12:36:44.601819 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 19 12:36:44.779467 master-0 kubenswrapper[31830]: I0319 12:36:44.779404 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c928f8ae-cc84-4887-9b3b-dc1900338aab","Type":"ContainerStarted","Data":"225446c2983945630a33658fec72330a7199a4a1f587eed33faec504bc941b74"} Mar 19 12:36:47.304939 master-0 kubenswrapper[31830]: I0319 12:36:47.296602 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-f6fffcbf4-vwj74"] Mar 19 12:36:47.315823 master-0 kubenswrapper[31830]: I0319 12:36:47.313036 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-f6fffcbf4-vwj74" Mar 19 12:36:47.320904 master-0 kubenswrapper[31830]: I0319 12:36:47.317616 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Mar 19 12:36:47.321126 master-0 kubenswrapper[31830]: I0319 12:36:47.320940 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Mar 19 12:36:47.321749 master-0 kubenswrapper[31830]: I0319 12:36:47.321232 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 19 12:36:47.325384 master-0 kubenswrapper[31830]: I0319 12:36:47.324868 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-f6fffcbf4-vwj74"] Mar 19 12:36:47.409455 master-0 kubenswrapper[31830]: I0319 12:36:47.409313 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9c3ee17-ae52-4dac-829c-7217ec01755d-public-tls-certs\") pod \"swift-proxy-f6fffcbf4-vwj74\" (UID: \"a9c3ee17-ae52-4dac-829c-7217ec01755d\") " pod="openstack/swift-proxy-f6fffcbf4-vwj74" Mar 19 12:36:47.409455 master-0 kubenswrapper[31830]: I0319 12:36:47.409383 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9c3ee17-ae52-4dac-829c-7217ec01755d-config-data\") pod \"swift-proxy-f6fffcbf4-vwj74\" (UID: \"a9c3ee17-ae52-4dac-829c-7217ec01755d\") " pod="openstack/swift-proxy-f6fffcbf4-vwj74" Mar 19 12:36:47.409713 master-0 kubenswrapper[31830]: I0319 12:36:47.409493 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9c3ee17-ae52-4dac-829c-7217ec01755d-internal-tls-certs\") pod \"swift-proxy-f6fffcbf4-vwj74\" (UID: \"a9c3ee17-ae52-4dac-829c-7217ec01755d\") " pod="openstack/swift-proxy-f6fffcbf4-vwj74" Mar 19 12:36:47.409713 master-0 kubenswrapper[31830]: I0319 12:36:47.409553 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c3ee17-ae52-4dac-829c-7217ec01755d-combined-ca-bundle\") pod \"swift-proxy-f6fffcbf4-vwj74\" (UID: \"a9c3ee17-ae52-4dac-829c-7217ec01755d\") " pod="openstack/swift-proxy-f6fffcbf4-vwj74" Mar 19 12:36:47.409713 master-0 kubenswrapper[31830]: I0319 12:36:47.409577 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9c3ee17-ae52-4dac-829c-7217ec01755d-log-httpd\") pod \"swift-proxy-f6fffcbf4-vwj74\" (UID: \"a9c3ee17-ae52-4dac-829c-7217ec01755d\") " pod="openstack/swift-proxy-f6fffcbf4-vwj74" Mar 19 12:36:47.409713 master-0 kubenswrapper[31830]: I0319 12:36:47.409596 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a9c3ee17-ae52-4dac-829c-7217ec01755d-etc-swift\") pod \"swift-proxy-f6fffcbf4-vwj74\" (UID: \"a9c3ee17-ae52-4dac-829c-7217ec01755d\") " pod="openstack/swift-proxy-f6fffcbf4-vwj74" Mar 19 12:36:47.409713 master-0 kubenswrapper[31830]: I0319 12:36:47.409660 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/a9c3ee17-ae52-4dac-829c-7217ec01755d-run-httpd\") pod \"swift-proxy-f6fffcbf4-vwj74\" (UID: \"a9c3ee17-ae52-4dac-829c-7217ec01755d\") " pod="openstack/swift-proxy-f6fffcbf4-vwj74" Mar 19 12:36:47.409713 master-0 kubenswrapper[31830]: I0319 12:36:47.409690 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcf9c\" (UniqueName: \"kubernetes.io/projected/a9c3ee17-ae52-4dac-829c-7217ec01755d-kube-api-access-kcf9c\") pod \"swift-proxy-f6fffcbf4-vwj74\" (UID: \"a9c3ee17-ae52-4dac-829c-7217ec01755d\") " pod="openstack/swift-proxy-f6fffcbf4-vwj74" Mar 19 12:36:47.512107 master-0 kubenswrapper[31830]: I0319 12:36:47.512077 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9c3ee17-ae52-4dac-829c-7217ec01755d-run-httpd\") pod \"swift-proxy-f6fffcbf4-vwj74\" (UID: \"a9c3ee17-ae52-4dac-829c-7217ec01755d\") " pod="openstack/swift-proxy-f6fffcbf4-vwj74" Mar 19 12:36:47.512862 master-0 kubenswrapper[31830]: I0319 12:36:47.512842 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcf9c\" (UniqueName: \"kubernetes.io/projected/a9c3ee17-ae52-4dac-829c-7217ec01755d-kube-api-access-kcf9c\") pod \"swift-proxy-f6fffcbf4-vwj74\" (UID: \"a9c3ee17-ae52-4dac-829c-7217ec01755d\") " pod="openstack/swift-proxy-f6fffcbf4-vwj74" Mar 19 12:36:47.513026 master-0 kubenswrapper[31830]: I0319 12:36:47.512976 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9c3ee17-ae52-4dac-829c-7217ec01755d-public-tls-certs\") pod \"swift-proxy-f6fffcbf4-vwj74\" (UID: \"a9c3ee17-ae52-4dac-829c-7217ec01755d\") " pod="openstack/swift-proxy-f6fffcbf4-vwj74" Mar 19 12:36:47.513156 master-0 kubenswrapper[31830]: I0319 12:36:47.513143 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9c3ee17-ae52-4dac-829c-7217ec01755d-config-data\") pod \"swift-proxy-f6fffcbf4-vwj74\" (UID: \"a9c3ee17-ae52-4dac-829c-7217ec01755d\") " pod="openstack/swift-proxy-f6fffcbf4-vwj74" Mar 19 12:36:47.513406 master-0 kubenswrapper[31830]: I0319 12:36:47.513392 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9c3ee17-ae52-4dac-829c-7217ec01755d-internal-tls-certs\") pod \"swift-proxy-f6fffcbf4-vwj74\" (UID: \"a9c3ee17-ae52-4dac-829c-7217ec01755d\") " pod="openstack/swift-proxy-f6fffcbf4-vwj74" Mar 19 12:36:47.513580 master-0 kubenswrapper[31830]: I0319 12:36:47.513549 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c3ee17-ae52-4dac-829c-7217ec01755d-combined-ca-bundle\") pod \"swift-proxy-f6fffcbf4-vwj74\" (UID: \"a9c3ee17-ae52-4dac-829c-7217ec01755d\") " pod="openstack/swift-proxy-f6fffcbf4-vwj74" Mar 19 12:36:47.513677 master-0 kubenswrapper[31830]: I0319 12:36:47.512672 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9c3ee17-ae52-4dac-829c-7217ec01755d-run-httpd\") pod \"swift-proxy-f6fffcbf4-vwj74\" (UID: \"a9c3ee17-ae52-4dac-829c-7217ec01755d\") " pod="openstack/swift-proxy-f6fffcbf4-vwj74" Mar 19 12:36:47.514426 master-0 kubenswrapper[31830]: I0319 12:36:47.514270 31830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9c3ee17-ae52-4dac-829c-7217ec01755d-log-httpd\") pod \"swift-proxy-f6fffcbf4-vwj74\" (UID: \"a9c3ee17-ae52-4dac-829c-7217ec01755d\") " pod="openstack/swift-proxy-f6fffcbf4-vwj74" Mar 19 12:36:47.514426 master-0 kubenswrapper[31830]: I0319 12:36:47.514348 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a9c3ee17-ae52-4dac-829c-7217ec01755d-etc-swift\") pod \"swift-proxy-f6fffcbf4-vwj74\" (UID: \"a9c3ee17-ae52-4dac-829c-7217ec01755d\") " pod="openstack/swift-proxy-f6fffcbf4-vwj74" Mar 19 12:36:47.515050 master-0 kubenswrapper[31830]: I0319 12:36:47.515016 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9c3ee17-ae52-4dac-829c-7217ec01755d-log-httpd\") pod \"swift-proxy-f6fffcbf4-vwj74\" (UID: \"a9c3ee17-ae52-4dac-829c-7217ec01755d\") " pod="openstack/swift-proxy-f6fffcbf4-vwj74" Mar 19 12:36:47.529830 master-0 kubenswrapper[31830]: I0319 12:36:47.522268 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9c3ee17-ae52-4dac-829c-7217ec01755d-internal-tls-certs\") pod \"swift-proxy-f6fffcbf4-vwj74\" (UID: \"a9c3ee17-ae52-4dac-829c-7217ec01755d\") " pod="openstack/swift-proxy-f6fffcbf4-vwj74" Mar 19 12:36:47.531183 master-0 kubenswrapper[31830]: I0319 12:36:47.530893 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c3ee17-ae52-4dac-829c-7217ec01755d-combined-ca-bundle\") pod \"swift-proxy-f6fffcbf4-vwj74\" (UID: \"a9c3ee17-ae52-4dac-829c-7217ec01755d\") " pod="openstack/swift-proxy-f6fffcbf4-vwj74" Mar 19 12:36:47.532987 master-0 kubenswrapper[31830]: I0319 12:36:47.532909 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9c3ee17-ae52-4dac-829c-7217ec01755d-public-tls-certs\") pod \"swift-proxy-f6fffcbf4-vwj74\" (UID: \"a9c3ee17-ae52-4dac-829c-7217ec01755d\") " pod="openstack/swift-proxy-f6fffcbf4-vwj74" Mar 19 12:36:47.536721 master-0 kubenswrapper[31830]: I0319 12:36:47.534755 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9c3ee17-ae52-4dac-829c-7217ec01755d-config-data\") pod \"swift-proxy-f6fffcbf4-vwj74\" (UID: \"a9c3ee17-ae52-4dac-829c-7217ec01755d\") " pod="openstack/swift-proxy-f6fffcbf4-vwj74" Mar 19 12:36:47.541134 master-0 kubenswrapper[31830]: I0319 12:36:47.541073 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcf9c\" (UniqueName: \"kubernetes.io/projected/a9c3ee17-ae52-4dac-829c-7217ec01755d-kube-api-access-kcf9c\") pod \"swift-proxy-f6fffcbf4-vwj74\" (UID: \"a9c3ee17-ae52-4dac-829c-7217ec01755d\") " pod="openstack/swift-proxy-f6fffcbf4-vwj74" Mar 19 12:36:47.551138 master-0 kubenswrapper[31830]: I0319 12:36:47.551065 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a9c3ee17-ae52-4dac-829c-7217ec01755d-etc-swift\") pod \"swift-proxy-f6fffcbf4-vwj74\" (UID: \"a9c3ee17-ae52-4dac-829c-7217ec01755d\") " pod="openstack/swift-proxy-f6fffcbf4-vwj74" Mar 19 12:36:47.660831 master-0 kubenswrapper[31830]: I0319 12:36:47.660722 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-f6fffcbf4-vwj74" Mar 19 12:36:48.175960 master-0 kubenswrapper[31830]: I0319 12:36:48.175906 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-f6fffcbf4-vwj74"] Mar 19 12:36:48.851911 master-0 kubenswrapper[31830]: I0319 12:36:48.848284 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-f6fffcbf4-vwj74" event={"ID":"a9c3ee17-ae52-4dac-829c-7217ec01755d","Type":"ContainerStarted","Data":"1519b57685c751b8cadddfcc4e3b0bb664cdc20ebc5b5433dc36ad12127c41a4"} Mar 19 12:36:48.851911 master-0 kubenswrapper[31830]: I0319 12:36:48.848334 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-f6fffcbf4-vwj74" event={"ID":"a9c3ee17-ae52-4dac-829c-7217ec01755d","Type":"ContainerStarted","Data":"2437d0e30fb9cac647bb9c617c886d17b747ab5cbd04337e58f290b6c189afec"} Mar 19 12:36:48.851911 master-0 kubenswrapper[31830]: I0319 12:36:48.848344 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-f6fffcbf4-vwj74" event={"ID":"a9c3ee17-ae52-4dac-829c-7217ec01755d","Type":"ContainerStarted","Data":"2fb7d10f836a223afc14a74d342d33db5cd69c59e8dc80e808c4344cfdda5df4"} Mar 19 12:36:48.851911 master-0 kubenswrapper[31830]: I0319 12:36:48.848726 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-f6fffcbf4-vwj74" Mar 19 12:36:48.851911 master-0 kubenswrapper[31830]: I0319 12:36:48.848895 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-f6fffcbf4-vwj74" Mar 19 12:36:48.916389 master-0 kubenswrapper[31830]: I0319 12:36:48.914757 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-f6fffcbf4-vwj74" podStartSLOduration=1.914734353 podStartE2EDuration="1.914734353s" podCreationTimestamp="2026-03-19 12:36:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:36:48.892419741 +0000 UTC m=+1347.441380465" watchObservedRunningTime="2026-03-19 12:36:48.914734353 +0000 UTC m=+1347.463695057" Mar 19 12:36:51.193081 master-0 kubenswrapper[31830]: I0319 12:36:51.193021 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-949dd44b5-vklms" Mar 19 12:36:51.302546 master-0 kubenswrapper[31830]: I0319 12:36:51.302494 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-64b98cb88d-7qp8f"] Mar 19 12:36:51.303080 master-0 kubenswrapper[31830]: I0319 12:36:51.302765 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-64b98cb88d-7qp8f" podUID="f63713e2-7d18-4053-b79c-86ab7b8e1e57" containerName="neutron-api" containerID="cri-o://c1e486bc1b061db94e8c2a39ba8abda61e5e754c92bcec99626f94dd2915ed34" gracePeriod=30 Mar 19 12:36:51.309233 master-0 kubenswrapper[31830]: I0319 12:36:51.303291 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-64b98cb88d-7qp8f" podUID="f63713e2-7d18-4053-b79c-86ab7b8e1e57" containerName="neutron-httpd" containerID="cri-o://cb6af135f4ae69eedbb4aec9e3cbe89d878ef397b2a48c0d77f21c32471ee978" gracePeriod=30 Mar 19 12:36:52.578955 master-0 kubenswrapper[31830]: I0319 12:36:52.578790 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-n2tdc"] Mar 19 12:36:52.581601 master-0 kubenswrapper[31830]: I0319 12:36:52.580515 31830 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-n2tdc" Mar 19 12:36:52.609827 master-0 kubenswrapper[31830]: I0319 12:36:52.606959 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-n2tdc"] Mar 19 12:36:52.677849 master-0 kubenswrapper[31830]: I0319 12:36:52.676930 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-2lrh7"] Mar 19 12:36:52.687816 master-0 kubenswrapper[31830]: I0319 12:36:52.682025 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2lrh7" Mar 19 12:36:52.702832 master-0 kubenswrapper[31830]: I0319 12:36:52.693661 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baac9526-2845-4c1c-8a75-3ed2dcc2f3fb-operator-scripts\") pod \"nova-api-db-create-n2tdc\" (UID: \"baac9526-2845-4c1c-8a75-3ed2dcc2f3fb\") " pod="openstack/nova-api-db-create-n2tdc" Mar 19 12:36:52.702832 master-0 kubenswrapper[31830]: I0319 12:36:52.693728 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f54cs\" (UniqueName: \"kubernetes.io/projected/baac9526-2845-4c1c-8a75-3ed2dcc2f3fb-kube-api-access-f54cs\") pod \"nova-api-db-create-n2tdc\" (UID: \"baac9526-2845-4c1c-8a75-3ed2dcc2f3fb\") " pod="openstack/nova-api-db-create-n2tdc" Mar 19 12:36:52.721863 master-0 kubenswrapper[31830]: I0319 12:36:52.711237 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-2lrh7"] Mar 19 12:36:52.800866 master-0 kubenswrapper[31830]: I0319 12:36:52.800493 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f54cs\" (UniqueName: \"kubernetes.io/projected/baac9526-2845-4c1c-8a75-3ed2dcc2f3fb-kube-api-access-f54cs\") pod \"nova-api-db-create-n2tdc\" (UID: \"baac9526-2845-4c1c-8a75-3ed2dcc2f3fb\") " pod="openstack/nova-api-db-create-n2tdc" Mar 19 12:36:52.800866 master-0 kubenswrapper[31830]: I0319 12:36:52.800674 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj2p9\" (UniqueName: \"kubernetes.io/projected/16baa845-ba2a-42b4-b0a8-3745b32f4f3e-kube-api-access-pj2p9\") pod \"nova-cell0-db-create-2lrh7\" (UID: \"16baa845-ba2a-42b4-b0a8-3745b32f4f3e\") " pod="openstack/nova-cell0-db-create-2lrh7" Mar 19 12:36:52.801152 master-0 kubenswrapper[31830]: I0319 12:36:52.800975 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16baa845-ba2a-42b4-b0a8-3745b32f4f3e-operator-scripts\") pod \"nova-cell0-db-create-2lrh7\" (UID: \"16baa845-ba2a-42b4-b0a8-3745b32f4f3e\") " pod="openstack/nova-cell0-db-create-2lrh7" Mar 19 12:36:52.801152 master-0 kubenswrapper[31830]: I0319 12:36:52.801032 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baac9526-2845-4c1c-8a75-3ed2dcc2f3fb-operator-scripts\") pod \"nova-api-db-create-n2tdc\" (UID: \"baac9526-2845-4c1c-8a75-3ed2dcc2f3fb\") " pod="openstack/nova-api-db-create-n2tdc" Mar 19 12:36:52.823845 master-0 kubenswrapper[31830]: I0319 12:36:52.823067 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/baac9526-2845-4c1c-8a75-3ed2dcc2f3fb-operator-scripts\") pod \"nova-api-db-create-n2tdc\" (UID: \"baac9526-2845-4c1c-8a75-3ed2dcc2f3fb\") " pod="openstack/nova-api-db-create-n2tdc" Mar 19 12:36:52.853911 master-0 kubenswrapper[31830]: I0319 12:36:52.843013 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f54cs\" (UniqueName: \"kubernetes.io/projected/baac9526-2845-4c1c-8a75-3ed2dcc2f3fb-kube-api-access-f54cs\") pod \"nova-api-db-create-n2tdc\" (UID: \"baac9526-2845-4c1c-8a75-3ed2dcc2f3fb\") " pod="openstack/nova-api-db-create-n2tdc" Mar 19 12:36:52.858835 master-0 kubenswrapper[31830]: I0319 12:36:52.857733 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-q72gc"] Mar 19 12:36:52.862845 master-0 kubenswrapper[31830]: I0319 12:36:52.859947 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-q72gc" Mar 19 12:36:52.886835 master-0 kubenswrapper[31830]: I0319 12:36:52.885945 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-4ebf-account-create-update-kdfvk"] Mar 19 12:36:52.893836 master-0 kubenswrapper[31830]: I0319 12:36:52.887836 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4ebf-account-create-update-kdfvk" Mar 19 12:36:52.904047 master-0 kubenswrapper[31830]: I0319 12:36:52.896740 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 19 12:36:52.921834 master-0 kubenswrapper[31830]: I0319 12:36:52.921193 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj2p9\" (UniqueName: \"kubernetes.io/projected/16baa845-ba2a-42b4-b0a8-3745b32f4f3e-kube-api-access-pj2p9\") pod \"nova-cell0-db-create-2lrh7\" (UID: \"16baa845-ba2a-42b4-b0a8-3745b32f4f3e\") " pod="openstack/nova-cell0-db-create-2lrh7" Mar 19 12:36:52.921834 master-0 kubenswrapper[31830]: I0319 12:36:52.921382 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16baa845-ba2a-42b4-b0a8-3745b32f4f3e-operator-scripts\") pod \"nova-cell0-db-create-2lrh7\" (UID: \"16baa845-ba2a-42b4-b0a8-3745b32f4f3e\") " pod="openstack/nova-cell0-db-create-2lrh7" Mar 19 12:36:52.925840 master-0 kubenswrapper[31830]: I0319 12:36:52.922541 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16baa845-ba2a-42b4-b0a8-3745b32f4f3e-operator-scripts\") pod \"nova-cell0-db-create-2lrh7\" (UID: \"16baa845-ba2a-42b4-b0a8-3745b32f4f3e\") " pod="openstack/nova-cell0-db-create-2lrh7" Mar 19 12:36:52.944857 master-0 kubenswrapper[31830]: I0319 12:36:52.938616 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-q72gc"] Mar 19 12:36:52.964982 master-0 kubenswrapper[31830]: I0319 12:36:52.964909 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-n2tdc" Mar 19 12:36:52.971774 master-0 kubenswrapper[31830]: I0319 12:36:52.967841 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj2p9\" (UniqueName: \"kubernetes.io/projected/16baa845-ba2a-42b4-b0a8-3745b32f4f3e-kube-api-access-pj2p9\") pod \"nova-cell0-db-create-2lrh7\" (UID: \"16baa845-ba2a-42b4-b0a8-3745b32f4f3e\") " pod="openstack/nova-cell0-db-create-2lrh7" Mar 19 12:36:52.991115 master-0 kubenswrapper[31830]: I0319 12:36:52.990339 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-4ebf-account-create-update-kdfvk"] Mar 19 12:36:53.008337 master-0 kubenswrapper[31830]: I0319 12:36:53.008276 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2lrh7" Mar 19 12:36:53.032731 master-0 kubenswrapper[31830]: I0319 12:36:53.032679 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abc7c0ae-fdcf-449d-86d3-51d23d43be6c-operator-scripts\") pod \"nova-api-4ebf-account-create-update-kdfvk\" (UID: \"abc7c0ae-fdcf-449d-86d3-51d23d43be6c\") " pod="openstack/nova-api-4ebf-account-create-update-kdfvk" Mar 19 12:36:53.032852 master-0 kubenswrapper[31830]: I0319 12:36:53.032799 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qthmk\" (UniqueName: \"kubernetes.io/projected/2a04e9d8-ceee-449a-9e77-ebbe7b230aa7-kube-api-access-qthmk\") pod \"nova-cell1-db-create-q72gc\" (UID: \"2a04e9d8-ceee-449a-9e77-ebbe7b230aa7\") " pod="openstack/nova-cell1-db-create-q72gc" Mar 19 12:36:53.032913 master-0 kubenswrapper[31830]: I0319 12:36:53.032899 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a04e9d8-ceee-449a-9e77-ebbe7b230aa7-operator-scripts\") pod \"nova-cell1-db-create-q72gc\" (UID: \"2a04e9d8-ceee-449a-9e77-ebbe7b230aa7\") " pod="openstack/nova-cell1-db-create-q72gc" Mar 19 12:36:53.033015 master-0 kubenswrapper[31830]: I0319 12:36:53.032968 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8cjw\" (UniqueName: \"kubernetes.io/projected/abc7c0ae-fdcf-449d-86d3-51d23d43be6c-kube-api-access-x8cjw\") pod \"nova-api-4ebf-account-create-update-kdfvk\" (UID: \"abc7c0ae-fdcf-449d-86d3-51d23d43be6c\") " pod="openstack/nova-api-4ebf-account-create-update-kdfvk" Mar 19 12:36:53.075153 master-0 kubenswrapper[31830]: I0319 12:36:53.074940 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-eb56-account-create-update-6w8tc"] Mar 19 12:36:53.079820 master-0 kubenswrapper[31830]: I0319 12:36:53.079762 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-eb56-account-create-update-6w8tc" Mar 19 12:36:53.084257 master-0 kubenswrapper[31830]: I0319 12:36:53.084210 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 19 12:36:53.095620 master-0 kubenswrapper[31830]: I0319 12:36:53.095400 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-eb56-account-create-update-6w8tc"] Mar 19 12:36:53.135757 master-0 kubenswrapper[31830]: I0319 12:36:53.135623 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qthmk\" (UniqueName: \"kubernetes.io/projected/2a04e9d8-ceee-449a-9e77-ebbe7b230aa7-kube-api-access-qthmk\") pod \"nova-cell1-db-create-q72gc\" (UID: \"2a04e9d8-ceee-449a-9e77-ebbe7b230aa7\") " pod="openstack/nova-cell1-db-create-q72gc" Mar 19 12:36:53.135757 master-0 kubenswrapper[31830]: I0319 12:36:53.135720 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a04e9d8-ceee-449a-9e77-ebbe7b230aa7-operator-scripts\") pod \"nova-cell1-db-create-q72gc\" (UID: \"2a04e9d8-ceee-449a-9e77-ebbe7b230aa7\") " pod="openstack/nova-cell1-db-create-q72gc" Mar 19 12:36:53.136106 master-0 kubenswrapper[31830]: I0319 12:36:53.135762 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8cjw\" (UniqueName: \"kubernetes.io/projected/abc7c0ae-fdcf-449d-86d3-51d23d43be6c-kube-api-access-x8cjw\") pod \"nova-api-4ebf-account-create-update-kdfvk\" (UID: \"abc7c0ae-fdcf-449d-86d3-51d23d43be6c\") " pod="openstack/nova-api-4ebf-account-create-update-kdfvk" Mar 19 12:36:53.136106 master-0 kubenswrapper[31830]: I0319 12:36:53.135913 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abc7c0ae-fdcf-449d-86d3-51d23d43be6c-operator-scripts\") pod \"nova-api-4ebf-account-create-update-kdfvk\" (UID: \"abc7c0ae-fdcf-449d-86d3-51d23d43be6c\") " pod="openstack/nova-api-4ebf-account-create-update-kdfvk" Mar 19 12:36:53.139067 master-0 kubenswrapper[31830]: I0319 12:36:53.136957 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a04e9d8-ceee-449a-9e77-ebbe7b230aa7-operator-scripts\") pod \"nova-cell1-db-create-q72gc\" (UID: \"2a04e9d8-ceee-449a-9e77-ebbe7b230aa7\") " pod="openstack/nova-cell1-db-create-q72gc" Mar 19 12:36:53.139067 master-0 kubenswrapper[31830]: I0319 12:36:53.136966 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abc7c0ae-fdcf-449d-86d3-51d23d43be6c-operator-scripts\") pod \"nova-api-4ebf-account-create-update-kdfvk\" (UID: \"abc7c0ae-fdcf-449d-86d3-51d23d43be6c\") " pod="openstack/nova-api-4ebf-account-create-update-kdfvk" Mar 19 12:36:53.168974 master-0 kubenswrapper[31830]: I0319 12:36:53.166049 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8cjw\" (UniqueName: \"kubernetes.io/projected/abc7c0ae-fdcf-449d-86d3-51d23d43be6c-kube-api-access-x8cjw\") pod \"nova-api-4ebf-account-create-update-kdfvk\" (UID: \"abc7c0ae-fdcf-449d-86d3-51d23d43be6c\") " pod="openstack/nova-api-4ebf-account-create-update-kdfvk" Mar 19 12:36:53.177841 master-0 kubenswrapper[31830]: I0319 12:36:53.175136 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qthmk\" (UniqueName: \"kubernetes.io/projected/2a04e9d8-ceee-449a-9e77-ebbe7b230aa7-kube-api-access-qthmk\") pod \"nova-cell1-db-create-q72gc\" (UID: \"2a04e9d8-ceee-449a-9e77-ebbe7b230aa7\") " pod="openstack/nova-cell1-db-create-q72gc" Mar 19 12:36:53.198070 master-0 kubenswrapper[31830]: I0319 12:36:53.198034 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-fd15-account-create-update-bn6tm"] Mar 19 12:36:53.209954 master-0 kubenswrapper[31830]: I0319 12:36:53.201425 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-fd15-account-create-update-bn6tm" Mar 19 12:36:53.209954 master-0 kubenswrapper[31830]: I0319 12:36:53.209265 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 19 12:36:53.211309 master-0 kubenswrapper[31830]: I0319 12:36:53.210418 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-fd15-account-create-update-bn6tm"] Mar 19 12:36:53.238101 master-0 kubenswrapper[31830]: I0319 12:36:53.237207 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-q72gc" Mar 19 12:36:53.238101 master-0 kubenswrapper[31830]: I0319 12:36:53.237780 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ab21923-ce35-4200-b1d5-d0d20931131c-operator-scripts\") pod \"nova-cell0-eb56-account-create-update-6w8tc\" (UID: \"6ab21923-ce35-4200-b1d5-d0d20931131c\") " pod="openstack/nova-cell0-eb56-account-create-update-6w8tc" Mar 19 12:36:53.238101 master-0 kubenswrapper[31830]: I0319 12:36:53.237995 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms7sp\" (UniqueName: \"kubernetes.io/projected/6ab21923-ce35-4200-b1d5-d0d20931131c-kube-api-access-ms7sp\") pod \"nova-cell0-eb56-account-create-update-6w8tc\" (UID: \"6ab21923-ce35-4200-b1d5-d0d20931131c\") " pod="openstack/nova-cell0-eb56-account-create-update-6w8tc" Mar 19 12:36:53.238393 master-0 kubenswrapper[31830]: I0319 12:36:53.238365 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-4ebf-account-create-update-kdfvk" Mar 19 12:36:53.341685 master-0 kubenswrapper[31830]: I0319 12:36:53.339528 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b65v6\" (UniqueName: \"kubernetes.io/projected/617862b3-8acc-478a-a829-74116f0d4a3d-kube-api-access-b65v6\") pod \"nova-cell1-fd15-account-create-update-bn6tm\" (UID: \"617862b3-8acc-478a-a829-74116f0d4a3d\") " pod="openstack/nova-cell1-fd15-account-create-update-bn6tm" Mar 19 12:36:53.341685 master-0 kubenswrapper[31830]: I0319 12:36:53.339610 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms7sp\" (UniqueName: \"kubernetes.io/projected/6ab21923-ce35-4200-b1d5-d0d20931131c-kube-api-access-ms7sp\") pod \"nova-cell0-eb56-account-create-update-6w8tc\" (UID: \"6ab21923-ce35-4200-b1d5-d0d20931131c\") " pod="openstack/nova-cell0-eb56-account-create-update-6w8tc" Mar 19 12:36:53.341685 master-0 kubenswrapper[31830]: I0319 12:36:53.339665 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/617862b3-8acc-478a-a829-74116f0d4a3d-operator-scripts\") pod \"nova-cell1-fd15-account-create-update-bn6tm\" (UID: \"617862b3-8acc-478a-a829-74116f0d4a3d\") " pod="openstack/nova-cell1-fd15-account-create-update-bn6tm" Mar 19 12:36:53.341685 master-0 kubenswrapper[31830]: I0319 12:36:53.339760 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ab21923-ce35-4200-b1d5-d0d20931131c-operator-scripts\") pod \"nova-cell0-eb56-account-create-update-6w8tc\" (UID: \"6ab21923-ce35-4200-b1d5-d0d20931131c\") " pod="openstack/nova-cell0-eb56-account-create-update-6w8tc" Mar 19 12:36:53.341685 master-0 kubenswrapper[31830]: I0319 12:36:53.340432 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ab21923-ce35-4200-b1d5-d0d20931131c-operator-scripts\") pod \"nova-cell0-eb56-account-create-update-6w8tc\" (UID: \"6ab21923-ce35-4200-b1d5-d0d20931131c\") " pod="openstack/nova-cell0-eb56-account-create-update-6w8tc" Mar 19 12:36:53.687145 master-0 kubenswrapper[31830]: I0319 12:36:53.686978 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b65v6\" (UniqueName: \"kubernetes.io/projected/617862b3-8acc-478a-a829-74116f0d4a3d-kube-api-access-b65v6\") pod \"nova-cell1-fd15-account-create-update-bn6tm\" (UID: \"617862b3-8acc-478a-a829-74116f0d4a3d\") " pod="openstack/nova-cell1-fd15-account-create-update-bn6tm" Mar 19 12:36:53.687728 master-0 kubenswrapper[31830]: I0319 12:36:53.687176 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/617862b3-8acc-478a-a829-74116f0d4a3d-operator-scripts\") pod \"nova-cell1-fd15-account-create-update-bn6tm\" (UID: \"617862b3-8acc-478a-a829-74116f0d4a3d\") " pod="openstack/nova-cell1-fd15-account-create-update-bn6tm" Mar 19 12:36:53.690957 master-0 kubenswrapper[31830]: I0319 12:36:53.688180 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/617862b3-8acc-478a-a829-74116f0d4a3d-operator-scripts\") pod \"nova-cell1-fd15-account-create-update-bn6tm\" (UID: 
\"617862b3-8acc-478a-a829-74116f0d4a3d\") " pod="openstack/nova-cell1-fd15-account-create-update-bn6tm" Mar 19 12:36:53.731320 master-0 kubenswrapper[31830]: I0319 12:36:53.716463 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b65v6\" (UniqueName: \"kubernetes.io/projected/617862b3-8acc-478a-a829-74116f0d4a3d-kube-api-access-b65v6\") pod \"nova-cell1-fd15-account-create-update-bn6tm\" (UID: \"617862b3-8acc-478a-a829-74116f0d4a3d\") " pod="openstack/nova-cell1-fd15-account-create-update-bn6tm" Mar 19 12:36:53.731320 master-0 kubenswrapper[31830]: I0319 12:36:53.716551 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms7sp\" (UniqueName: \"kubernetes.io/projected/6ab21923-ce35-4200-b1d5-d0d20931131c-kube-api-access-ms7sp\") pod \"nova-cell0-eb56-account-create-update-6w8tc\" (UID: \"6ab21923-ce35-4200-b1d5-d0d20931131c\") " pod="openstack/nova-cell0-eb56-account-create-update-6w8tc" Mar 19 12:36:53.889538 master-0 kubenswrapper[31830]: I0319 12:36:53.889452 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-fd15-account-create-update-bn6tm" Mar 19 12:36:54.012348 master-0 kubenswrapper[31830]: I0319 12:36:54.012307 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-eb56-account-create-update-6w8tc" Mar 19 12:36:57.666085 master-0 kubenswrapper[31830]: I0319 12:36:57.666015 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-f6fffcbf4-vwj74" Mar 19 12:36:57.667531 master-0 kubenswrapper[31830]: I0319 12:36:57.667493 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-f6fffcbf4-vwj74" Mar 19 12:36:58.016216 master-0 kubenswrapper[31830]: I0319 12:36:58.016150 31830 generic.go:334] "Generic (PLEG): container finished" podID="f63713e2-7d18-4053-b79c-86ab7b8e1e57" containerID="cb6af135f4ae69eedbb4aec9e3cbe89d878ef397b2a48c0d77f21c32471ee978" exitCode=0 Mar 19 12:36:58.016216 master-0 kubenswrapper[31830]: I0319 12:36:58.016199 31830 generic.go:334] "Generic (PLEG): container finished" podID="f63713e2-7d18-4053-b79c-86ab7b8e1e57" containerID="c1e486bc1b061db94e8c2a39ba8abda61e5e754c92bcec99626f94dd2915ed34" exitCode=0 Mar 19 12:36:58.016964 master-0 kubenswrapper[31830]: I0319 12:36:58.016898 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64b98cb88d-7qp8f" event={"ID":"f63713e2-7d18-4053-b79c-86ab7b8e1e57","Type":"ContainerDied","Data":"cb6af135f4ae69eedbb4aec9e3cbe89d878ef397b2a48c0d77f21c32471ee978"} Mar 19 12:36:58.017037 master-0 kubenswrapper[31830]: I0319 12:36:58.016973 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64b98cb88d-7qp8f" event={"ID":"f63713e2-7d18-4053-b79c-86ab7b8e1e57","Type":"ContainerDied","Data":"c1e486bc1b061db94e8c2a39ba8abda61e5e754c92bcec99626f94dd2915ed34"} Mar 19 12:36:59.021524 master-0 kubenswrapper[31830]: I0319 12:36:59.021468 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-fd15-account-create-update-bn6tm"] Mar 19 12:36:59.253900 master-0 kubenswrapper[31830]: I0319 12:36:59.249603 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-4ebf-account-create-update-kdfvk"] Mar 19 12:36:59.269828 master-0 kubenswrapper[31830]: I0319 12:36:59.269725 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-n2tdc"] Mar 19 
12:36:59.287797 master-0 kubenswrapper[31830]: I0319 12:36:59.287735 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-eb56-account-create-update-6w8tc"] Mar 19 12:36:59.302150 master-0 kubenswrapper[31830]: I0319 12:36:59.299857 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-q72gc"] Mar 19 12:36:59.310762 master-0 kubenswrapper[31830]: I0319 12:36:59.310701 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-2lrh7"] Mar 19 12:36:59.532289 master-0 kubenswrapper[31830]: W0319 12:36:59.532232 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod617862b3_8acc_478a_a829_74116f0d4a3d.slice/crio-0817f7585f192ef94707c836ca2c96999c7ceb9c6541c5ea996aa1d537c6bc99 WatchSource:0}: Error finding container 0817f7585f192ef94707c836ca2c96999c7ceb9c6541c5ea996aa1d537c6bc99: Status 404 returned error can't find the container with id 0817f7585f192ef94707c836ca2c96999c7ceb9c6541c5ea996aa1d537c6bc99 Mar 19 12:36:59.535621 master-0 kubenswrapper[31830]: W0319 12:36:59.535583 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16baa845_ba2a_42b4_b0a8_3745b32f4f3e.slice/crio-2468fa93dd924f5ac542a6bd945e1e9b89eb80e148c0dddd1e8409760cdc715c WatchSource:0}: Error finding container 2468fa93dd924f5ac542a6bd945e1e9b89eb80e148c0dddd1e8409760cdc715c: Status 404 returned error can't find the container with id 2468fa93dd924f5ac542a6bd945e1e9b89eb80e148c0dddd1e8409760cdc715c Mar 19 12:36:59.553189 master-0 kubenswrapper[31830]: W0319 12:36:59.551333 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabc7c0ae_fdcf_449d_86d3_51d23d43be6c.slice/crio-026352c205a5e4d6c3973f1a5721311fc976841b6899d63c0ec1231e396beba3 WatchSource:0}: Error finding container 026352c205a5e4d6c3973f1a5721311fc976841b6899d63c0ec1231e396beba3: Status 404 returned error can't find the container with id 026352c205a5e4d6c3973f1a5721311fc976841b6899d63c0ec1231e396beba3 Mar 19 12:37:00.049928 master-0 kubenswrapper[31830]: I0319 12:37:00.049870 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2lrh7" event={"ID":"16baa845-ba2a-42b4-b0a8-3745b32f4f3e","Type":"ContainerStarted","Data":"2468fa93dd924f5ac542a6bd945e1e9b89eb80e148c0dddd1e8409760cdc715c"} Mar 19 12:37:00.052368 master-0 kubenswrapper[31830]: I0319 12:37:00.052319 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-eb56-account-create-update-6w8tc" event={"ID":"6ab21923-ce35-4200-b1d5-d0d20931131c","Type":"ContainerStarted","Data":"bd85dcee1fc4f15ce257f19fd7890b57931c57e2bfe3c8943f9fc8986becd004"} Mar 19 12:37:00.054355 master-0 kubenswrapper[31830]: I0319 12:37:00.054302 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4ebf-account-create-update-kdfvk" event={"ID":"abc7c0ae-fdcf-449d-86d3-51d23d43be6c","Type":"ContainerStarted","Data":"026352c205a5e4d6c3973f1a5721311fc976841b6899d63c0ec1231e396beba3"} Mar 19 12:37:00.055564 master-0 kubenswrapper[31830]: I0319 12:37:00.055438 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-n2tdc" event={"ID":"baac9526-2845-4c1c-8a75-3ed2dcc2f3fb","Type":"ContainerStarted","Data":"5ccad972b632763d788fa228cadd37ae558a0cb38a9d15b5a8ca931ca2d69d72"} Mar 19 
12:37:00.057286 master-0 kubenswrapper[31830]: I0319 12:37:00.057259 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64b98cb88d-7qp8f" event={"ID":"f63713e2-7d18-4053-b79c-86ab7b8e1e57","Type":"ContainerDied","Data":"4c0b32a194c064efd865c0225819704a1830c5b8e49a78d509bfd0a4cc84e5b9"} Mar 19 12:37:00.057378 master-0 kubenswrapper[31830]: I0319 12:37:00.057288 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c0b32a194c064efd865c0225819704a1830c5b8e49a78d509bfd0a4cc84e5b9" Mar 19 12:37:00.058447 master-0 kubenswrapper[31830]: I0319 12:37:00.058417 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-q72gc" event={"ID":"2a04e9d8-ceee-449a-9e77-ebbe7b230aa7","Type":"ContainerStarted","Data":"d2df703a759faf0500ab804a7957eba7df5693b451232f3ed445ae89bc763bf4"} Mar 19 12:37:00.059742 master-0 kubenswrapper[31830]: I0319 12:37:00.059714 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-fd15-account-create-update-bn6tm" event={"ID":"617862b3-8acc-478a-a829-74116f0d4a3d","Type":"ContainerStarted","Data":"0817f7585f192ef94707c836ca2c96999c7ceb9c6541c5ea996aa1d537c6bc99"} Mar 19 12:37:00.121845 master-0 kubenswrapper[31830]: I0319 12:37:00.121787 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64b98cb88d-7qp8f" Mar 19 12:37:00.193712 master-0 kubenswrapper[31830]: I0319 12:37:00.192298 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f63713e2-7d18-4053-b79c-86ab7b8e1e57-combined-ca-bundle\") pod \"f63713e2-7d18-4053-b79c-86ab7b8e1e57\" (UID: \"f63713e2-7d18-4053-b79c-86ab7b8e1e57\") " Mar 19 12:37:00.193712 master-0 kubenswrapper[31830]: I0319 12:37:00.192389 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f63713e2-7d18-4053-b79c-86ab7b8e1e57-ovndb-tls-certs\") pod \"f63713e2-7d18-4053-b79c-86ab7b8e1e57\" (UID: \"f63713e2-7d18-4053-b79c-86ab7b8e1e57\") " Mar 19 12:37:00.193712 master-0 kubenswrapper[31830]: I0319 12:37:00.192453 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9ghf\" (UniqueName: \"kubernetes.io/projected/f63713e2-7d18-4053-b79c-86ab7b8e1e57-kube-api-access-d9ghf\") pod \"f63713e2-7d18-4053-b79c-86ab7b8e1e57\" (UID: \"f63713e2-7d18-4053-b79c-86ab7b8e1e57\") " Mar 19 12:37:00.193712 master-0 kubenswrapper[31830]: I0319 12:37:00.192549 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f63713e2-7d18-4053-b79c-86ab7b8e1e57-httpd-config\") pod \"f63713e2-7d18-4053-b79c-86ab7b8e1e57\" (UID: \"f63713e2-7d18-4053-b79c-86ab7b8e1e57\") " Mar 19 12:37:00.204074 master-0 kubenswrapper[31830]: I0319 12:37:00.197305 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f63713e2-7d18-4053-b79c-86ab7b8e1e57-kube-api-access-d9ghf" (OuterVolumeSpecName: "kube-api-access-d9ghf") pod "f63713e2-7d18-4053-b79c-86ab7b8e1e57" (UID: "f63713e2-7d18-4053-b79c-86ab7b8e1e57"). InnerVolumeSpecName "kube-api-access-d9ghf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:37:00.204074 master-0 kubenswrapper[31830]: I0319 12:37:00.197467 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f63713e2-7d18-4053-b79c-86ab7b8e1e57-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "f63713e2-7d18-4053-b79c-86ab7b8e1e57" (UID: "f63713e2-7d18-4053-b79c-86ab7b8e1e57"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:37:00.294528 master-0 kubenswrapper[31830]: I0319 12:37:00.294429 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f63713e2-7d18-4053-b79c-86ab7b8e1e57-config\") pod \"f63713e2-7d18-4053-b79c-86ab7b8e1e57\" (UID: \"f63713e2-7d18-4053-b79c-86ab7b8e1e57\") " Mar 19 12:37:00.295893 master-0 kubenswrapper[31830]: I0319 12:37:00.295862 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9ghf\" (UniqueName: \"kubernetes.io/projected/f63713e2-7d18-4053-b79c-86ab7b8e1e57-kube-api-access-d9ghf\") on node \"master-0\" DevicePath \"\"" Mar 19 12:37:00.296079 master-0 kubenswrapper[31830]: I0319 12:37:00.296052 31830 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f63713e2-7d18-4053-b79c-86ab7b8e1e57-httpd-config\") on node \"master-0\" DevicePath \"\"" Mar 19 12:37:00.331334 master-0 kubenswrapper[31830]: I0319 12:37:00.331275 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-f4e38-default-internal-api-0"] Mar 19 12:37:00.331553 master-0 kubenswrapper[31830]: I0319 12:37:00.331515 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-f4e38-default-internal-api-0" podUID="3e4ccffc-3539-4e3a-b507-3fa51250d5a6" containerName="glance-log" containerID="cri-o://a662b6953e099e8046c0f19e2f43fe2830ffd4d8ed5268bb5b5772d761645370" gracePeriod=30 Mar 19 12:37:00.331674 master-0 kubenswrapper[31830]: I0319 12:37:00.331653 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-f4e38-default-internal-api-0" podUID="3e4ccffc-3539-4e3a-b507-3fa51250d5a6" containerName="glance-httpd" containerID="cri-o://b69e125e42a299fc4c60cee3320600e1ac8a82dfc8fed27137eaceadec37c002" gracePeriod=30 Mar 19 12:37:00.439630 master-0 kubenswrapper[31830]: I0319 12:37:00.439563 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f63713e2-7d18-4053-b79c-86ab7b8e1e57-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f63713e2-7d18-4053-b79c-86ab7b8e1e57" (UID: "f63713e2-7d18-4053-b79c-86ab7b8e1e57"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:37:00.459429 master-0 kubenswrapper[31830]: I0319 12:37:00.459333 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f63713e2-7d18-4053-b79c-86ab7b8e1e57-config" (OuterVolumeSpecName: "config") pod "f63713e2-7d18-4053-b79c-86ab7b8e1e57" (UID: "f63713e2-7d18-4053-b79c-86ab7b8e1e57"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:37:00.474994 master-0 kubenswrapper[31830]: I0319 12:37:00.474925 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f63713e2-7d18-4053-b79c-86ab7b8e1e57-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "f63713e2-7d18-4053-b79c-86ab7b8e1e57" (UID: "f63713e2-7d18-4053-b79c-86ab7b8e1e57"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:37:00.501437 master-0 kubenswrapper[31830]: I0319 12:37:00.500459 31830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f63713e2-7d18-4053-b79c-86ab7b8e1e57-config\") on node \"master-0\" DevicePath \"\"" Mar 19 12:37:00.501437 master-0 kubenswrapper[31830]: I0319 12:37:00.500500 31830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f63713e2-7d18-4053-b79c-86ab7b8e1e57-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:37:00.501437 master-0 kubenswrapper[31830]: I0319 12:37:00.500511 31830 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f63713e2-7d18-4053-b79c-86ab7b8e1e57-ovndb-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 19 12:37:01.077700 master-0 kubenswrapper[31830]: I0319 12:37:01.077635 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c928f8ae-cc84-4887-9b3b-dc1900338aab","Type":"ContainerStarted","Data":"abe2a622e9b24ca2c09c9fddf3f1d80cda2d2b7872c78aa1100e8908e2c78c5a"} Mar 19 12:37:01.079948 master-0 kubenswrapper[31830]: I0319 12:37:01.079905 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-n2tdc" event={"ID":"baac9526-2845-4c1c-8a75-3ed2dcc2f3fb","Type":"ContainerStarted","Data":"e49918a5328c9e8fc609ee5bfdeb6285aab3f4a17f79dd20701d65d315b463d2"} Mar 19 12:37:01.082313 master-0 kubenswrapper[31830]: I0319 12:37:01.082176 31830 generic.go:334] "Generic (PLEG): container finished" podID="3e4ccffc-3539-4e3a-b507-3fa51250d5a6" containerID="a662b6953e099e8046c0f19e2f43fe2830ffd4d8ed5268bb5b5772d761645370" exitCode=143 Mar 19 12:37:01.082313 master-0 kubenswrapper[31830]: I0319 12:37:01.082246 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f4e38-default-internal-api-0" event={"ID":"3e4ccffc-3539-4e3a-b507-3fa51250d5a6","Type":"ContainerDied","Data":"a662b6953e099e8046c0f19e2f43fe2830ffd4d8ed5268bb5b5772d761645370"} Mar 19 12:37:01.084222 master-0 kubenswrapper[31830]: I0319 12:37:01.083956 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-q72gc" event={"ID":"2a04e9d8-ceee-449a-9e77-ebbe7b230aa7","Type":"ContainerStarted","Data":"5b6a4c0fbb52f639b00b943872c63c86cc221d3ba41fe0d38720ea5f5a4d4ab6"} Mar 19 12:37:01.087861 master-0 kubenswrapper[31830]: I0319 12:37:01.085778 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-fd15-account-create-update-bn6tm" event={"ID":"617862b3-8acc-478a-a829-74116f0d4a3d","Type":"ContainerStarted","Data":"83c2b6d970e047f0cbe3cad688f6a2442f0248cfab20efd8c147fea71e37daff"} Mar 19 12:37:01.092060 master-0 kubenswrapper[31830]: I0319 12:37:01.091610 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2lrh7" 
event={"ID":"16baa845-ba2a-42b4-b0a8-3745b32f4f3e","Type":"ContainerStarted","Data":"41680a64ff551eb443dedace1d9017d81651ef0c8b09f3989fcecb0b1193b8cb"} Mar 19 12:37:01.094407 master-0 kubenswrapper[31830]: I0319 12:37:01.094353 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-eb56-account-create-update-6w8tc" event={"ID":"6ab21923-ce35-4200-b1d5-d0d20931131c","Type":"ContainerStarted","Data":"ccd4901cb7f40b867767eb4b65967f7c160f009a63bd042f805a8cc274ee1215"} Mar 19 12:37:01.097689 master-0 kubenswrapper[31830]: I0319 12:37:01.097085 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64b98cb88d-7qp8f" Mar 19 12:37:01.097689 master-0 kubenswrapper[31830]: I0319 12:37:01.097327 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4ebf-account-create-update-kdfvk" event={"ID":"abc7c0ae-fdcf-449d-86d3-51d23d43be6c","Type":"ContainerStarted","Data":"c6864d98e56e75c2b05fbdbcc589365e61373a6916698cf5fad181408914debb"} Mar 19 12:37:01.381861 master-0 kubenswrapper[31830]: I0319 12:37:01.378827 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.335461943 podStartE2EDuration="18.378779996s" podCreationTimestamp="2026-03-19 12:36:43 +0000 UTC" firstStartedPulling="2026-03-19 12:36:44.599762084 +0000 UTC m=+1343.148722788" lastFinishedPulling="2026-03-19 12:36:59.643080137 +0000 UTC m=+1358.192040841" observedRunningTime="2026-03-19 12:37:01.377860197 +0000 UTC m=+1359.926820931" watchObservedRunningTime="2026-03-19 12:37:01.378779996 +0000 UTC m=+1359.927740700" Mar 19 12:37:01.555967 master-0 kubenswrapper[31830]: I0319 12:37:01.555881 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-2lrh7" podStartSLOduration=9.555853957 podStartE2EDuration="9.555853957s" podCreationTimestamp="2026-03-19 12:36:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:37:01.548875691 +0000 UTC m=+1360.097836415" watchObservedRunningTime="2026-03-19 12:37:01.555853957 +0000 UTC m=+1360.104814681" Mar 19 12:37:01.729582 master-0 kubenswrapper[31830]: I0319 12:37:01.729496 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-4ebf-account-create-update-kdfvk" podStartSLOduration=9.729479361 podStartE2EDuration="9.729479361s" podCreationTimestamp="2026-03-19 12:36:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:37:01.728420099 +0000 UTC m=+1360.277380803" watchObservedRunningTime="2026-03-19 12:37:01.729479361 +0000 UTC m=+1360.278440065" Mar 19 12:37:02.068972 master-0 kubenswrapper[31830]: I0319 12:37:02.068872 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-fd15-account-create-update-bn6tm" podStartSLOduration=9.068854046 podStartE2EDuration="9.068854046s" podCreationTimestamp="2026-03-19 12:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:37:02.060848548 +0000 UTC m=+1360.609809252" watchObservedRunningTime="2026-03-19 12:37:02.068854046 +0000 UTC m=+1360.617814750" Mar 19 12:37:02.108956 master-0 kubenswrapper[31830]: I0319 12:37:02.108888 31830 generic.go:334] "Generic 
(PLEG): container finished" podID="abc7c0ae-fdcf-449d-86d3-51d23d43be6c" containerID="c6864d98e56e75c2b05fbdbcc589365e61373a6916698cf5fad181408914debb" exitCode=0 Mar 19 12:37:02.109547 master-0 kubenswrapper[31830]: I0319 12:37:02.109002 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4ebf-account-create-update-kdfvk" event={"ID":"abc7c0ae-fdcf-449d-86d3-51d23d43be6c","Type":"ContainerDied","Data":"c6864d98e56e75c2b05fbdbcc589365e61373a6916698cf5fad181408914debb"} Mar 19 12:37:02.111311 master-0 kubenswrapper[31830]: I0319 12:37:02.111233 31830 generic.go:334] "Generic (PLEG): container finished" podID="baac9526-2845-4c1c-8a75-3ed2dcc2f3fb" containerID="e49918a5328c9e8fc609ee5bfdeb6285aab3f4a17f79dd20701d65d315b463d2" exitCode=0 Mar 19 12:37:02.111404 master-0 kubenswrapper[31830]: I0319 12:37:02.111360 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-n2tdc" event={"ID":"baac9526-2845-4c1c-8a75-3ed2dcc2f3fb","Type":"ContainerDied","Data":"e49918a5328c9e8fc609ee5bfdeb6285aab3f4a17f79dd20701d65d315b463d2"} Mar 19 12:37:02.113959 master-0 kubenswrapper[31830]: I0319 12:37:02.113931 31830 generic.go:334] "Generic (PLEG): container finished" podID="2a04e9d8-ceee-449a-9e77-ebbe7b230aa7" containerID="5b6a4c0fbb52f639b00b943872c63c86cc221d3ba41fe0d38720ea5f5a4d4ab6" exitCode=0 Mar 19 12:37:02.114061 master-0 kubenswrapper[31830]: I0319 12:37:02.114001 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-q72gc" event={"ID":"2a04e9d8-ceee-449a-9e77-ebbe7b230aa7","Type":"ContainerDied","Data":"5b6a4c0fbb52f639b00b943872c63c86cc221d3ba41fe0d38720ea5f5a4d4ab6"} Mar 19 12:37:02.115785 master-0 kubenswrapper[31830]: I0319 12:37:02.115751 31830 generic.go:334] "Generic (PLEG): container finished" podID="617862b3-8acc-478a-a829-74116f0d4a3d" containerID="83c2b6d970e047f0cbe3cad688f6a2442f0248cfab20efd8c147fea71e37daff" exitCode=0 Mar 19 12:37:02.115938 master-0 kubenswrapper[31830]: I0319 12:37:02.115828 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-fd15-account-create-update-bn6tm" event={"ID":"617862b3-8acc-478a-a829-74116f0d4a3d","Type":"ContainerDied","Data":"83c2b6d970e047f0cbe3cad688f6a2442f0248cfab20efd8c147fea71e37daff"} Mar 19 12:37:02.117842 master-0 kubenswrapper[31830]: I0319 12:37:02.117787 31830 generic.go:334] "Generic (PLEG): container finished" podID="16baa845-ba2a-42b4-b0a8-3745b32f4f3e" containerID="41680a64ff551eb443dedace1d9017d81651ef0c8b09f3989fcecb0b1193b8cb" exitCode=0 Mar 19 12:37:02.117935 master-0 kubenswrapper[31830]: I0319 12:37:02.117885 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2lrh7" event={"ID":"16baa845-ba2a-42b4-b0a8-3745b32f4f3e","Type":"ContainerDied","Data":"41680a64ff551eb443dedace1d9017d81651ef0c8b09f3989fcecb0b1193b8cb"} Mar 19 12:37:02.120079 master-0 kubenswrapper[31830]: I0319 12:37:02.120052 31830 generic.go:334] "Generic (PLEG): container finished" podID="6ab21923-ce35-4200-b1d5-d0d20931131c" containerID="ccd4901cb7f40b867767eb4b65967f7c160f009a63bd042f805a8cc274ee1215" exitCode=0 Mar 19 12:37:02.120178 master-0 kubenswrapper[31830]: I0319 12:37:02.120138 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-eb56-account-create-update-6w8tc" event={"ID":"6ab21923-ce35-4200-b1d5-d0d20931131c","Type":"ContainerDied","Data":"ccd4901cb7f40b867767eb4b65967f7c160f009a63bd042f805a8cc274ee1215"} Mar 19 12:37:02.706881 master-0 
kubenswrapper[31830]: I0319 12:37:02.706717 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-eb56-account-create-update-6w8tc" podStartSLOduration=10.706681197 podStartE2EDuration="10.706681197s" podCreationTimestamp="2026-03-19 12:36:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:37:02.706491401 +0000 UTC m=+1361.255452105" watchObservedRunningTime="2026-03-19 12:37:02.706681197 +0000 UTC m=+1361.255641901" Mar 19 12:37:02.988786 master-0 kubenswrapper[31830]: I0319 12:37:02.988683 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-n2tdc" podStartSLOduration=10.988659733 podStartE2EDuration="10.988659733s" podCreationTimestamp="2026-03-19 12:36:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:37:02.984659658 +0000 UTC m=+1361.533620362" watchObservedRunningTime="2026-03-19 12:37:02.988659733 +0000 UTC m=+1361.537620437" Mar 19 12:37:03.385892 master-0 kubenswrapper[31830]: I0319 12:37:03.383769 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-q72gc" podStartSLOduration=11.383748264 podStartE2EDuration="11.383748264s" podCreationTimestamp="2026-03-19 12:36:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:37:03.335350814 +0000 UTC m=+1361.884311538" watchObservedRunningTime="2026-03-19 12:37:03.383748264 +0000 UTC m=+1361.932708968" Mar 19 12:37:03.439116 master-0 kubenswrapper[31830]: I0319 12:37:03.437490 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-64b98cb88d-7qp8f"] Mar 19 12:37:03.538434 master-0 kubenswrapper[31830]: I0319 12:37:03.538356 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-64b98cb88d-7qp8f"] Mar 19 12:37:03.692674 master-0 kubenswrapper[31830]: I0319 12:37:03.692605 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f63713e2-7d18-4053-b79c-86ab7b8e1e57" path="/var/lib/kubelet/pods/f63713e2-7d18-4053-b79c-86ab7b8e1e57/volumes" Mar 19 12:37:04.011920 master-0 kubenswrapper[31830]: I0319 12:37:04.011880 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-2lrh7" Mar 19 12:37:04.140566 master-0 kubenswrapper[31830]: I0319 12:37:04.137399 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj2p9\" (UniqueName: \"kubernetes.io/projected/16baa845-ba2a-42b4-b0a8-3745b32f4f3e-kube-api-access-pj2p9\") pod \"16baa845-ba2a-42b4-b0a8-3745b32f4f3e\" (UID: \"16baa845-ba2a-42b4-b0a8-3745b32f4f3e\") " Mar 19 12:37:04.140566 master-0 kubenswrapper[31830]: I0319 12:37:04.137713 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16baa845-ba2a-42b4-b0a8-3745b32f4f3e-operator-scripts\") pod \"16baa845-ba2a-42b4-b0a8-3745b32f4f3e\" (UID: \"16baa845-ba2a-42b4-b0a8-3745b32f4f3e\") " Mar 19 12:37:04.140566 master-0 kubenswrapper[31830]: I0319 12:37:04.139765 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16baa845-ba2a-42b4-b0a8-3745b32f4f3e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "16baa845-ba2a-42b4-b0a8-3745b32f4f3e" (UID: "16baa845-ba2a-42b4-b0a8-3745b32f4f3e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:37:04.148374 master-0 kubenswrapper[31830]: I0319 12:37:04.148295 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16baa845-ba2a-42b4-b0a8-3745b32f4f3e-kube-api-access-pj2p9" (OuterVolumeSpecName: "kube-api-access-pj2p9") pod "16baa845-ba2a-42b4-b0a8-3745b32f4f3e" (UID: "16baa845-ba2a-42b4-b0a8-3745b32f4f3e"). InnerVolumeSpecName "kube-api-access-pj2p9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:37:04.160677 master-0 kubenswrapper[31830]: I0319 12:37:04.160622 31830 generic.go:334] "Generic (PLEG): container finished" podID="3e4ccffc-3539-4e3a-b507-3fa51250d5a6" containerID="b69e125e42a299fc4c60cee3320600e1ac8a82dfc8fed27137eaceadec37c002" exitCode=0 Mar 19 12:37:04.160775 master-0 kubenswrapper[31830]: I0319 12:37:04.160731 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f4e38-default-internal-api-0" event={"ID":"3e4ccffc-3539-4e3a-b507-3fa51250d5a6","Type":"ContainerDied","Data":"b69e125e42a299fc4c60cee3320600e1ac8a82dfc8fed27137eaceadec37c002"} Mar 19 12:37:04.164528 master-0 kubenswrapper[31830]: I0319 12:37:04.163614 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-q72gc" event={"ID":"2a04e9d8-ceee-449a-9e77-ebbe7b230aa7","Type":"ContainerDied","Data":"d2df703a759faf0500ab804a7957eba7df5693b451232f3ed445ae89bc763bf4"} Mar 19 12:37:04.164528 master-0 kubenswrapper[31830]: I0319 12:37:04.163663 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2df703a759faf0500ab804a7957eba7df5693b451232f3ed445ae89bc763bf4" Mar 19 12:37:04.165605 master-0 kubenswrapper[31830]: I0319 12:37:04.165544 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-fd15-account-create-update-bn6tm" event={"ID":"617862b3-8acc-478a-a829-74116f0d4a3d","Type":"ContainerDied","Data":"0817f7585f192ef94707c836ca2c96999c7ceb9c6541c5ea996aa1d537c6bc99"} Mar 19 12:37:04.165605 master-0 kubenswrapper[31830]: I0319 12:37:04.165575 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0817f7585f192ef94707c836ca2c96999c7ceb9c6541c5ea996aa1d537c6bc99" Mar 19 12:37:04.168779 master-0 kubenswrapper[31830]: I0319 
12:37:04.168732 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2lrh7" event={"ID":"16baa845-ba2a-42b4-b0a8-3745b32f4f3e","Type":"ContainerDied","Data":"2468fa93dd924f5ac542a6bd945e1e9b89eb80e148c0dddd1e8409760cdc715c"} Mar 19 12:37:04.168779 master-0 kubenswrapper[31830]: I0319 12:37:04.168780 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2468fa93dd924f5ac542a6bd945e1e9b89eb80e148c0dddd1e8409760cdc715c" Mar 19 12:37:04.169097 master-0 kubenswrapper[31830]: I0319 12:37:04.168882 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2lrh7" Mar 19 12:37:04.176334 master-0 kubenswrapper[31830]: I0319 12:37:04.176188 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-eb56-account-create-update-6w8tc" event={"ID":"6ab21923-ce35-4200-b1d5-d0d20931131c","Type":"ContainerDied","Data":"bd85dcee1fc4f15ce257f19fd7890b57931c57e2bfe3c8943f9fc8986becd004"} Mar 19 12:37:04.176334 master-0 kubenswrapper[31830]: I0319 12:37:04.176245 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd85dcee1fc4f15ce257f19fd7890b57931c57e2bfe3c8943f9fc8986becd004" Mar 19 12:37:04.182784 master-0 kubenswrapper[31830]: I0319 12:37:04.182475 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4ebf-account-create-update-kdfvk" event={"ID":"abc7c0ae-fdcf-449d-86d3-51d23d43be6c","Type":"ContainerDied","Data":"026352c205a5e4d6c3973f1a5721311fc976841b6899d63c0ec1231e396beba3"} Mar 19 12:37:04.182784 master-0 kubenswrapper[31830]: I0319 12:37:04.182519 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="026352c205a5e4d6c3973f1a5721311fc976841b6899d63c0ec1231e396beba3" Mar 19 12:37:04.185308 master-0 kubenswrapper[31830]: I0319 12:37:04.185259 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-n2tdc" event={"ID":"baac9526-2845-4c1c-8a75-3ed2dcc2f3fb","Type":"ContainerDied","Data":"5ccad972b632763d788fa228cadd37ae558a0cb38a9d15b5a8ca931ca2d69d72"} Mar 19 12:37:04.185308 master-0 kubenswrapper[31830]: I0319 12:37:04.185296 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ccad972b632763d788fa228cadd37ae558a0cb38a9d15b5a8ca931ca2d69d72" Mar 19 12:37:04.209140 master-0 kubenswrapper[31830]: I0319 12:37:04.207551 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-eb56-account-create-update-6w8tc" Mar 19 12:37:04.221034 master-0 kubenswrapper[31830]: I0319 12:37:04.220728 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-q72gc" Mar 19 12:37:04.231115 master-0 kubenswrapper[31830]: I0319 12:37:04.230564 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-4ebf-account-create-update-kdfvk" Mar 19 12:37:04.242537 master-0 kubenswrapper[31830]: I0319 12:37:04.242001 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj2p9\" (UniqueName: \"kubernetes.io/projected/16baa845-ba2a-42b4-b0a8-3745b32f4f3e-kube-api-access-pj2p9\") on node \"master-0\" DevicePath \"\"" Mar 19 12:37:04.242537 master-0 kubenswrapper[31830]: I0319 12:37:04.242056 31830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16baa845-ba2a-42b4-b0a8-3745b32f4f3e-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 12:37:04.246426 master-0 kubenswrapper[31830]: I0319 12:37:04.246384 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-fd15-account-create-update-bn6tm" Mar 19 12:37:04.257206 master-0 kubenswrapper[31830]: I0319 12:37:04.255888 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-n2tdc" Mar 19 12:37:04.343619 master-0 kubenswrapper[31830]: I0319 12:37:04.343548 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qthmk\" (UniqueName: \"kubernetes.io/projected/2a04e9d8-ceee-449a-9e77-ebbe7b230aa7-kube-api-access-qthmk\") pod \"2a04e9d8-ceee-449a-9e77-ebbe7b230aa7\" (UID: \"2a04e9d8-ceee-449a-9e77-ebbe7b230aa7\") " Mar 19 12:37:04.344038 master-0 kubenswrapper[31830]: I0319 12:37:04.343654 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ab21923-ce35-4200-b1d5-d0d20931131c-operator-scripts\") pod \"6ab21923-ce35-4200-b1d5-d0d20931131c\" (UID: \"6ab21923-ce35-4200-b1d5-d0d20931131c\") " Mar 19 12:37:04.344038 master-0 kubenswrapper[31830]: I0319 12:37:04.343675 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms7sp\" (UniqueName: \"kubernetes.io/projected/6ab21923-ce35-4200-b1d5-d0d20931131c-kube-api-access-ms7sp\") pod \"6ab21923-ce35-4200-b1d5-d0d20931131c\" (UID: \"6ab21923-ce35-4200-b1d5-d0d20931131c\") " Mar 19 12:37:04.344038 master-0 kubenswrapper[31830]: I0319 12:37:04.343741 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abc7c0ae-fdcf-449d-86d3-51d23d43be6c-operator-scripts\") pod \"abc7c0ae-fdcf-449d-86d3-51d23d43be6c\" (UID: \"abc7c0ae-fdcf-449d-86d3-51d23d43be6c\") " Mar 19 12:37:04.344038 master-0 kubenswrapper[31830]: I0319 12:37:04.343775 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/617862b3-8acc-478a-a829-74116f0d4a3d-operator-scripts\") pod \"617862b3-8acc-478a-a829-74116f0d4a3d\" (UID: \"617862b3-8acc-478a-a829-74116f0d4a3d\") " Mar 19 12:37:04.344038 master-0 kubenswrapper[31830]: I0319 12:37:04.343845 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8cjw\" (UniqueName: \"kubernetes.io/projected/abc7c0ae-fdcf-449d-86d3-51d23d43be6c-kube-api-access-x8cjw\") pod \"abc7c0ae-fdcf-449d-86d3-51d23d43be6c\" (UID: \"abc7c0ae-fdcf-449d-86d3-51d23d43be6c\") " Mar 19 12:37:04.344038 master-0 kubenswrapper[31830]: I0319 12:37:04.343875 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b65v6\" (UniqueName: 
\"kubernetes.io/projected/617862b3-8acc-478a-a829-74116f0d4a3d-kube-api-access-b65v6\") pod \"617862b3-8acc-478a-a829-74116f0d4a3d\" (UID: \"617862b3-8acc-478a-a829-74116f0d4a3d\") " Mar 19 12:37:04.344317 master-0 kubenswrapper[31830]: I0319 12:37:04.344065 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a04e9d8-ceee-449a-9e77-ebbe7b230aa7-operator-scripts\") pod \"2a04e9d8-ceee-449a-9e77-ebbe7b230aa7\" (UID: \"2a04e9d8-ceee-449a-9e77-ebbe7b230aa7\") " Mar 19 12:37:04.345355 master-0 kubenswrapper[31830]: I0319 12:37:04.344950 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a04e9d8-ceee-449a-9e77-ebbe7b230aa7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2a04e9d8-ceee-449a-9e77-ebbe7b230aa7" (UID: "2a04e9d8-ceee-449a-9e77-ebbe7b230aa7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:37:04.345864 master-0 kubenswrapper[31830]: I0319 12:37:04.345672 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abc7c0ae-fdcf-449d-86d3-51d23d43be6c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "abc7c0ae-fdcf-449d-86d3-51d23d43be6c" (UID: "abc7c0ae-fdcf-449d-86d3-51d23d43be6c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:37:04.346550 master-0 kubenswrapper[31830]: I0319 12:37:04.346093 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ab21923-ce35-4200-b1d5-d0d20931131c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6ab21923-ce35-4200-b1d5-d0d20931131c" (UID: "6ab21923-ce35-4200-b1d5-d0d20931131c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:37:04.348746 master-0 kubenswrapper[31830]: I0319 12:37:04.346853 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/617862b3-8acc-478a-a829-74116f0d4a3d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "617862b3-8acc-478a-a829-74116f0d4a3d" (UID: "617862b3-8acc-478a-a829-74116f0d4a3d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:37:04.348746 master-0 kubenswrapper[31830]: I0319 12:37:04.348083 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a04e9d8-ceee-449a-9e77-ebbe7b230aa7-kube-api-access-qthmk" (OuterVolumeSpecName: "kube-api-access-qthmk") pod "2a04e9d8-ceee-449a-9e77-ebbe7b230aa7" (UID: "2a04e9d8-ceee-449a-9e77-ebbe7b230aa7"). InnerVolumeSpecName "kube-api-access-qthmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:37:04.350292 master-0 kubenswrapper[31830]: I0319 12:37:04.349498 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ab21923-ce35-4200-b1d5-d0d20931131c-kube-api-access-ms7sp" (OuterVolumeSpecName: "kube-api-access-ms7sp") pod "6ab21923-ce35-4200-b1d5-d0d20931131c" (UID: "6ab21923-ce35-4200-b1d5-d0d20931131c"). InnerVolumeSpecName "kube-api-access-ms7sp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:37:04.350388 master-0 kubenswrapper[31830]: I0319 12:37:04.350318 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/617862b3-8acc-478a-a829-74116f0d4a3d-kube-api-access-b65v6" (OuterVolumeSpecName: "kube-api-access-b65v6") pod "617862b3-8acc-478a-a829-74116f0d4a3d" (UID: "617862b3-8acc-478a-a829-74116f0d4a3d"). InnerVolumeSpecName "kube-api-access-b65v6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:37:04.351382 master-0 kubenswrapper[31830]: I0319 12:37:04.350558 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abc7c0ae-fdcf-449d-86d3-51d23d43be6c-kube-api-access-x8cjw" (OuterVolumeSpecName: "kube-api-access-x8cjw") pod "abc7c0ae-fdcf-449d-86d3-51d23d43be6c" (UID: "abc7c0ae-fdcf-449d-86d3-51d23d43be6c"). InnerVolumeSpecName "kube-api-access-x8cjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:37:04.446642 master-0 kubenswrapper[31830]: I0319 12:37:04.445870 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f54cs\" (UniqueName: \"kubernetes.io/projected/baac9526-2845-4c1c-8a75-3ed2dcc2f3fb-kube-api-access-f54cs\") pod \"baac9526-2845-4c1c-8a75-3ed2dcc2f3fb\" (UID: \"baac9526-2845-4c1c-8a75-3ed2dcc2f3fb\") " Mar 19 12:37:04.446642 master-0 kubenswrapper[31830]: I0319 12:37:04.446203 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baac9526-2845-4c1c-8a75-3ed2dcc2f3fb-operator-scripts\") pod \"baac9526-2845-4c1c-8a75-3ed2dcc2f3fb\" (UID: \"baac9526-2845-4c1c-8a75-3ed2dcc2f3fb\") " Mar 19 12:37:04.447267 master-0 kubenswrapper[31830]: I0319 12:37:04.447096 31830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a04e9d8-ceee-449a-9e77-ebbe7b230aa7-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 12:37:04.447267 master-0 kubenswrapper[31830]: I0319 12:37:04.447121 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qthmk\" (UniqueName: \"kubernetes.io/projected/2a04e9d8-ceee-449a-9e77-ebbe7b230aa7-kube-api-access-qthmk\") on node \"master-0\" DevicePath \"\"" Mar 19 12:37:04.447267 master-0 kubenswrapper[31830]: I0319 12:37:04.447137 31830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ab21923-ce35-4200-b1d5-d0d20931131c-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 12:37:04.447267 master-0 kubenswrapper[31830]: I0319 12:37:04.447149 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms7sp\" (UniqueName: \"kubernetes.io/projected/6ab21923-ce35-4200-b1d5-d0d20931131c-kube-api-access-ms7sp\") on node \"master-0\" DevicePath \"\"" Mar 19 12:37:04.447267 master-0 kubenswrapper[31830]: I0319 12:37:04.447161 31830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abc7c0ae-fdcf-449d-86d3-51d23d43be6c-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 12:37:04.447267 master-0 kubenswrapper[31830]: I0319 12:37:04.447172 31830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/617862b3-8acc-478a-a829-74116f0d4a3d-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 12:37:04.447267 
master-0 kubenswrapper[31830]: I0319 12:37:04.447183 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8cjw\" (UniqueName: \"kubernetes.io/projected/abc7c0ae-fdcf-449d-86d3-51d23d43be6c-kube-api-access-x8cjw\") on node \"master-0\" DevicePath \"\"" Mar 19 12:37:04.447267 master-0 kubenswrapper[31830]: I0319 12:37:04.447197 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b65v6\" (UniqueName: \"kubernetes.io/projected/617862b3-8acc-478a-a829-74116f0d4a3d-kube-api-access-b65v6\") on node \"master-0\" DevicePath \"\"" Mar 19 12:37:04.448173 master-0 kubenswrapper[31830]: I0319 12:37:04.447699 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/baac9526-2845-4c1c-8a75-3ed2dcc2f3fb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "baac9526-2845-4c1c-8a75-3ed2dcc2f3fb" (UID: "baac9526-2845-4c1c-8a75-3ed2dcc2f3fb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:37:04.451741 master-0 kubenswrapper[31830]: I0319 12:37:04.450020 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baac9526-2845-4c1c-8a75-3ed2dcc2f3fb-kube-api-access-f54cs" (OuterVolumeSpecName: "kube-api-access-f54cs") pod "baac9526-2845-4c1c-8a75-3ed2dcc2f3fb" (UID: "baac9526-2845-4c1c-8a75-3ed2dcc2f3fb"). InnerVolumeSpecName "kube-api-access-f54cs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:37:04.542556 master-0 kubenswrapper[31830]: I0319 12:37:04.541887 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-f4e38-default-external-api-0"] Mar 19 12:37:04.542556 master-0 kubenswrapper[31830]: I0319 12:37:04.542177 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-f4e38-default-external-api-0" podUID="2df405a8-816c-4e6f-a3a1-fb4e350d0188" containerName="glance-log" containerID="cri-o://540715b44bf4f1fb44c15bd059ddd42ec745d87db740c1bcad0d1b93567610e4" gracePeriod=30 Mar 19 12:37:04.542556 master-0 kubenswrapper[31830]: I0319 12:37:04.542318 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-f4e38-default-external-api-0" podUID="2df405a8-816c-4e6f-a3a1-fb4e350d0188" containerName="glance-httpd" containerID="cri-o://882becff49bfa95bb73cd4b31ade5255291e0c70a9eb13fc2dda996634ebf04f" gracePeriod=30 Mar 19 12:37:04.549620 master-0 kubenswrapper[31830]: I0319 12:37:04.549499 31830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baac9526-2845-4c1c-8a75-3ed2dcc2f3fb-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 12:37:04.549620 master-0 kubenswrapper[31830]: I0319 12:37:04.549538 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f54cs\" (UniqueName: \"kubernetes.io/projected/baac9526-2845-4c1c-8a75-3ed2dcc2f3fb-kube-api-access-f54cs\") on node \"master-0\" DevicePath \"\"" Mar 19 12:37:05.199535 master-0 kubenswrapper[31830]: I0319 12:37:05.199371 31830 generic.go:334] "Generic (PLEG): container finished" podID="2df405a8-816c-4e6f-a3a1-fb4e350d0188" containerID="540715b44bf4f1fb44c15bd059ddd42ec745d87db740c1bcad0d1b93567610e4" exitCode=143 Mar 19 12:37:05.199918 master-0 kubenswrapper[31830]: I0319 12:37:05.199537 31830 util.go:48] "No ready sandbox for pod can be found. 
Mar 19 12:37:05.199918 master-0 kubenswrapper[31830]: I0319 12:37:05.199537 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4ebf-account-create-update-kdfvk"
Mar 19 12:37:05.202007 master-0 kubenswrapper[31830]: I0319 12:37:05.200135 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-fd15-account-create-update-bn6tm"
Mar 19 12:37:05.202007 master-0 kubenswrapper[31830]: I0319 12:37:05.200187 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-q72gc"
Mar 19 12:37:05.202007 master-0 kubenswrapper[31830]: I0319 12:37:05.200228 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f4e38-default-external-api-0" event={"ID":"2df405a8-816c-4e6f-a3a1-fb4e350d0188","Type":"ContainerDied","Data":"540715b44bf4f1fb44c15bd059ddd42ec745d87db740c1bcad0d1b93567610e4"}
Mar 19 12:37:05.202007 master-0 kubenswrapper[31830]: I0319 12:37:05.200286 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-n2tdc"
Mar 19 12:37:05.202007 master-0 kubenswrapper[31830]: I0319 12:37:05.200150 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-eb56-account-create-update-6w8tc"
Mar 19 12:37:07.049539 master-0 kubenswrapper[31830]: I0319 12:37:07.049481 31830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-f4e38-default-internal-api-0" podUID="3e4ccffc-3539-4e3a-b507-3fa51250d5a6" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.128.0.215:9292/healthcheck\": dial tcp 10.128.0.215:9292: connect: connection refused"
Mar 19 12:37:07.050445 master-0 kubenswrapper[31830]: I0319 12:37:07.049495 31830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-f4e38-default-internal-api-0" podUID="3e4ccffc-3539-4e3a-b507-3fa51250d5a6" containerName="glance-log" probeResult="failure" output="Get \"https://10.128.0.215:9292/healthcheck\": dial tcp 10.128.0.215:9292: connect: connection refused"
Mar 19 12:37:07.693253 master-0 kubenswrapper[31830]: I0319 12:37:07.693169 31830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-f4e38-default-external-api-0" podUID="2df405a8-816c-4e6f-a3a1-fb4e350d0188" containerName="glance-log" probeResult="failure" output="Get \"https://10.128.0.214:9292/healthcheck\": read tcp 10.128.0.2:54418->10.128.0.214:9292: read: connection reset by peer"
Mar 19 12:37:07.693515 master-0 kubenswrapper[31830]: I0319 12:37:07.693368 31830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-f4e38-default-external-api-0" podUID="2df405a8-816c-4e6f-a3a1-fb4e350d0188" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.128.0.214:9292/healthcheck\": read tcp 10.128.0.2:54420->10.128.0.214:9292: read: connection reset by peer"
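[Editor's note] The four readiness-probe failures above are the expected signature of pods being torn down, not a service fault: "connection refused" means nothing is listening on 10.128.0.215:9292 any more, while "connection reset by peer" means the server on 10.128.0.214 dropped the connection mid-request during shutdown. A short tally to separate one-off failures like these from persistent ones (the filename is again an assumption):

```python
import re
from collections import Counter

# Count readiness-probe failures per pod/container in a saved excerpt.
pat = re.compile(r'"Probe failed".*?pod="([^"]+)".*?containerName="([^"]+)"')
counts = Counter()
with open("kubelet.log") as fh:
    for line in fh:
        for pod, container in pat.findall(line):
            counts[(pod, container)] += 1

for (pod, container), n in counts.most_common():
    print(f"{n:3d}  {pod}/{container}")
```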
Mar 19 12:37:07.830772 master-0 kubenswrapper[31830]: I0319 12:37:07.830711 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f4e38-default-internal-api-0"
Mar 19 12:37:07.836937 master-0 kubenswrapper[31830]: I0319 12:37:07.836865 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-config-data\") pod \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\" (UID: \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\") "
Mar 19 12:37:07.837365 master-0 kubenswrapper[31830]: I0319 12:37:07.837341 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-logs\") pod \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\" (UID: \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\") "
Mar 19 12:37:07.837437 master-0 kubenswrapper[31830]: I0319 12:37:07.837397 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dz7k\" (UniqueName: \"kubernetes.io/projected/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-kube-api-access-4dz7k\") pod \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\" (UID: \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\") "
Mar 19 12:37:07.837437 master-0 kubenswrapper[31830]: I0319 12:37:07.837432 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-scripts\") pod \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\" (UID: \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\") "
Mar 19 12:37:07.837531 master-0 kubenswrapper[31830]: I0319 12:37:07.837496 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-httpd-run\") pod \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\" (UID: \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\") "
Mar 19 12:37:07.837992 master-0 kubenswrapper[31830]: I0319 12:37:07.837945 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-logs" (OuterVolumeSpecName: "logs") pod "3e4ccffc-3539-4e3a-b507-3fa51250d5a6" (UID: "3e4ccffc-3539-4e3a-b507-3fa51250d5a6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 19 12:37:07.838412 master-0 kubenswrapper[31830]: I0319 12:37:07.838366 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3e4ccffc-3539-4e3a-b507-3fa51250d5a6" (UID: "3e4ccffc-3539-4e3a-b507-3fa51250d5a6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 19 12:37:07.841722 master-0 kubenswrapper[31830]: I0319 12:37:07.841682 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-scripts" (OuterVolumeSpecName: "scripts") pod "3e4ccffc-3539-4e3a-b507-3fa51250d5a6" (UID: "3e4ccffc-3539-4e3a-b507-3fa51250d5a6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 12:37:07.842516 master-0 kubenswrapper[31830]: I0319 12:37:07.842423 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-kube-api-access-4dz7k" (OuterVolumeSpecName: "kube-api-access-4dz7k") pod "3e4ccffc-3539-4e3a-b507-3fa51250d5a6" (UID: "3e4ccffc-3539-4e3a-b507-3fa51250d5a6"). InnerVolumeSpecName "kube-api-access-4dz7k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 12:37:07.849973 master-0 kubenswrapper[31830]: I0319 12:37:07.849924 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^19328693-1987-4217-9b35-24f3b480bfc3\") pod \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\" (UID: \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\") "
Mar 19 12:37:07.851313 master-0 kubenswrapper[31830]: I0319 12:37:07.850159 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-internal-tls-certs\") pod \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\" (UID: \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\") "
Mar 19 12:37:07.851313 master-0 kubenswrapper[31830]: I0319 12:37:07.850201 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-combined-ca-bundle\") pod \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\" (UID: \"3e4ccffc-3539-4e3a-b507-3fa51250d5a6\") "
Mar 19 12:37:07.851313 master-0 kubenswrapper[31830]: I0319 12:37:07.851036 31830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-logs\") on node \"master-0\" DevicePath \"\""
Mar 19 12:37:07.851313 master-0 kubenswrapper[31830]: I0319 12:37:07.851058 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dz7k\" (UniqueName: \"kubernetes.io/projected/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-kube-api-access-4dz7k\") on node \"master-0\" DevicePath \"\""
Mar 19 12:37:07.851313 master-0 kubenswrapper[31830]: I0319 12:37:07.851068 31830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-scripts\") on node \"master-0\" DevicePath \"\""
Mar 19 12:37:07.851313 master-0 kubenswrapper[31830]: I0319 12:37:07.851076 31830 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-httpd-run\") on node \"master-0\" DevicePath \"\""
Mar 19 12:37:07.878277 master-0 kubenswrapper[31830]: I0319 12:37:07.878181 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e4ccffc-3539-4e3a-b507-3fa51250d5a6" (UID: "3e4ccffc-3539-4e3a-b507-3fa51250d5a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 12:37:07.904207 master-0 kubenswrapper[31830]: I0319 12:37:07.904134 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-config-data" (OuterVolumeSpecName: "config-data") pod "3e4ccffc-3539-4e3a-b507-3fa51250d5a6" (UID: "3e4ccffc-3539-4e3a-b507-3fa51250d5a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 12:37:07.920228 master-0 kubenswrapper[31830]: I0319 12:37:07.920157 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3e4ccffc-3539-4e3a-b507-3fa51250d5a6" (UID: "3e4ccffc-3539-4e3a-b507-3fa51250d5a6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 12:37:07.955056 master-0 kubenswrapper[31830]: I0319 12:37:07.954989 31830 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-internal-tls-certs\") on node \"master-0\" DevicePath \"\""
Mar 19 12:37:07.955056 master-0 kubenswrapper[31830]: I0319 12:37:07.955055 31830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 19 12:37:07.955248 master-0 kubenswrapper[31830]: I0319 12:37:07.955071 31830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e4ccffc-3539-4e3a-b507-3fa51250d5a6-config-data\") on node \"master-0\" DevicePath \"\""
Mar 19 12:37:07.966456 master-0 kubenswrapper[31830]: I0319 12:37:07.966400 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^19328693-1987-4217-9b35-24f3b480bfc3" (OuterVolumeSpecName: "glance") pod "3e4ccffc-3539-4e3a-b507-3fa51250d5a6" (UID: "3e4ccffc-3539-4e3a-b507-3fa51250d5a6"). InnerVolumeSpecName "pvc-a65517da-f83f-4270-b394-d7175eb38204". PluginName "kubernetes.io/csi", VolumeGidValue ""
Mar 19 12:37:08.058352 master-0 kubenswrapper[31830]: I0319 12:37:08.058259 31830 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a65517da-f83f-4270-b394-d7175eb38204\" (UniqueName: \"kubernetes.io/csi/topolvm.io^19328693-1987-4217-9b35-24f3b480bfc3\") on node \"master-0\" "
Mar 19 12:37:08.215356 master-0 kubenswrapper[31830]: I0319 12:37:08.215306 31830 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Mar 19 12:37:08.215601 master-0 kubenswrapper[31830]: I0319 12:37:08.215571 31830 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a65517da-f83f-4270-b394-d7175eb38204" (UniqueName: "kubernetes.io/csi/topolvm.io^19328693-1987-4217-9b35-24f3b480bfc3") on node "master-0"
Mar 19 12:37:08.246946 master-0 kubenswrapper[31830]: I0319 12:37:08.246773 31830 generic.go:334] "Generic (PLEG): container finished" podID="2df405a8-816c-4e6f-a3a1-fb4e350d0188" containerID="882becff49bfa95bb73cd4b31ade5255291e0c70a9eb13fc2dda996634ebf04f" exitCode=0
Mar 19 12:37:08.246946 master-0 kubenswrapper[31830]: I0319 12:37:08.246873 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f4e38-default-external-api-0" event={"ID":"2df405a8-816c-4e6f-a3a1-fb4e350d0188","Type":"ContainerDied","Data":"882becff49bfa95bb73cd4b31ade5255291e0c70a9eb13fc2dda996634ebf04f"}
Mar 19 12:37:08.249248 master-0 kubenswrapper[31830]: I0319 12:37:08.249212 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f4e38-default-internal-api-0" event={"ID":"3e4ccffc-3539-4e3a-b507-3fa51250d5a6","Type":"ContainerDied","Data":"0f46d1a142f064c51b50aa3c3425107fb12424d7093b89808c1ed6c81745ca5c"}
Mar 19 12:37:08.249335 master-0 kubenswrapper[31830]: I0319 12:37:08.249259 31830 scope.go:117] "RemoveContainer" containerID="b69e125e42a299fc4c60cee3320600e1ac8a82dfc8fed27137eaceadec37c002"
Mar 19 12:37:08.249462 master-0 kubenswrapper[31830]: I0319 12:37:08.249435 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f4e38-default-internal-api-0"
Mar 19 12:37:08.265641 master-0 kubenswrapper[31830]: I0319 12:37:08.265604 31830 reconciler_common.go:293] "Volume detached for volume \"pvc-a65517da-f83f-4270-b394-d7175eb38204\" (UniqueName: \"kubernetes.io/csi/topolvm.io^19328693-1987-4217-9b35-24f3b480bfc3\") on node \"master-0\" DevicePath \"\""
Mar 19 12:37:08.325152 master-0 kubenswrapper[31830]: I0319 12:37:08.324568 31830 scope.go:117] "RemoveContainer" containerID="a662b6953e099e8046c0f19e2f43fe2830ffd4d8ed5268bb5b5772d761645370"
Mar 19 12:37:10.871528 master-0 kubenswrapper[31830]: I0319 12:37:10.870936 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-f4e38-default-internal-api-0"]
Mar 19 12:37:11.230827 master-0 kubenswrapper[31830]: I0319 12:37:11.229198 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-f4e38-default-internal-api-0"]
Mar 19 12:37:11.695630 master-0 kubenswrapper[31830]: I0319 12:37:11.695456 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e4ccffc-3539-4e3a-b507-3fa51250d5a6" path="/var/lib/kubelet/pods/3e4ccffc-3539-4e3a-b507-3fa51250d5a6/volumes"
Mar 19 12:37:13.138097 master-0 kubenswrapper[31830]: I0319 12:37:13.138049 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-f4e38-default-internal-api-0"]
Mar 19 12:37:13.139324 master-0 kubenswrapper[31830]: E0319 12:37:13.138528 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a04e9d8-ceee-449a-9e77-ebbe7b230aa7" containerName="mariadb-database-create"
Mar 19 12:37:13.139324 master-0 kubenswrapper[31830]: I0319 12:37:13.138540 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a04e9d8-ceee-449a-9e77-ebbe7b230aa7" containerName="mariadb-database-create"
Mar 19 12:37:13.139324 master-0 kubenswrapper[31830]: E0319 12:37:13.138552 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f63713e2-7d18-4053-b79c-86ab7b8e1e57" containerName="neutron-api"
Mar 19 12:37:13.139324 master-0 kubenswrapper[31830]: I0319 12:37:13.138558 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f63713e2-7d18-4053-b79c-86ab7b8e1e57" containerName="neutron-api"
Mar 19 12:37:13.139324 master-0 kubenswrapper[31830]: E0319 12:37:13.138578 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e4ccffc-3539-4e3a-b507-3fa51250d5a6" containerName="glance-log"
Mar 19 12:37:13.139324 master-0 kubenswrapper[31830]: I0319 12:37:13.138584 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e4ccffc-3539-4e3a-b507-3fa51250d5a6" containerName="glance-log"
Mar 19 12:37:13.139324 master-0 kubenswrapper[31830]: E0319 12:37:13.138599 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e4ccffc-3539-4e3a-b507-3fa51250d5a6" containerName="glance-httpd"
Mar 19 12:37:13.139324 master-0 kubenswrapper[31830]: I0319 12:37:13.138605 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e4ccffc-3539-4e3a-b507-3fa51250d5a6" containerName="glance-httpd"
Mar 19 12:37:13.139324 master-0 kubenswrapper[31830]: E0319 12:37:13.138614 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="617862b3-8acc-478a-a829-74116f0d4a3d" containerName="mariadb-account-create-update"
Mar 19 12:37:13.139324 master-0 kubenswrapper[31830]: I0319 12:37:13.138620 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="617862b3-8acc-478a-a829-74116f0d4a3d" containerName="mariadb-account-create-update"
Mar 19 12:37:13.139324 master-0 kubenswrapper[31830]: E0319 12:37:13.138638 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abc7c0ae-fdcf-449d-86d3-51d23d43be6c" containerName="mariadb-account-create-update"
Mar 19 12:37:13.139324 master-0 kubenswrapper[31830]: I0319 12:37:13.138644 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="abc7c0ae-fdcf-449d-86d3-51d23d43be6c" containerName="mariadb-account-create-update"
Mar 19 12:37:13.139324 master-0 kubenswrapper[31830]: E0319 12:37:13.138656 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ab21923-ce35-4200-b1d5-d0d20931131c" containerName="mariadb-account-create-update"
Mar 19 12:37:13.139324 master-0 kubenswrapper[31830]: I0319 12:37:13.138662 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ab21923-ce35-4200-b1d5-d0d20931131c" containerName="mariadb-account-create-update"
Mar 19 12:37:13.139324 master-0 kubenswrapper[31830]: E0319 12:37:13.138675 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baac9526-2845-4c1c-8a75-3ed2dcc2f3fb" containerName="mariadb-database-create"
Mar 19 12:37:13.139324 master-0 kubenswrapper[31830]: I0319 12:37:13.138681 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="baac9526-2845-4c1c-8a75-3ed2dcc2f3fb" containerName="mariadb-database-create"
Mar 19 12:37:13.139324 master-0 kubenswrapper[31830]: E0319 12:37:13.138704 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16baa845-ba2a-42b4-b0a8-3745b32f4f3e" containerName="mariadb-database-create"
Mar 19 12:37:13.139324 master-0 kubenswrapper[31830]: I0319 12:37:13.138712 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="16baa845-ba2a-42b4-b0a8-3745b32f4f3e" containerName="mariadb-database-create"
Mar 19 12:37:13.139324 master-0 kubenswrapper[31830]: E0319 12:37:13.138723 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f63713e2-7d18-4053-b79c-86ab7b8e1e57" containerName="neutron-httpd"
Mar 19 12:37:13.139324 master-0 kubenswrapper[31830]: I0319 12:37:13.138730 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f63713e2-7d18-4053-b79c-86ab7b8e1e57" containerName="neutron-httpd"
Mar 19 12:37:13.139324 master-0 kubenswrapper[31830]: I0319 12:37:13.138948 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="abc7c0ae-fdcf-449d-86d3-51d23d43be6c" containerName="mariadb-account-create-update"
Mar 19 12:37:13.139324 master-0 kubenswrapper[31830]: I0319 12:37:13.138969 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ab21923-ce35-4200-b1d5-d0d20931131c" containerName="mariadb-account-create-update"
Mar 19 12:37:13.139324 master-0 kubenswrapper[31830]: I0319 12:37:13.138985 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e4ccffc-3539-4e3a-b507-3fa51250d5a6" containerName="glance-httpd"
Mar 19 12:37:13.139324 master-0 kubenswrapper[31830]: I0319 12:37:13.139001 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a04e9d8-ceee-449a-9e77-ebbe7b230aa7" containerName="mariadb-database-create"
Mar 19 12:37:13.139324 master-0 kubenswrapper[31830]: I0319 12:37:13.139018 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="16baa845-ba2a-42b4-b0a8-3745b32f4f3e" containerName="mariadb-database-create"
Mar 19 12:37:13.139324 master-0 kubenswrapper[31830]: I0319 12:37:13.139029 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="baac9526-2845-4c1c-8a75-3ed2dcc2f3fb" containerName="mariadb-database-create"
Mar 19 12:37:13.139324 master-0 kubenswrapper[31830]: I0319 12:37:13.139218 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f63713e2-7d18-4053-b79c-86ab7b8e1e57" containerName="neutron-api"
Mar 19 12:37:13.139324 master-0 kubenswrapper[31830]: I0319 12:37:13.139229 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e4ccffc-3539-4e3a-b507-3fa51250d5a6" containerName="glance-log"
Mar 19 12:37:13.139324 master-0 kubenswrapper[31830]: I0319 12:37:13.139244 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="617862b3-8acc-478a-a829-74116f0d4a3d" containerName="mariadb-account-create-update"
Mar 19 12:37:13.139324 master-0 kubenswrapper[31830]: I0319 12:37:13.139255 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f63713e2-7d18-4053-b79c-86ab7b8e1e57" containerName="neutron-httpd"
Mar 19 12:37:13.142284 master-0 kubenswrapper[31830]: I0319 12:37:13.140359 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f4e38-default-internal-api-0"
Mar 19 12:37:13.151341 master-0 kubenswrapper[31830]: I0319 12:37:13.151293 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Mar 19 12:37:13.151878 master-0 kubenswrapper[31830]: I0319 12:37:13.151316 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-f4e38-default-internal-config-data"
Mar 19 12:37:13.471944 master-0 kubenswrapper[31830]: I0319 12:37:13.468640 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f4e38-default-internal-api-0"]
Mar 19 12:37:14.094541 master-0 kubenswrapper[31830]: I0319 12:37:14.094459 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d05de021-992c-4c11-bea3-1fea7fade5e5-internal-tls-certs\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"d05de021-992c-4c11-bea3-1fea7fade5e5\") " pod="openstack/glance-f4e38-default-internal-api-0"
Mar 19 12:37:14.094541 master-0 kubenswrapper[31830]: I0319 12:37:14.094547 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5csvp\" (UniqueName: \"kubernetes.io/projected/d05de021-992c-4c11-bea3-1fea7fade5e5-kube-api-access-5csvp\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"d05de021-992c-4c11-bea3-1fea7fade5e5\") " pod="openstack/glance-f4e38-default-internal-api-0"
Mar 19 12:37:14.095088 master-0 kubenswrapper[31830]: I0319 12:37:14.095030 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d05de021-992c-4c11-bea3-1fea7fade5e5-logs\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"d05de021-992c-4c11-bea3-1fea7fade5e5\") " pod="openstack/glance-f4e38-default-internal-api-0"
Mar 19 12:37:14.095287 master-0 kubenswrapper[31830]: I0319 12:37:14.095210 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d05de021-992c-4c11-bea3-1fea7fade5e5-scripts\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"d05de021-992c-4c11-bea3-1fea7fade5e5\") " pod="openstack/glance-f4e38-default-internal-api-0"
Mar 19 12:37:14.095649 master-0 kubenswrapper[31830]: I0319 12:37:14.095537 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a65517da-f83f-4270-b394-d7175eb38204\" (UniqueName: \"kubernetes.io/csi/topolvm.io^19328693-1987-4217-9b35-24f3b480bfc3\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"d05de021-992c-4c11-bea3-1fea7fade5e5\") " pod="openstack/glance-f4e38-default-internal-api-0"
Mar 19 12:37:14.095649 master-0 kubenswrapper[31830]: I0319 12:37:14.095604 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d05de021-992c-4c11-bea3-1fea7fade5e5-config-data\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"d05de021-992c-4c11-bea3-1fea7fade5e5\") " pod="openstack/glance-f4e38-default-internal-api-0"
Mar 19 12:37:14.095778 master-0 kubenswrapper[31830]: I0319 12:37:14.095659 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d05de021-992c-4c11-bea3-1fea7fade5e5-httpd-run\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"d05de021-992c-4c11-bea3-1fea7fade5e5\") " pod="openstack/glance-f4e38-default-internal-api-0"
Mar 19 12:37:14.095778 master-0 kubenswrapper[31830]: I0319 12:37:14.095703 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d05de021-992c-4c11-bea3-1fea7fade5e5-combined-ca-bundle\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"d05de021-992c-4c11-bea3-1fea7fade5e5\") " pod="openstack/glance-f4e38-default-internal-api-0"
Mar 19 12:37:14.197493 master-0 kubenswrapper[31830]: I0319 12:37:14.197392 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d05de021-992c-4c11-bea3-1fea7fade5e5-internal-tls-certs\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"d05de021-992c-4c11-bea3-1fea7fade5e5\") " pod="openstack/glance-f4e38-default-internal-api-0"
Mar 19 12:37:14.197493 master-0 kubenswrapper[31830]: I0319 12:37:14.197449 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5csvp\" (UniqueName: \"kubernetes.io/projected/d05de021-992c-4c11-bea3-1fea7fade5e5-kube-api-access-5csvp\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"d05de021-992c-4c11-bea3-1fea7fade5e5\") " pod="openstack/glance-f4e38-default-internal-api-0"
Mar 19 12:37:14.198289 master-0 kubenswrapper[31830]: I0319 12:37:14.197781 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d05de021-992c-4c11-bea3-1fea7fade5e5-logs\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"d05de021-992c-4c11-bea3-1fea7fade5e5\") " pod="openstack/glance-f4e38-default-internal-api-0"
Mar 19 12:37:14.198289 master-0 kubenswrapper[31830]: I0319 12:37:14.197895 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d05de021-992c-4c11-bea3-1fea7fade5e5-scripts\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"d05de021-992c-4c11-bea3-1fea7fade5e5\") " pod="openstack/glance-f4e38-default-internal-api-0"
Mar 19 12:37:14.198289 master-0 kubenswrapper[31830]: I0319 12:37:14.198278 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d05de021-992c-4c11-bea3-1fea7fade5e5-logs\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"d05de021-992c-4c11-bea3-1fea7fade5e5\") " pod="openstack/glance-f4e38-default-internal-api-0"
Mar 19 12:37:14.198440 master-0 kubenswrapper[31830]: I0319 12:37:14.198365 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d05de021-992c-4c11-bea3-1fea7fade5e5-config-data\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"d05de021-992c-4c11-bea3-1fea7fade5e5\") " pod="openstack/glance-f4e38-default-internal-api-0"
Mar 19 12:37:14.198440 master-0 kubenswrapper[31830]: I0319 12:37:14.198417 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d05de021-992c-4c11-bea3-1fea7fade5e5-httpd-run\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"d05de021-992c-4c11-bea3-1fea7fade5e5\") " pod="openstack/glance-f4e38-default-internal-api-0"
Mar 19 12:37:14.198556 master-0 kubenswrapper[31830]: I0319 12:37:14.198463 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d05de021-992c-4c11-bea3-1fea7fade5e5-combined-ca-bundle\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"d05de021-992c-4c11-bea3-1fea7fade5e5\") " pod="openstack/glance-f4e38-default-internal-api-0"
Mar 19 12:37:14.198954 master-0 kubenswrapper[31830]: I0319 12:37:14.198916 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d05de021-992c-4c11-bea3-1fea7fade5e5-httpd-run\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"d05de021-992c-4c11-bea3-1fea7fade5e5\") " pod="openstack/glance-f4e38-default-internal-api-0"
Mar 19 12:37:14.201698 master-0 kubenswrapper[31830]: I0319 12:37:14.201634 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d05de021-992c-4c11-bea3-1fea7fade5e5-internal-tls-certs\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"d05de021-992c-4c11-bea3-1fea7fade5e5\") " pod="openstack/glance-f4e38-default-internal-api-0"
Mar 19 12:37:14.203159 master-0 kubenswrapper[31830]: I0319 12:37:14.203079 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d05de021-992c-4c11-bea3-1fea7fade5e5-combined-ca-bundle\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"d05de021-992c-4c11-bea3-1fea7fade5e5\") " pod="openstack/glance-f4e38-default-internal-api-0"
Mar 19 12:37:14.203354 master-0 kubenswrapper[31830]: I0319 12:37:14.203093 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d05de021-992c-4c11-bea3-1fea7fade5e5-config-data\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"d05de021-992c-4c11-bea3-1fea7fade5e5\") " pod="openstack/glance-f4e38-default-internal-api-0"
Mar 19 12:37:14.204423 master-0 kubenswrapper[31830]: I0319 12:37:14.204370 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d05de021-992c-4c11-bea3-1fea7fade5e5-scripts\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"d05de021-992c-4c11-bea3-1fea7fade5e5\") " pod="openstack/glance-f4e38-default-internal-api-0"
Mar 19 12:37:15.015168 master-0 kubenswrapper[31830]: I0319 12:37:15.015110 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a65517da-f83f-4270-b394-d7175eb38204\" (UniqueName: \"kubernetes.io/csi/topolvm.io^19328693-1987-4217-9b35-24f3b480bfc3\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"d05de021-992c-4c11-bea3-1fea7fade5e5\") " pod="openstack/glance-f4e38-default-internal-api-0"
Mar 19 12:37:15.023923 master-0 kubenswrapper[31830]: I0319 12:37:15.023734 31830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 19 12:37:15.023923 master-0 kubenswrapper[31830]: I0319 12:37:15.023784 31830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a65517da-f83f-4270-b394-d7175eb38204\" (UniqueName: \"kubernetes.io/csi/topolvm.io^19328693-1987-4217-9b35-24f3b480bfc3\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"d05de021-992c-4c11-bea3-1fea7fade5e5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/a297acd8689bd9435b3ef7c4521a212d0a62d14f63738b5b80f182076c3660ff/globalmount\"" pod="openstack/glance-f4e38-default-internal-api-0"
Mar 19 12:37:15.967652 master-0 kubenswrapper[31830]: I0319 12:37:15.967568 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5csvp\" (UniqueName: \"kubernetes.io/projected/d05de021-992c-4c11-bea3-1fea7fade5e5-kube-api-access-5csvp\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"d05de021-992c-4c11-bea3-1fea7fade5e5\") " pod="openstack/glance-f4e38-default-internal-api-0"
Mar 19 12:37:16.079668 master-0 kubenswrapper[31830]: I0319 12:37:16.079551 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f4e38-default-external-api-0"
Mar 19 12:37:16.195272 master-0 kubenswrapper[31830]: I0319 12:37:16.195119 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2df405a8-816c-4e6f-a3a1-fb4e350d0188-combined-ca-bundle\") pod \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\" (UID: \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\") "
Mar 19 12:37:16.195525 master-0 kubenswrapper[31830]: I0319 12:37:16.195475 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5crrx\" (UniqueName: \"kubernetes.io/projected/2df405a8-816c-4e6f-a3a1-fb4e350d0188-kube-api-access-5crrx\") pod \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\" (UID: \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\") "
Mar 19 12:37:16.195579 master-0 kubenswrapper[31830]: I0319 12:37:16.195568 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2df405a8-816c-4e6f-a3a1-fb4e350d0188-scripts\") pod \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\" (UID: \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\") "
Mar 19 12:37:16.195656 master-0 kubenswrapper[31830]: I0319 12:37:16.195624 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2df405a8-816c-4e6f-a3a1-fb4e350d0188-public-tls-certs\") pod \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\" (UID: \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\") "
Mar 19 12:37:16.195713 master-0 kubenswrapper[31830]: I0319 12:37:16.195691 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2df405a8-816c-4e6f-a3a1-fb4e350d0188-logs\") pod \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\" (UID: \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\") "
Mar 19 12:37:16.195764 master-0 kubenswrapper[31830]: I0319 12:37:16.195726 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2df405a8-816c-4e6f-a3a1-fb4e350d0188-httpd-run\") pod \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\" (UID: \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\") "
Mar 19 12:37:16.195847 master-0 kubenswrapper[31830]: I0319 12:37:16.195761 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2df405a8-816c-4e6f-a3a1-fb4e350d0188-config-data\") pod \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\" (UID: \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\") "
Mar 19 12:37:16.196831 master-0 kubenswrapper[31830]: I0319 12:37:16.196769 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2df405a8-816c-4e6f-a3a1-fb4e350d0188-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2df405a8-816c-4e6f-a3a1-fb4e350d0188" (UID: "2df405a8-816c-4e6f-a3a1-fb4e350d0188"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 19 12:37:16.197076 master-0 kubenswrapper[31830]: I0319 12:37:16.197019 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2df405a8-816c-4e6f-a3a1-fb4e350d0188-logs" (OuterVolumeSpecName: "logs") pod "2df405a8-816c-4e6f-a3a1-fb4e350d0188" (UID: "2df405a8-816c-4e6f-a3a1-fb4e350d0188"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 19 12:37:16.199085 master-0 kubenswrapper[31830]: I0319 12:37:16.199028 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2df405a8-816c-4e6f-a3a1-fb4e350d0188-scripts" (OuterVolumeSpecName: "scripts") pod "2df405a8-816c-4e6f-a3a1-fb4e350d0188" (UID: "2df405a8-816c-4e6f-a3a1-fb4e350d0188"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 12:37:16.199224 master-0 kubenswrapper[31830]: I0319 12:37:16.199187 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2df405a8-816c-4e6f-a3a1-fb4e350d0188-kube-api-access-5crrx" (OuterVolumeSpecName: "kube-api-access-5crrx") pod "2df405a8-816c-4e6f-a3a1-fb4e350d0188" (UID: "2df405a8-816c-4e6f-a3a1-fb4e350d0188"). InnerVolumeSpecName "kube-api-access-5crrx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 12:37:16.221584 master-0 kubenswrapper[31830]: I0319 12:37:16.221485 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2df405a8-816c-4e6f-a3a1-fb4e350d0188-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2df405a8-816c-4e6f-a3a1-fb4e350d0188" (UID: "2df405a8-816c-4e6f-a3a1-fb4e350d0188"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 12:37:16.231067 master-0 kubenswrapper[31830]: I0319 12:37:16.230979 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^72f5d1c4-ff8c-497a-97da-4c12e82dcd33\") pod \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\" (UID: \"2df405a8-816c-4e6f-a3a1-fb4e350d0188\") "
Mar 19 12:37:16.232195 master-0 kubenswrapper[31830]: I0319 12:37:16.232146 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5crrx\" (UniqueName: \"kubernetes.io/projected/2df405a8-816c-4e6f-a3a1-fb4e350d0188-kube-api-access-5crrx\") on node \"master-0\" DevicePath \"\""
Mar 19 12:37:16.232195 master-0 kubenswrapper[31830]: I0319 12:37:16.232174 31830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2df405a8-816c-4e6f-a3a1-fb4e350d0188-scripts\") on node \"master-0\" DevicePath \"\""
Mar 19 12:37:16.232195 master-0 kubenswrapper[31830]: I0319 12:37:16.232186 31830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2df405a8-816c-4e6f-a3a1-fb4e350d0188-logs\") on node \"master-0\" DevicePath \"\""
Mar 19 12:37:16.232195 master-0 kubenswrapper[31830]: I0319 12:37:16.232201 31830 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2df405a8-816c-4e6f-a3a1-fb4e350d0188-httpd-run\") on node \"master-0\" DevicePath \"\""
Mar 19 12:37:16.232626 master-0 kubenswrapper[31830]: I0319 12:37:16.232212 31830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2df405a8-816c-4e6f-a3a1-fb4e350d0188-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 19 12:37:16.251836 master-0 kubenswrapper[31830]: I0319 12:37:16.251736 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2df405a8-816c-4e6f-a3a1-fb4e350d0188-config-data" (OuterVolumeSpecName: "config-data") pod "2df405a8-816c-4e6f-a3a1-fb4e350d0188" (UID: "2df405a8-816c-4e6f-a3a1-fb4e350d0188"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 12:37:16.259990 master-0 kubenswrapper[31830]: I0319 12:37:16.259903 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2df405a8-816c-4e6f-a3a1-fb4e350d0188-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2df405a8-816c-4e6f-a3a1-fb4e350d0188" (UID: "2df405a8-816c-4e6f-a3a1-fb4e350d0188"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 12:37:16.277016 master-0 kubenswrapper[31830]: I0319 12:37:16.276957 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^72f5d1c4-ff8c-497a-97da-4c12e82dcd33" (OuterVolumeSpecName: "glance") pod "2df405a8-816c-4e6f-a3a1-fb4e350d0188" (UID: "2df405a8-816c-4e6f-a3a1-fb4e350d0188"). InnerVolumeSpecName "pvc-38dd62d2-8408-4ac5-a7b7-e2aa55a152b1". PluginName "kubernetes.io/csi", VolumeGidValue ""
Mar 19 12:37:16.288052 master-0 kubenswrapper[31830]: I0319 12:37:16.287993 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a65517da-f83f-4270-b394-d7175eb38204\" (UniqueName: \"kubernetes.io/csi/topolvm.io^19328693-1987-4217-9b35-24f3b480bfc3\") pod \"glance-f4e38-default-internal-api-0\" (UID: \"d05de021-992c-4c11-bea3-1fea7fade5e5\") " pod="openstack/glance-f4e38-default-internal-api-0"
Mar 19 12:37:16.334892 master-0 kubenswrapper[31830]: I0319 12:37:16.334823 31830 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2df405a8-816c-4e6f-a3a1-fb4e350d0188-public-tls-certs\") on node \"master-0\" DevicePath \"\""
Mar 19 12:37:16.334892 master-0 kubenswrapper[31830]: I0319 12:37:16.334885 31830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2df405a8-816c-4e6f-a3a1-fb4e350d0188-config-data\") on node \"master-0\" DevicePath \"\""
Mar 19 12:37:16.335189 master-0 kubenswrapper[31830]: I0319 12:37:16.334930 31830 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-38dd62d2-8408-4ac5-a7b7-e2aa55a152b1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^72f5d1c4-ff8c-497a-97da-4c12e82dcd33\") on node \"master-0\" "
Mar 19 12:37:16.351753 master-0 kubenswrapper[31830]: I0319 12:37:16.351698 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f4e38-default-external-api-0" event={"ID":"2df405a8-816c-4e6f-a3a1-fb4e350d0188","Type":"ContainerDied","Data":"8e3d70cd1c3ac357bb3c4d53a15aed9178705458e5bc95d03d836f9960bb897a"}
Mar 19 12:37:16.351983 master-0 kubenswrapper[31830]: I0319 12:37:16.351766 31830 scope.go:117] "RemoveContainer" containerID="882becff49bfa95bb73cd4b31ade5255291e0c70a9eb13fc2dda996634ebf04f"
Mar 19 12:37:16.351983 master-0 kubenswrapper[31830]: I0319 12:37:16.351770 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f4e38-default-external-api-0"
Mar 19 12:37:16.385408 master-0 kubenswrapper[31830]: I0319 12:37:16.385372 31830 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Mar 19 12:37:16.385590 master-0 kubenswrapper[31830]: I0319 12:37:16.385509 31830 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-38dd62d2-8408-4ac5-a7b7-e2aa55a152b1" (UniqueName: "kubernetes.io/csi/topolvm.io^72f5d1c4-ff8c-497a-97da-4c12e82dcd33") on node "master-0"
Mar 19 12:37:16.413916 master-0 kubenswrapper[31830]: I0319 12:37:16.412099 31830 scope.go:117] "RemoveContainer" containerID="540715b44bf4f1fb44c15bd059ddd42ec745d87db740c1bcad0d1b93567610e4"
Mar 19 12:37:16.439000 master-0 kubenswrapper[31830]: I0319 12:37:16.438317 31830 reconciler_common.go:293] "Volume detached for volume \"pvc-38dd62d2-8408-4ac5-a7b7-e2aa55a152b1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^72f5d1c4-ff8c-497a-97da-4c12e82dcd33\") on node \"master-0\" DevicePath \"\""
Mar 19 12:37:16.462884 master-0 kubenswrapper[31830]: I0319 12:37:16.460290 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f4e38-default-internal-api-0"
Mar 19 12:37:17.039627 master-0 kubenswrapper[31830]: I0319 12:37:17.039503 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7d99c66444-6vrxg"
Mar 19 12:37:17.043251 master-0 kubenswrapper[31830]: I0319 12:37:17.043204 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7d99c66444-6vrxg"
Mar 19 12:37:21.017244 master-0 kubenswrapper[31830]: I0319 12:37:21.017073 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-f4e38-default-external-api-0"]
Mar 19 12:37:21.436088 master-0 kubenswrapper[31830]: I0319 12:37:21.436016 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-f4e38-default-external-api-0"]
Mar 19 12:37:21.448467 master-0 kubenswrapper[31830]: I0319 12:37:21.448406 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ntd2j"]
Mar 19 12:37:21.449041 master-0 kubenswrapper[31830]: E0319 12:37:21.448910 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2df405a8-816c-4e6f-a3a1-fb4e350d0188" containerName="glance-log"
Mar 19 12:37:21.449041 master-0 kubenswrapper[31830]: I0319 12:37:21.448925 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2df405a8-816c-4e6f-a3a1-fb4e350d0188" containerName="glance-log"
Mar 19 12:37:21.449041 master-0 kubenswrapper[31830]: E0319 12:37:21.448967 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2df405a8-816c-4e6f-a3a1-fb4e350d0188" containerName="glance-httpd"
Mar 19 12:37:21.449041 master-0 kubenswrapper[31830]: I0319 12:37:21.448973 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2df405a8-816c-4e6f-a3a1-fb4e350d0188" containerName="glance-httpd"
Mar 19 12:37:21.449424 master-0 kubenswrapper[31830]: I0319 12:37:21.449188 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="2df405a8-816c-4e6f-a3a1-fb4e350d0188" containerName="glance-httpd"
Mar 19 12:37:21.449424 master-0 kubenswrapper[31830]: I0319 12:37:21.449215 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="2df405a8-816c-4e6f-a3a1-fb4e350d0188" containerName="glance-log"
Mar 19 12:37:21.450031 master-0 kubenswrapper[31830]: I0319 12:37:21.450010 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ntd2j"
Mar 19 12:37:21.452949 master-0 kubenswrapper[31830]: I0319 12:37:21.452922 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Mar 19 12:37:21.454057 master-0 kubenswrapper[31830]: I0319 12:37:21.453085 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Mar 19 12:37:21.572572 master-0 kubenswrapper[31830]: I0319 12:37:21.572458 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644035f0-0f52-4762-a1d3-1d4ce8745615-config-data\") pod \"nova-cell0-conductor-db-sync-ntd2j\" (UID: \"644035f0-0f52-4762-a1d3-1d4ce8745615\") " pod="openstack/nova-cell0-conductor-db-sync-ntd2j"
Mar 19 12:37:21.572572 master-0 kubenswrapper[31830]: I0319 12:37:21.572599 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/644035f0-0f52-4762-a1d3-1d4ce8745615-scripts\") pod \"nova-cell0-conductor-db-sync-ntd2j\" (UID: \"644035f0-0f52-4762-a1d3-1d4ce8745615\") " pod="openstack/nova-cell0-conductor-db-sync-ntd2j"
Mar 19 12:37:21.573191 master-0 kubenswrapper[31830]: I0319 12:37:21.572655 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjf6q\" (UniqueName: \"kubernetes.io/projected/644035f0-0f52-4762-a1d3-1d4ce8745615-kube-api-access-cjf6q\") pod \"nova-cell0-conductor-db-sync-ntd2j\" (UID: \"644035f0-0f52-4762-a1d3-1d4ce8745615\") " pod="openstack/nova-cell0-conductor-db-sync-ntd2j"
Mar 19 12:37:21.573191 master-0 kubenswrapper[31830]: I0319 12:37:21.572760 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644035f0-0f52-4762-a1d3-1d4ce8745615-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ntd2j\" (UID: \"644035f0-0f52-4762-a1d3-1d4ce8745615\") " pod="openstack/nova-cell0-conductor-db-sync-ntd2j"
Mar 19 12:37:21.674328 master-0 kubenswrapper[31830]: I0319 12:37:21.674233 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644035f0-0f52-4762-a1d3-1d4ce8745615-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ntd2j\" (UID: \"644035f0-0f52-4762-a1d3-1d4ce8745615\") " pod="openstack/nova-cell0-conductor-db-sync-ntd2j"
Mar 19 12:37:21.674328 master-0 kubenswrapper[31830]: I0319 12:37:21.674304 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644035f0-0f52-4762-a1d3-1d4ce8745615-config-data\") pod \"nova-cell0-conductor-db-sync-ntd2j\" (UID: \"644035f0-0f52-4762-a1d3-1d4ce8745615\") " pod="openstack/nova-cell0-conductor-db-sync-ntd2j"
Mar 19 12:37:21.675006 master-0 kubenswrapper[31830]: I0319 12:37:21.674372 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/644035f0-0f52-4762-a1d3-1d4ce8745615-scripts\") pod \"nova-cell0-conductor-db-sync-ntd2j\" (UID: \"644035f0-0f52-4762-a1d3-1d4ce8745615\") " pod="openstack/nova-cell0-conductor-db-sync-ntd2j"
Mar 19 12:37:21.675006 master-0 kubenswrapper[31830]: I0319 12:37:21.674432 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjf6q\" (UniqueName: \"kubernetes.io/projected/644035f0-0f52-4762-a1d3-1d4ce8745615-kube-api-access-cjf6q\") pod \"nova-cell0-conductor-db-sync-ntd2j\" (UID: \"644035f0-0f52-4762-a1d3-1d4ce8745615\") " pod="openstack/nova-cell0-conductor-db-sync-ntd2j"
Mar 19 12:37:21.681820 master-0 kubenswrapper[31830]: I0319 12:37:21.681765 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Mar 19 12:37:21.681971 master-0 kubenswrapper[31830]: I0319 12:37:21.681942 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Mar 19 12:37:21.704194 master-0 kubenswrapper[31830]: I0319 12:37:21.700539 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/644035f0-0f52-4762-a1d3-1d4ce8745615-scripts\") pod \"nova-cell0-conductor-db-sync-ntd2j\" (UID: \"644035f0-0f52-4762-a1d3-1d4ce8745615\") " pod="openstack/nova-cell0-conductor-db-sync-ntd2j"
Mar 19 12:37:21.704194 master-0 kubenswrapper[31830]: I0319 12:37:21.703745 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644035f0-0f52-4762-a1d3-1d4ce8745615-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ntd2j\" (UID: \"644035f0-0f52-4762-a1d3-1d4ce8745615\") " pod="openstack/nova-cell0-conductor-db-sync-ntd2j"
Mar 19 12:37:21.710451 master-0 kubenswrapper[31830]: I0319 12:37:21.710305 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644035f0-0f52-4762-a1d3-1d4ce8745615-config-data\") pod \"nova-cell0-conductor-db-sync-ntd2j\" (UID: \"644035f0-0f52-4762-a1d3-1d4ce8745615\") " pod="openstack/nova-cell0-conductor-db-sync-ntd2j"
Mar 19 12:37:21.723483 master-0 kubenswrapper[31830]: I0319 12:37:21.723387 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2df405a8-816c-4e6f-a3a1-fb4e350d0188" path="/var/lib/kubelet/pods/2df405a8-816c-4e6f-a3a1-fb4e350d0188/volumes"
Mar 19 12:37:21.724773 master-0 kubenswrapper[31830]: I0319 12:37:21.724728 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ntd2j"]
Mar 19 12:37:22.023786 master-0 kubenswrapper[31830]: I0319 12:37:22.023711 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-f4e38-default-external-api-0"]
Mar 19 12:37:22.027058 master-0 kubenswrapper[31830]: I0319 12:37:22.027007 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f4e38-default-external-api-0"
Mar 19 12:37:22.030282 master-0 kubenswrapper[31830]: I0319 12:37:22.030227 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-f4e38-default-external-config-data"
Mar 19 12:37:22.030473 master-0 kubenswrapper[31830]: I0319 12:37:22.030451 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Mar 19 12:37:22.404823 master-0 kubenswrapper[31830]: I0319 12:37:22.404719 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjf6q\" (UniqueName: \"kubernetes.io/projected/644035f0-0f52-4762-a1d3-1d4ce8745615-kube-api-access-cjf6q\") pod \"nova-cell0-conductor-db-sync-ntd2j\" (UID: \"644035f0-0f52-4762-a1d3-1d4ce8745615\") " pod="openstack/nova-cell0-conductor-db-sync-ntd2j"
Mar 19 12:37:22.405096 master-0 kubenswrapper[31830]: I0319 12:37:22.404746 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f4e38-default-external-api-0"]
Mar 19 12:37:22.672030 master-0 kubenswrapper[31830]: I0319 12:37:22.671880 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ntd2j"
Mar 19 12:37:23.129238 master-0 kubenswrapper[31830]: I0319 12:37:23.129178 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6a28bf0-c9db-427e-9f5e-dd58ee654662-combined-ca-bundle\") pod \"glance-f4e38-default-external-api-0\" (UID: \"a6a28bf0-c9db-427e-9f5e-dd58ee654662\") " pod="openstack/glance-f4e38-default-external-api-0"
Mar 19 12:37:23.134757 master-0 kubenswrapper[31830]: I0319 12:37:23.134469 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6a28bf0-c9db-427e-9f5e-dd58ee654662-scripts\") pod \"glance-f4e38-default-external-api-0\" (UID: \"a6a28bf0-c9db-427e-9f5e-dd58ee654662\") " pod="openstack/glance-f4e38-default-external-api-0"
Mar 19 12:37:23.134757 master-0 kubenswrapper[31830]: I0319 12:37:23.134549 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-38dd62d2-8408-4ac5-a7b7-e2aa55a152b1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^72f5d1c4-ff8c-497a-97da-4c12e82dcd33\") pod \"glance-f4e38-default-external-api-0\" (UID: \"a6a28bf0-c9db-427e-9f5e-dd58ee654662\") " pod="openstack/glance-f4e38-default-external-api-0"
Mar 19 12:37:23.134757 master-0 kubenswrapper[31830]: I0319 12:37:23.134689 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6a28bf0-c9db-427e-9f5e-dd58ee654662-logs\") pod \"glance-f4e38-default-external-api-0\" (UID: \"a6a28bf0-c9db-427e-9f5e-dd58ee654662\") " pod="openstack/glance-f4e38-default-external-api-0"
Mar 19 12:37:23.135038 master-0 kubenswrapper[31830]: I0319 12:37:23.134971 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxz8s\" (UniqueName: \"kubernetes.io/projected/a6a28bf0-c9db-427e-9f5e-dd58ee654662-kube-api-access-qxz8s\") pod \"glance-f4e38-default-external-api-0\" (UID: \"a6a28bf0-c9db-427e-9f5e-dd58ee654662\") " pod="openstack/glance-f4e38-default-external-api-0"
Mar 19 12:37:23.135038 master-0 kubenswrapper[31830]: I0319 12:37:23.135016 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6a28bf0-c9db-427e-9f5e-dd58ee654662-config-data\") pod \"glance-f4e38-default-external-api-0\" (UID: \"a6a28bf0-c9db-427e-9f5e-dd58ee654662\") " pod="openstack/glance-f4e38-default-external-api-0"
Mar 19 12:37:23.137119 master-0 kubenswrapper[31830]: I0319 12:37:23.137087 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6a28bf0-c9db-427e-9f5e-dd58ee654662-public-tls-certs\") pod \"glance-f4e38-default-external-api-0\" (UID: \"a6a28bf0-c9db-427e-9f5e-dd58ee654662\") " pod="openstack/glance-f4e38-default-external-api-0"
Mar 19 12:37:23.137288 master-0 kubenswrapper[31830]: I0319 12:37:23.137264 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a6a28bf0-c9db-427e-9f5e-dd58ee654662-httpd-run\") pod \"glance-f4e38-default-external-api-0\" (UID: \"a6a28bf0-c9db-427e-9f5e-dd58ee654662\") " pod="openstack/glance-f4e38-default-external-api-0"
Mar 19 12:37:23.240516 master-0 kubenswrapper[31830]: I0319 12:37:23.240415 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6a28bf0-c9db-427e-9f5e-dd58ee654662-scripts\") pod \"glance-f4e38-default-external-api-0\" (UID: \"a6a28bf0-c9db-427e-9f5e-dd58ee654662\") " pod="openstack/glance-f4e38-default-external-api-0"
Mar 19 12:37:23.240787 master-0 kubenswrapper[31830]: I0319 12:37:23.240536 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6a28bf0-c9db-427e-9f5e-dd58ee654662-logs\") pod \"glance-f4e38-default-external-api-0\" (UID: \"a6a28bf0-c9db-427e-9f5e-dd58ee654662\") " pod="openstack/glance-f4e38-default-external-api-0"
Mar 19 12:37:23.240787 master-0 kubenswrapper[31830]: I0319 12:37:23.240628 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxz8s\" (UniqueName: \"kubernetes.io/projected/a6a28bf0-c9db-427e-9f5e-dd58ee654662-kube-api-access-qxz8s\") pod \"glance-f4e38-default-external-api-0\" (UID: \"a6a28bf0-c9db-427e-9f5e-dd58ee654662\") " pod="openstack/glance-f4e38-default-external-api-0"
Mar 19 12:37:23.240787 master-0 kubenswrapper[31830]: I0319 12:37:23.240661 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6a28bf0-c9db-427e-9f5e-dd58ee654662-config-data\") pod \"glance-f4e38-default-external-api-0\" (UID: \"a6a28bf0-c9db-427e-9f5e-dd58ee654662\") " pod="openstack/glance-f4e38-default-external-api-0"
Mar 19 12:37:23.240787 master-0 kubenswrapper[31830]: I0319 12:37:23.240740 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6a28bf0-c9db-427e-9f5e-dd58ee654662-public-tls-certs\") pod \"glance-f4e38-default-external-api-0\" (UID: \"a6a28bf0-c9db-427e-9f5e-dd58ee654662\") " pod="openstack/glance-f4e38-default-external-api-0"
Mar 19 12:37:23.240787 master-0 kubenswrapper[31830]: I0319 12:37:23.240772 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a6a28bf0-c9db-427e-9f5e-dd58ee654662-httpd-run\") pod \"glance-f4e38-default-external-api-0\" (UID: \"a6a28bf0-c9db-427e-9f5e-dd58ee654662\") " pod="openstack/glance-f4e38-default-external-api-0"
Mar 19 12:37:23.241072 master-0 kubenswrapper[31830]: I0319 12:37:23.240871 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6a28bf0-c9db-427e-9f5e-dd58ee654662-combined-ca-bundle\") pod \"glance-f4e38-default-external-api-0\" (UID: \"a6a28bf0-c9db-427e-9f5e-dd58ee654662\") " pod="openstack/glance-f4e38-default-external-api-0"
Mar 19 12:37:23.241126 master-0 kubenswrapper[31830]: I0319 12:37:23.241071 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6a28bf0-c9db-427e-9f5e-dd58ee654662-logs\") pod \"glance-f4e38-default-external-api-0\" (UID: \"a6a28bf0-c9db-427e-9f5e-dd58ee654662\") " pod="openstack/glance-f4e38-default-external-api-0"
Mar 19 12:37:23.241893 master-0 kubenswrapper[31830]: I0319 12:37:23.241865 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a6a28bf0-c9db-427e-9f5e-dd58ee654662-httpd-run\") pod \"glance-f4e38-default-external-api-0\" (UID: \"a6a28bf0-c9db-427e-9f5e-dd58ee654662\") " pod="openstack/glance-f4e38-default-external-api-0"
Mar 19 12:37:23.244586 master-0 kubenswrapper[31830]: I0319 12:37:23.244246 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6a28bf0-c9db-427e-9f5e-dd58ee654662-scripts\") pod \"glance-f4e38-default-external-api-0\" (UID: \"a6a28bf0-c9db-427e-9f5e-dd58ee654662\") " pod="openstack/glance-f4e38-default-external-api-0"
Mar 19 12:37:23.244586 master-0 kubenswrapper[31830]: I0319 12:37:23.244537 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6a28bf0-c9db-427e-9f5e-dd58ee654662-public-tls-certs\") pod \"glance-f4e38-default-external-api-0\" (UID: \"a6a28bf0-c9db-427e-9f5e-dd58ee654662\") " pod="openstack/glance-f4e38-default-external-api-0"
Mar 19 12:37:23.245514 master-0 kubenswrapper[31830]: I0319 12:37:23.245474 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6a28bf0-c9db-427e-9f5e-dd58ee654662-combined-ca-bundle\") pod \"glance-f4e38-default-external-api-0\" (UID: \"a6a28bf0-c9db-427e-9f5e-dd58ee654662\") " pod="openstack/glance-f4e38-default-external-api-0"
Mar 19 12:37:23.258278 master-0 kubenswrapper[31830]: I0319 12:37:23.258205 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6a28bf0-c9db-427e-9f5e-dd58ee654662-config-data\") pod \"glance-f4e38-default-external-api-0\" (UID: \"a6a28bf0-c9db-427e-9f5e-dd58ee654662\") " pod="openstack/glance-f4e38-default-external-api-0"
Mar 19 12:37:23.665497 master-0 kubenswrapper[31830]: I0319 12:37:23.665423 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5f878994d6-brrf9"]
Mar 19 12:37:23.665716 master-0 kubenswrapper[31830]: I0319 12:37:23.665689 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5f878994d6-brrf9" podUID="8ab4af90-2e9a-489c-b2bf-08579f4c3335" containerName="placement-log" containerID="cri-o://7622d7453d22bc08ff3e47847b95f0d99de8b75a0bc629a881f7e1e47fbc5127" gracePeriod=30
kubenswrapper[31830]: I0319 12:37:23.665760 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5f878994d6-brrf9" podUID="8ab4af90-2e9a-489c-b2bf-08579f4c3335" containerName="placement-api" containerID="cri-o://73e07780bcc782edd93533f0824cda16b30d7283ee7559e869062923116f5506" gracePeriod=30 Mar 19 12:37:23.762873 master-0 kubenswrapper[31830]: I0319 12:37:23.762782 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-38dd62d2-8408-4ac5-a7b7-e2aa55a152b1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^72f5d1c4-ff8c-497a-97da-4c12e82dcd33\") pod \"glance-f4e38-default-external-api-0\" (UID: \"a6a28bf0-c9db-427e-9f5e-dd58ee654662\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:37:23.764578 master-0 kubenswrapper[31830]: I0319 12:37:23.764544 31830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 19 12:37:23.764668 master-0 kubenswrapper[31830]: I0319 12:37:23.764576 31830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-38dd62d2-8408-4ac5-a7b7-e2aa55a152b1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^72f5d1c4-ff8c-497a-97da-4c12e82dcd33\") pod \"glance-f4e38-default-external-api-0\" (UID: \"a6a28bf0-c9db-427e-9f5e-dd58ee654662\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/e424bd2a44d69b7b9bbf34d8863c487c6938417f60f1d51f079a71c7d4c379eb/globalmount\"" pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:37:24.218924 master-0 kubenswrapper[31830]: I0319 12:37:24.215071 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxz8s\" (UniqueName: \"kubernetes.io/projected/a6a28bf0-c9db-427e-9f5e-dd58ee654662-kube-api-access-qxz8s\") pod \"glance-f4e38-default-external-api-0\" (UID: \"a6a28bf0-c9db-427e-9f5e-dd58ee654662\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:37:24.226776 master-0 kubenswrapper[31830]: I0319 12:37:24.226729 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ntd2j"] Mar 19 12:37:24.327916 master-0 kubenswrapper[31830]: I0319 12:37:24.326067 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f4e38-default-internal-api-0"] Mar 19 12:37:24.558969 master-0 kubenswrapper[31830]: I0319 12:37:24.557240 31830 generic.go:334] "Generic (PLEG): container finished" podID="8ab4af90-2e9a-489c-b2bf-08579f4c3335" containerID="7622d7453d22bc08ff3e47847b95f0d99de8b75a0bc629a881f7e1e47fbc5127" exitCode=143 Mar 19 12:37:24.558969 master-0 kubenswrapper[31830]: I0319 12:37:24.557361 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f878994d6-brrf9" event={"ID":"8ab4af90-2e9a-489c-b2bf-08579f4c3335","Type":"ContainerDied","Data":"7622d7453d22bc08ff3e47847b95f0d99de8b75a0bc629a881f7e1e47fbc5127"} Mar 19 12:37:24.562025 master-0 kubenswrapper[31830]: I0319 12:37:24.561963 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ntd2j" event={"ID":"644035f0-0f52-4762-a1d3-1d4ce8745615","Type":"ContainerStarted","Data":"1f68eaa8077228e5b52bee567622f9d1f58c3b4467b2f6c9cb7b2983f09f50c8"} Mar 19 12:37:24.564445 master-0 kubenswrapper[31830]: I0319 12:37:24.564397 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f4e38-default-internal-api-0" 
event={"ID":"d05de021-992c-4c11-bea3-1fea7fade5e5","Type":"ContainerStarted","Data":"6a28ef15eb8120fa379c42923f392fd84f8b5f17937e027078503c8d4528e84d"} Mar 19 12:37:24.676235 master-0 kubenswrapper[31830]: I0319 12:37:24.676101 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-38dd62d2-8408-4ac5-a7b7-e2aa55a152b1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^72f5d1c4-ff8c-497a-97da-4c12e82dcd33\") pod \"glance-f4e38-default-external-api-0\" (UID: \"a6a28bf0-c9db-427e-9f5e-dd58ee654662\") " pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:37:24.756895 master-0 kubenswrapper[31830]: I0319 12:37:24.756839 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:37:25.582742 master-0 kubenswrapper[31830]: I0319 12:37:25.582610 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f4e38-default-internal-api-0" event={"ID":"d05de021-992c-4c11-bea3-1fea7fade5e5","Type":"ContainerStarted","Data":"850a2a1cc316eac5129e788604c53cf5c3d078006ca8304824d8422ac007545d"} Mar 19 12:37:26.615315 master-0 kubenswrapper[31830]: I0319 12:37:26.615249 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f4e38-default-internal-api-0" event={"ID":"d05de021-992c-4c11-bea3-1fea7fade5e5","Type":"ContainerStarted","Data":"542c1a045d360120bcdf6165b6c22b227274b98687de9ba179f268e6f43ac52d"} Mar 19 12:37:27.196960 master-0 kubenswrapper[31830]: I0319 12:37:27.195107 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-f4e38-default-internal-api-0" podStartSLOduration=16.195084078 podStartE2EDuration="16.195084078s" podCreationTimestamp="2026-03-19 12:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:37:27.18484228 +0000 UTC m=+1385.733802994" watchObservedRunningTime="2026-03-19 12:37:27.195084078 +0000 UTC m=+1385.744044792" Mar 19 12:37:27.523920 master-0 kubenswrapper[31830]: I0319 12:37:27.511934 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f4e38-default-external-api-0"] Mar 19 12:37:27.632911 master-0 kubenswrapper[31830]: I0319 12:37:27.631635 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f4e38-default-external-api-0" event={"ID":"a6a28bf0-c9db-427e-9f5e-dd58ee654662","Type":"ContainerStarted","Data":"78ba2c7a93e544c32750cc61713cbfcd8940e2f7fd7e9c4fccf5b6dd461b6b3a"} Mar 19 12:37:27.634111 master-0 kubenswrapper[31830]: I0319 12:37:27.633823 31830 generic.go:334] "Generic (PLEG): container finished" podID="8ab4af90-2e9a-489c-b2bf-08579f4c3335" containerID="73e07780bcc782edd93533f0824cda16b30d7283ee7559e869062923116f5506" exitCode=0 Mar 19 12:37:27.635041 master-0 kubenswrapper[31830]: I0319 12:37:27.634931 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f878994d6-brrf9" event={"ID":"8ab4af90-2e9a-489c-b2bf-08579f4c3335","Type":"ContainerDied","Data":"73e07780bcc782edd93533f0824cda16b30d7283ee7559e869062923116f5506"} Mar 19 12:37:27.902215 master-0 kubenswrapper[31830]: I0319 12:37:27.902152 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5f878994d6-brrf9" Mar 19 12:37:27.980360 master-0 kubenswrapper[31830]: I0319 12:37:27.980300 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ab4af90-2e9a-489c-b2bf-08579f4c3335-internal-tls-certs\") pod \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\" (UID: \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\") " Mar 19 12:37:27.980601 master-0 kubenswrapper[31830]: I0319 12:37:27.980489 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab4af90-2e9a-489c-b2bf-08579f4c3335-combined-ca-bundle\") pod \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\" (UID: \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\") " Mar 19 12:37:27.980601 master-0 kubenswrapper[31830]: I0319 12:37:27.980548 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ab4af90-2e9a-489c-b2bf-08579f4c3335-logs\") pod \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\" (UID: \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\") " Mar 19 12:37:27.980601 master-0 kubenswrapper[31830]: I0319 12:37:27.980571 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ab4af90-2e9a-489c-b2bf-08579f4c3335-scripts\") pod \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\" (UID: \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\") " Mar 19 12:37:27.980601 master-0 kubenswrapper[31830]: I0319 12:37:27.980595 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwnj9\" (UniqueName: \"kubernetes.io/projected/8ab4af90-2e9a-489c-b2bf-08579f4c3335-kube-api-access-vwnj9\") pod \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\" (UID: \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\") " Mar 19 12:37:27.980758 master-0 kubenswrapper[31830]: I0319 12:37:27.980660 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ab4af90-2e9a-489c-b2bf-08579f4c3335-public-tls-certs\") pod \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\" (UID: \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\") " Mar 19 12:37:27.980758 master-0 kubenswrapper[31830]: I0319 12:37:27.980723 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ab4af90-2e9a-489c-b2bf-08579f4c3335-config-data\") pod \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\" (UID: \"8ab4af90-2e9a-489c-b2bf-08579f4c3335\") " Mar 19 12:37:27.981257 master-0 kubenswrapper[31830]: I0319 12:37:27.981219 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ab4af90-2e9a-489c-b2bf-08579f4c3335-logs" (OuterVolumeSpecName: "logs") pod "8ab4af90-2e9a-489c-b2bf-08579f4c3335" (UID: "8ab4af90-2e9a-489c-b2bf-08579f4c3335"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 12:37:27.988085 master-0 kubenswrapper[31830]: I0319 12:37:27.988026 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ab4af90-2e9a-489c-b2bf-08579f4c3335-kube-api-access-vwnj9" (OuterVolumeSpecName: "kube-api-access-vwnj9") pod "8ab4af90-2e9a-489c-b2bf-08579f4c3335" (UID: "8ab4af90-2e9a-489c-b2bf-08579f4c3335"). InnerVolumeSpecName "kube-api-access-vwnj9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:37:27.990040 master-0 kubenswrapper[31830]: I0319 12:37:27.989981 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ab4af90-2e9a-489c-b2bf-08579f4c3335-scripts" (OuterVolumeSpecName: "scripts") pod "8ab4af90-2e9a-489c-b2bf-08579f4c3335" (UID: "8ab4af90-2e9a-489c-b2bf-08579f4c3335"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:37:28.044360 master-0 kubenswrapper[31830]: I0319 12:37:28.044208 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ab4af90-2e9a-489c-b2bf-08579f4c3335-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ab4af90-2e9a-489c-b2bf-08579f4c3335" (UID: "8ab4af90-2e9a-489c-b2bf-08579f4c3335"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:37:28.056630 master-0 kubenswrapper[31830]: I0319 12:37:28.056515 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ab4af90-2e9a-489c-b2bf-08579f4c3335-config-data" (OuterVolumeSpecName: "config-data") pod "8ab4af90-2e9a-489c-b2bf-08579f4c3335" (UID: "8ab4af90-2e9a-489c-b2bf-08579f4c3335"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:37:28.089448 master-0 kubenswrapper[31830]: I0319 12:37:28.082883 31830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ab4af90-2e9a-489c-b2bf-08579f4c3335-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 12:37:28.089448 master-0 kubenswrapper[31830]: I0319 12:37:28.082922 31830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab4af90-2e9a-489c-b2bf-08579f4c3335-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:37:28.089448 master-0 kubenswrapper[31830]: I0319 12:37:28.082931 31830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ab4af90-2e9a-489c-b2bf-08579f4c3335-logs\") on node \"master-0\" DevicePath \"\"" Mar 19 12:37:28.089448 master-0 kubenswrapper[31830]: I0319 12:37:28.082943 31830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ab4af90-2e9a-489c-b2bf-08579f4c3335-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 12:37:28.089448 master-0 kubenswrapper[31830]: I0319 12:37:28.082954 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwnj9\" (UniqueName: \"kubernetes.io/projected/8ab4af90-2e9a-489c-b2bf-08579f4c3335-kube-api-access-vwnj9\") on node \"master-0\" DevicePath \"\"" Mar 19 12:37:28.092113 master-0 kubenswrapper[31830]: I0319 12:37:28.092056 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ab4af90-2e9a-489c-b2bf-08579f4c3335-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8ab4af90-2e9a-489c-b2bf-08579f4c3335" (UID: "8ab4af90-2e9a-489c-b2bf-08579f4c3335"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:37:28.104206 master-0 kubenswrapper[31830]: I0319 12:37:28.104100 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ab4af90-2e9a-489c-b2bf-08579f4c3335-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8ab4af90-2e9a-489c-b2bf-08579f4c3335" (UID: "8ab4af90-2e9a-489c-b2bf-08579f4c3335"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:37:28.185912 master-0 kubenswrapper[31830]: I0319 12:37:28.184904 31830 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ab4af90-2e9a-489c-b2bf-08579f4c3335-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 19 12:37:28.186147 master-0 kubenswrapper[31830]: I0319 12:37:28.186104 31830 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ab4af90-2e9a-489c-b2bf-08579f4c3335-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 19 12:37:28.655260 master-0 kubenswrapper[31830]: I0319 12:37:28.655105 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f4e38-default-external-api-0" event={"ID":"a6a28bf0-c9db-427e-9f5e-dd58ee654662","Type":"ContainerStarted","Data":"28f75ff560650a8e07ea386f80b151fa61232831a56fa9244fd963033d257d98"} Mar 19 12:37:28.657130 master-0 kubenswrapper[31830]: I0319 12:37:28.657094 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f878994d6-brrf9" event={"ID":"8ab4af90-2e9a-489c-b2bf-08579f4c3335","Type":"ContainerDied","Data":"e2f9b3db66ff5545df1ea69249bfbd71db3e4a7246c420207d4558b0ddea3b55"} Mar 19 12:37:28.657211 master-0 kubenswrapper[31830]: I0319 12:37:28.657179 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5f878994d6-brrf9" Mar 19 12:37:28.657327 master-0 kubenswrapper[31830]: I0319 12:37:28.657199 31830 scope.go:117] "RemoveContainer" containerID="73e07780bcc782edd93533f0824cda16b30d7283ee7559e869062923116f5506" Mar 19 12:37:28.720382 master-0 kubenswrapper[31830]: I0319 12:37:28.720308 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5f878994d6-brrf9"] Mar 19 12:37:28.737373 master-0 kubenswrapper[31830]: I0319 12:37:28.737308 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5f878994d6-brrf9"] Mar 19 12:37:29.697314 master-0 kubenswrapper[31830]: I0319 12:37:29.697253 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ab4af90-2e9a-489c-b2bf-08579f4c3335" path="/var/lib/kubelet/pods/8ab4af90-2e9a-489c-b2bf-08579f4c3335/volumes" Mar 19 12:37:33.840300 master-0 kubenswrapper[31830]: I0319 12:37:33.840229 31830 scope.go:117] "RemoveContainer" containerID="7622d7453d22bc08ff3e47847b95f0d99de8b75a0bc629a881f7e1e47fbc5127" Mar 19 12:37:34.767497 master-0 kubenswrapper[31830]: I0319 12:37:34.767439 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ntd2j" event={"ID":"644035f0-0f52-4762-a1d3-1d4ce8745615","Type":"ContainerStarted","Data":"a2c6f72f1e5bd45fdcd80e2b8f2624d82a6ae03875df1294a92a348d60246a6a"} Mar 19 12:37:34.770526 master-0 kubenswrapper[31830]: I0319 12:37:34.770452 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f4e38-default-external-api-0" event={"ID":"a6a28bf0-c9db-427e-9f5e-dd58ee654662","Type":"ContainerStarted","Data":"c82a606c3667ec553edf77d32e28afea5def5d441e343cdfcdaadb9eb5481aa3"} Mar 19 12:37:34.787915 master-0 kubenswrapper[31830]: I0319 12:37:34.787834 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-ntd2j" podStartSLOduration=5.095495615 podStartE2EDuration="14.787814889s" podCreationTimestamp="2026-03-19 12:37:20 +0000 UTC" firstStartedPulling="2026-03-19 12:37:24.227304489 +0000 UTC m=+1382.776265193" lastFinishedPulling="2026-03-19 12:37:33.919623763 +0000 UTC m=+1392.468584467" observedRunningTime="2026-03-19 12:37:34.782084981 +0000 UTC m=+1393.331045695" watchObservedRunningTime="2026-03-19 12:37:34.787814889 +0000 UTC m=+1393.336775593" Mar 19 12:37:34.812676 master-0 kubenswrapper[31830]: I0319 12:37:34.812584 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-f4e38-default-external-api-0" podStartSLOduration=13.812558226 podStartE2EDuration="13.812558226s" podCreationTimestamp="2026-03-19 12:37:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:37:34.801689129 +0000 UTC m=+1393.350649853" watchObservedRunningTime="2026-03-19 12:37:34.812558226 +0000 UTC m=+1393.361518930" Mar 19 12:37:36.462036 master-0 kubenswrapper[31830]: I0319 12:37:36.461968 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:37:36.462036 master-0 kubenswrapper[31830]: I0319 12:37:36.462044 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:37:36.493837 master-0 kubenswrapper[31830]: I0319 12:37:36.493698 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:37:36.512176 master-0 kubenswrapper[31830]: I0319 12:37:36.511881 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:37:36.791117 master-0 kubenswrapper[31830]: I0319 12:37:36.791035 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:37:36.791347 master-0 kubenswrapper[31830]: I0319 12:37:36.791220 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:37:40.255517 master-0 kubenswrapper[31830]: I0319 12:37:40.254621 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:37:40.255517 master-0 kubenswrapper[31830]: I0319 12:37:40.254742 31830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 12:37:40.255517 master-0 kubenswrapper[31830]: I0319 12:37:40.255481 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-f4e38-default-internal-api-0" Mar 19 12:37:44.758723 master-0 kubenswrapper[31830]: I0319 12:37:44.758308 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:37:44.758723 master-0 kubenswrapper[31830]: I0319 12:37:44.758730 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:37:44.791146 master-0 kubenswrapper[31830]: I0319 12:37:44.791011 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:37:44.805398 master-0 kubenswrapper[31830]: I0319 12:37:44.805326 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:37:44.912982 master-0 kubenswrapper[31830]: I0319 12:37:44.912881 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:37:44.912982 master-0 kubenswrapper[31830]: I0319 12:37:44.912953 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:37:46.944888 master-0 kubenswrapper[31830]: I0319 12:37:46.944823 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:37:46.945510 master-0 kubenswrapper[31830]: I0319 12:37:46.944933 31830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 12:37:46.977509 master-0 kubenswrapper[31830]: I0319 12:37:46.977439 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-f4e38-default-external-api-0" Mar 19 12:37:51.000605 master-0 kubenswrapper[31830]: I0319 12:37:51.000510 31830 generic.go:334] "Generic (PLEG): container finished" podID="644035f0-0f52-4762-a1d3-1d4ce8745615" containerID="a2c6f72f1e5bd45fdcd80e2b8f2624d82a6ae03875df1294a92a348d60246a6a" exitCode=0 Mar 19 12:37:51.000605 master-0 kubenswrapper[31830]: I0319 12:37:51.000578 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ntd2j" 
event={"ID":"644035f0-0f52-4762-a1d3-1d4ce8745615","Type":"ContainerDied","Data":"a2c6f72f1e5bd45fdcd80e2b8f2624d82a6ae03875df1294a92a348d60246a6a"} Mar 19 12:37:52.407631 master-0 kubenswrapper[31830]: I0319 12:37:52.407587 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ntd2j" Mar 19 12:37:53.026481 master-0 kubenswrapper[31830]: I0319 12:37:53.026410 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ntd2j" event={"ID":"644035f0-0f52-4762-a1d3-1d4ce8745615","Type":"ContainerDied","Data":"1f68eaa8077228e5b52bee567622f9d1f58c3b4467b2f6c9cb7b2983f09f50c8"} Mar 19 12:37:53.026481 master-0 kubenswrapper[31830]: I0319 12:37:53.026458 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f68eaa8077228e5b52bee567622f9d1f58c3b4467b2f6c9cb7b2983f09f50c8" Mar 19 12:37:53.026758 master-0 kubenswrapper[31830]: I0319 12:37:53.026503 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ntd2j" Mar 19 12:37:55.890502 master-0 kubenswrapper[31830]: I0319 12:37:55.890416 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644035f0-0f52-4762-a1d3-1d4ce8745615-combined-ca-bundle\") pod \"644035f0-0f52-4762-a1d3-1d4ce8745615\" (UID: \"644035f0-0f52-4762-a1d3-1d4ce8745615\") " Mar 19 12:37:55.891208 master-0 kubenswrapper[31830]: I0319 12:37:55.890729 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/644035f0-0f52-4762-a1d3-1d4ce8745615-scripts\") pod \"644035f0-0f52-4762-a1d3-1d4ce8745615\" (UID: \"644035f0-0f52-4762-a1d3-1d4ce8745615\") " Mar 19 12:37:55.891208 master-0 kubenswrapper[31830]: I0319 12:37:55.891011 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjf6q\" (UniqueName: \"kubernetes.io/projected/644035f0-0f52-4762-a1d3-1d4ce8745615-kube-api-access-cjf6q\") pod \"644035f0-0f52-4762-a1d3-1d4ce8745615\" (UID: \"644035f0-0f52-4762-a1d3-1d4ce8745615\") " Mar 19 12:37:55.891208 master-0 kubenswrapper[31830]: I0319 12:37:55.891075 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644035f0-0f52-4762-a1d3-1d4ce8745615-config-data\") pod \"644035f0-0f52-4762-a1d3-1d4ce8745615\" (UID: \"644035f0-0f52-4762-a1d3-1d4ce8745615\") " Mar 19 12:37:55.895848 master-0 kubenswrapper[31830]: I0319 12:37:55.895789 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/644035f0-0f52-4762-a1d3-1d4ce8745615-kube-api-access-cjf6q" (OuterVolumeSpecName: "kube-api-access-cjf6q") pod "644035f0-0f52-4762-a1d3-1d4ce8745615" (UID: "644035f0-0f52-4762-a1d3-1d4ce8745615"). InnerVolumeSpecName "kube-api-access-cjf6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:37:55.901336 master-0 kubenswrapper[31830]: I0319 12:37:55.901274 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/644035f0-0f52-4762-a1d3-1d4ce8745615-scripts" (OuterVolumeSpecName: "scripts") pod "644035f0-0f52-4762-a1d3-1d4ce8745615" (UID: "644035f0-0f52-4762-a1d3-1d4ce8745615"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:37:55.914966 master-0 kubenswrapper[31830]: I0319 12:37:55.914901 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/644035f0-0f52-4762-a1d3-1d4ce8745615-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "644035f0-0f52-4762-a1d3-1d4ce8745615" (UID: "644035f0-0f52-4762-a1d3-1d4ce8745615"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:37:55.926456 master-0 kubenswrapper[31830]: I0319 12:37:55.926386 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/644035f0-0f52-4762-a1d3-1d4ce8745615-config-data" (OuterVolumeSpecName: "config-data") pod "644035f0-0f52-4762-a1d3-1d4ce8745615" (UID: "644035f0-0f52-4762-a1d3-1d4ce8745615"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:37:55.997415 master-0 kubenswrapper[31830]: I0319 12:37:55.996834 31830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/644035f0-0f52-4762-a1d3-1d4ce8745615-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 12:37:55.997415 master-0 kubenswrapper[31830]: I0319 12:37:55.996871 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjf6q\" (UniqueName: \"kubernetes.io/projected/644035f0-0f52-4762-a1d3-1d4ce8745615-kube-api-access-cjf6q\") on node \"master-0\" DevicePath \"\"" Mar 19 12:37:55.997415 master-0 kubenswrapper[31830]: I0319 12:37:55.996886 31830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644035f0-0f52-4762-a1d3-1d4ce8745615-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 12:37:55.997415 master-0 kubenswrapper[31830]: I0319 12:37:55.996899 31830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644035f0-0f52-4762-a1d3-1d4ce8745615-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:37:56.989382 master-0 kubenswrapper[31830]: I0319 12:37:56.988596 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 19 12:37:56.990061 master-0 kubenswrapper[31830]: E0319 12:37:56.989568 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ab4af90-2e9a-489c-b2bf-08579f4c3335" containerName="placement-api" Mar 19 12:37:56.990061 master-0 kubenswrapper[31830]: I0319 12:37:56.989594 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ab4af90-2e9a-489c-b2bf-08579f4c3335" containerName="placement-api" Mar 19 12:37:56.990061 master-0 kubenswrapper[31830]: E0319 12:37:56.989666 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ab4af90-2e9a-489c-b2bf-08579f4c3335" containerName="placement-log" Mar 19 12:37:56.990061 master-0 kubenswrapper[31830]: I0319 12:37:56.989672 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ab4af90-2e9a-489c-b2bf-08579f4c3335" containerName="placement-log" Mar 19 12:37:56.990061 master-0 kubenswrapper[31830]: E0319 12:37:56.989688 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644035f0-0f52-4762-a1d3-1d4ce8745615" containerName="nova-cell0-conductor-db-sync" Mar 19 12:37:56.990061 master-0 kubenswrapper[31830]: I0319 12:37:56.989695 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="644035f0-0f52-4762-a1d3-1d4ce8745615" 
containerName="nova-cell0-conductor-db-sync" Mar 19 12:37:56.990061 master-0 kubenswrapper[31830]: I0319 12:37:56.989998 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="644035f0-0f52-4762-a1d3-1d4ce8745615" containerName="nova-cell0-conductor-db-sync" Mar 19 12:37:56.990061 master-0 kubenswrapper[31830]: I0319 12:37:56.990033 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ab4af90-2e9a-489c-b2bf-08579f4c3335" containerName="placement-api" Mar 19 12:37:56.990061 master-0 kubenswrapper[31830]: I0319 12:37:56.990062 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ab4af90-2e9a-489c-b2bf-08579f4c3335" containerName="placement-log" Mar 19 12:37:56.991169 master-0 kubenswrapper[31830]: I0319 12:37:56.991119 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 19 12:37:56.996016 master-0 kubenswrapper[31830]: I0319 12:37:56.995950 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 19 12:37:57.007029 master-0 kubenswrapper[31830]: I0319 12:37:57.006971 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 19 12:37:57.123604 master-0 kubenswrapper[31830]: I0319 12:37:57.123527 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50910557-81a5-4255-84eb-bd2ef2691a00-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"50910557-81a5-4255-84eb-bd2ef2691a00\") " pod="openstack/nova-cell0-conductor-0" Mar 19 12:37:57.123916 master-0 kubenswrapper[31830]: I0319 12:37:57.123695 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50910557-81a5-4255-84eb-bd2ef2691a00-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"50910557-81a5-4255-84eb-bd2ef2691a00\") " pod="openstack/nova-cell0-conductor-0" Mar 19 12:37:57.123916 master-0 kubenswrapper[31830]: I0319 12:37:57.123747 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w7j8\" (UniqueName: \"kubernetes.io/projected/50910557-81a5-4255-84eb-bd2ef2691a00-kube-api-access-4w7j8\") pod \"nova-cell0-conductor-0\" (UID: \"50910557-81a5-4255-84eb-bd2ef2691a00\") " pod="openstack/nova-cell0-conductor-0" Mar 19 12:37:57.227539 master-0 kubenswrapper[31830]: I0319 12:37:57.227353 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w7j8\" (UniqueName: \"kubernetes.io/projected/50910557-81a5-4255-84eb-bd2ef2691a00-kube-api-access-4w7j8\") pod \"nova-cell0-conductor-0\" (UID: \"50910557-81a5-4255-84eb-bd2ef2691a00\") " pod="openstack/nova-cell0-conductor-0" Mar 19 12:37:57.227539 master-0 kubenswrapper[31830]: I0319 12:37:57.227538 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50910557-81a5-4255-84eb-bd2ef2691a00-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"50910557-81a5-4255-84eb-bd2ef2691a00\") " pod="openstack/nova-cell0-conductor-0" Mar 19 12:37:57.228626 master-0 kubenswrapper[31830]: I0319 12:37:57.227609 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/50910557-81a5-4255-84eb-bd2ef2691a00-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"50910557-81a5-4255-84eb-bd2ef2691a00\") " pod="openstack/nova-cell0-conductor-0" Mar 19 12:37:57.238853 master-0 kubenswrapper[31830]: I0319 12:37:57.232228 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50910557-81a5-4255-84eb-bd2ef2691a00-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"50910557-81a5-4255-84eb-bd2ef2691a00\") " pod="openstack/nova-cell0-conductor-0" Mar 19 12:37:57.238853 master-0 kubenswrapper[31830]: I0319 12:37:57.232490 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50910557-81a5-4255-84eb-bd2ef2691a00-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"50910557-81a5-4255-84eb-bd2ef2691a00\") " pod="openstack/nova-cell0-conductor-0" Mar 19 12:37:57.251505 master-0 kubenswrapper[31830]: I0319 12:37:57.251395 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w7j8\" (UniqueName: \"kubernetes.io/projected/50910557-81a5-4255-84eb-bd2ef2691a00-kube-api-access-4w7j8\") pod \"nova-cell0-conductor-0\" (UID: \"50910557-81a5-4255-84eb-bd2ef2691a00\") " pod="openstack/nova-cell0-conductor-0" Mar 19 12:37:57.334702 master-0 kubenswrapper[31830]: I0319 12:37:57.334626 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 19 12:37:57.776483 master-0 kubenswrapper[31830]: I0319 12:37:57.776414 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 19 12:37:57.784637 master-0 kubenswrapper[31830]: W0319 12:37:57.784587 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50910557_81a5_4255_84eb_bd2ef2691a00.slice/crio-0625653aef694f8d4b79c950b808486f53012114ab3439ff70442e690084d4d0 WatchSource:0}: Error finding container 0625653aef694f8d4b79c950b808486f53012114ab3439ff70442e690084d4d0: Status 404 returned error can't find the container with id 0625653aef694f8d4b79c950b808486f53012114ab3439ff70442e690084d4d0 Mar 19 12:37:58.075393 master-0 kubenswrapper[31830]: I0319 12:37:58.075265 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"50910557-81a5-4255-84eb-bd2ef2691a00","Type":"ContainerStarted","Data":"2535dfae7b19e42df847886bb23a0a24d484cc3d3bb59fe4d6497ba95e5e4895"} Mar 19 12:37:58.075393 master-0 kubenswrapper[31830]: I0319 12:37:58.075314 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"50910557-81a5-4255-84eb-bd2ef2691a00","Type":"ContainerStarted","Data":"0625653aef694f8d4b79c950b808486f53012114ab3439ff70442e690084d4d0"} Mar 19 12:37:58.076091 master-0 kubenswrapper[31830]: I0319 12:37:58.075437 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Mar 19 12:37:58.101880 master-0 kubenswrapper[31830]: I0319 12:37:58.100791 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.100771156 podStartE2EDuration="2.100771156s" podCreationTimestamp="2026-03-19 12:37:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 
12:37:58.091324963 +0000 UTC m=+1416.640285677" watchObservedRunningTime="2026-03-19 12:37:58.100771156 +0000 UTC m=+1416.649731860" Mar 19 12:38:02.362506 master-0 kubenswrapper[31830]: I0319 12:38:02.362387 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Mar 19 12:38:02.856700 master-0 kubenswrapper[31830]: I0319 12:38:02.856637 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-mk6fp"] Mar 19 12:38:02.859482 master-0 kubenswrapper[31830]: I0319 12:38:02.859442 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mk6fp" Mar 19 12:38:02.864536 master-0 kubenswrapper[31830]: I0319 12:38:02.864480 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Mar 19 12:38:02.864762 master-0 kubenswrapper[31830]: I0319 12:38:02.864696 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Mar 19 12:38:02.885813 master-0 kubenswrapper[31830]: I0319 12:38:02.885180 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-mk6fp"] Mar 19 12:38:02.969829 master-0 kubenswrapper[31830]: I0319 12:38:02.964959 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-242mg\" (UniqueName: \"kubernetes.io/projected/f24450d6-f939-4621-8d88-e0ecc012ebb6-kube-api-access-242mg\") pod \"nova-cell0-cell-mapping-mk6fp\" (UID: \"f24450d6-f939-4621-8d88-e0ecc012ebb6\") " pod="openstack/nova-cell0-cell-mapping-mk6fp" Mar 19 12:38:02.969829 master-0 kubenswrapper[31830]: I0319 12:38:02.965120 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f24450d6-f939-4621-8d88-e0ecc012ebb6-config-data\") pod \"nova-cell0-cell-mapping-mk6fp\" (UID: \"f24450d6-f939-4621-8d88-e0ecc012ebb6\") " pod="openstack/nova-cell0-cell-mapping-mk6fp" Mar 19 12:38:02.969829 master-0 kubenswrapper[31830]: I0319 12:38:02.965174 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f24450d6-f939-4621-8d88-e0ecc012ebb6-scripts\") pod \"nova-cell0-cell-mapping-mk6fp\" (UID: \"f24450d6-f939-4621-8d88-e0ecc012ebb6\") " pod="openstack/nova-cell0-cell-mapping-mk6fp" Mar 19 12:38:02.969829 master-0 kubenswrapper[31830]: I0319 12:38:02.965205 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f24450d6-f939-4621-8d88-e0ecc012ebb6-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mk6fp\" (UID: \"f24450d6-f939-4621-8d88-e0ecc012ebb6\") " pod="openstack/nova-cell0-cell-mapping-mk6fp" Mar 19 12:38:03.069821 master-0 kubenswrapper[31830]: I0319 12:38:03.068256 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-242mg\" (UniqueName: \"kubernetes.io/projected/f24450d6-f939-4621-8d88-e0ecc012ebb6-kube-api-access-242mg\") pod \"nova-cell0-cell-mapping-mk6fp\" (UID: \"f24450d6-f939-4621-8d88-e0ecc012ebb6\") " pod="openstack/nova-cell0-cell-mapping-mk6fp" Mar 19 12:38:03.069821 master-0 kubenswrapper[31830]: I0319 12:38:03.068467 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f24450d6-f939-4621-8d88-e0ecc012ebb6-config-data\") pod \"nova-cell0-cell-mapping-mk6fp\" (UID: \"f24450d6-f939-4621-8d88-e0ecc012ebb6\") " pod="openstack/nova-cell0-cell-mapping-mk6fp" Mar 19 12:38:03.069821 master-0 kubenswrapper[31830]: I0319 12:38:03.068564 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f24450d6-f939-4621-8d88-e0ecc012ebb6-scripts\") pod \"nova-cell0-cell-mapping-mk6fp\" (UID: \"f24450d6-f939-4621-8d88-e0ecc012ebb6\") " pod="openstack/nova-cell0-cell-mapping-mk6fp" Mar 19 12:38:03.069821 master-0 kubenswrapper[31830]: I0319 12:38:03.068613 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f24450d6-f939-4621-8d88-e0ecc012ebb6-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mk6fp\" (UID: \"f24450d6-f939-4621-8d88-e0ecc012ebb6\") " pod="openstack/nova-cell0-cell-mapping-mk6fp" Mar 19 12:38:03.082877 master-0 kubenswrapper[31830]: I0319 12:38:03.079625 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f24450d6-f939-4621-8d88-e0ecc012ebb6-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mk6fp\" (UID: \"f24450d6-f939-4621-8d88-e0ecc012ebb6\") " pod="openstack/nova-cell0-cell-mapping-mk6fp" Mar 19 12:38:03.093375 master-0 kubenswrapper[31830]: I0319 12:38:03.093299 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f24450d6-f939-4621-8d88-e0ecc012ebb6-config-data\") pod \"nova-cell0-cell-mapping-mk6fp\" (UID: \"f24450d6-f939-4621-8d88-e0ecc012ebb6\") " pod="openstack/nova-cell0-cell-mapping-mk6fp" Mar 19 12:38:03.095177 master-0 kubenswrapper[31830]: I0319 12:38:03.095126 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f24450d6-f939-4621-8d88-e0ecc012ebb6-scripts\") pod \"nova-cell0-cell-mapping-mk6fp\" (UID: \"f24450d6-f939-4621-8d88-e0ecc012ebb6\") " pod="openstack/nova-cell0-cell-mapping-mk6fp" Mar 19 12:38:03.138697 master-0 kubenswrapper[31830]: I0319 12:38:03.125852 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-242mg\" (UniqueName: \"kubernetes.io/projected/f24450d6-f939-4621-8d88-e0ecc012ebb6-kube-api-access-242mg\") pod \"nova-cell0-cell-mapping-mk6fp\" (UID: \"f24450d6-f939-4621-8d88-e0ecc012ebb6\") " pod="openstack/nova-cell0-cell-mapping-mk6fp" Mar 19 12:38:03.222940 master-0 kubenswrapper[31830]: I0319 12:38:03.209542 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mk6fp" Mar 19 12:38:03.261936 master-0 kubenswrapper[31830]: I0319 12:38:03.261862 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 19 12:38:03.271398 master-0 kubenswrapper[31830]: I0319 12:38:03.268989 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 19 12:38:03.298787 master-0 kubenswrapper[31830]: I0319 12:38:03.291657 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 19 12:38:03.343379 master-0 kubenswrapper[31830]: I0319 12:38:03.343318 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39e14f09-1088-4009-a899-1b8da89f4f11-logs\") pod \"nova-metadata-0\" (UID: \"39e14f09-1088-4009-a899-1b8da89f4f11\") " pod="openstack/nova-metadata-0" Mar 19 12:38:03.343589 master-0 kubenswrapper[31830]: I0319 12:38:03.343402 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39e14f09-1088-4009-a899-1b8da89f4f11-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"39e14f09-1088-4009-a899-1b8da89f4f11\") " pod="openstack/nova-metadata-0" Mar 19 12:38:03.343589 master-0 kubenswrapper[31830]: I0319 12:38:03.343437 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39e14f09-1088-4009-a899-1b8da89f4f11-config-data\") pod \"nova-metadata-0\" (UID: \"39e14f09-1088-4009-a899-1b8da89f4f11\") " pod="openstack/nova-metadata-0" Mar 19 12:38:03.343589 master-0 kubenswrapper[31830]: I0319 12:38:03.343550 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxrbv\" (UniqueName: \"kubernetes.io/projected/39e14f09-1088-4009-a899-1b8da89f4f11-kube-api-access-wxrbv\") pod \"nova-metadata-0\" (UID: \"39e14f09-1088-4009-a899-1b8da89f4f11\") " pod="openstack/nova-metadata-0" Mar 19 12:38:03.383936 master-0 kubenswrapper[31830]: I0319 12:38:03.382488 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 19 12:38:03.385477 master-0 kubenswrapper[31830]: I0319 12:38:03.385441 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 19 12:38:03.440425 master-0 kubenswrapper[31830]: I0319 12:38:03.440383 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 19 12:38:03.560693 master-0 kubenswrapper[31830]: I0319 12:38:03.499095 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 19 12:38:03.560693 master-0 kubenswrapper[31830]: I0319 12:38:03.506111 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 12:38:03.560693 master-0 kubenswrapper[31830]: I0319 12:38:03.507467 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a704971-dde7-4ffa-a887-5e8067b964bd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4a704971-dde7-4ffa-a887-5e8067b964bd\") " pod="openstack/nova-api-0" Mar 19 12:38:03.560693 master-0 kubenswrapper[31830]: I0319 12:38:03.507538 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxrbv\" (UniqueName: \"kubernetes.io/projected/39e14f09-1088-4009-a899-1b8da89f4f11-kube-api-access-wxrbv\") pod \"nova-metadata-0\" (UID: \"39e14f09-1088-4009-a899-1b8da89f4f11\") " pod="openstack/nova-metadata-0" Mar 19 12:38:03.560693 master-0 kubenswrapper[31830]: I0319 12:38:03.507658 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a704971-dde7-4ffa-a887-5e8067b964bd-logs\") pod \"nova-api-0\" (UID: \"4a704971-dde7-4ffa-a887-5e8067b964bd\") " pod="openstack/nova-api-0" Mar 19 12:38:03.560693 master-0 kubenswrapper[31830]: I0319 12:38:03.507737 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39e14f09-1088-4009-a899-1b8da89f4f11-logs\") pod \"nova-metadata-0\" (UID: \"39e14f09-1088-4009-a899-1b8da89f4f11\") " pod="openstack/nova-metadata-0" Mar 19 12:38:03.560693 master-0 kubenswrapper[31830]: I0319 12:38:03.507778 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a704971-dde7-4ffa-a887-5e8067b964bd-config-data\") pod \"nova-api-0\" (UID: \"4a704971-dde7-4ffa-a887-5e8067b964bd\") " pod="openstack/nova-api-0" Mar 19 12:38:03.560693 master-0 kubenswrapper[31830]: I0319 12:38:03.507821 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39e14f09-1088-4009-a899-1b8da89f4f11-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"39e14f09-1088-4009-a899-1b8da89f4f11\") " pod="openstack/nova-metadata-0" Mar 19 12:38:03.560693 master-0 kubenswrapper[31830]: I0319 12:38:03.507850 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39e14f09-1088-4009-a899-1b8da89f4f11-config-data\") pod \"nova-metadata-0\" (UID: \"39e14f09-1088-4009-a899-1b8da89f4f11\") " pod="openstack/nova-metadata-0" Mar 19 12:38:03.560693 master-0 kubenswrapper[31830]: I0319 12:38:03.507872 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdqlq\" (UniqueName: \"kubernetes.io/projected/4a704971-dde7-4ffa-a887-5e8067b964bd-kube-api-access-sdqlq\") pod \"nova-api-0\" (UID: 
\"4a704971-dde7-4ffa-a887-5e8067b964bd\") " pod="openstack/nova-api-0" Mar 19 12:38:03.560693 master-0 kubenswrapper[31830]: I0319 12:38:03.508910 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39e14f09-1088-4009-a899-1b8da89f4f11-logs\") pod \"nova-metadata-0\" (UID: \"39e14f09-1088-4009-a899-1b8da89f4f11\") " pod="openstack/nova-metadata-0" Mar 19 12:38:03.560693 master-0 kubenswrapper[31830]: I0319 12:38:03.514944 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39e14f09-1088-4009-a899-1b8da89f4f11-config-data\") pod \"nova-metadata-0\" (UID: \"39e14f09-1088-4009-a899-1b8da89f4f11\") " pod="openstack/nova-metadata-0" Mar 19 12:38:03.560693 master-0 kubenswrapper[31830]: I0319 12:38:03.518961 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 12:38:03.560693 master-0 kubenswrapper[31830]: I0319 12:38:03.520966 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 19 12:38:03.560693 master-0 kubenswrapper[31830]: I0319 12:38:03.524619 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39e14f09-1088-4009-a899-1b8da89f4f11-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"39e14f09-1088-4009-a899-1b8da89f4f11\") " pod="openstack/nova-metadata-0" Mar 19 12:38:03.560693 master-0 kubenswrapper[31830]: I0319 12:38:03.541280 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 12:38:03.561470 master-0 kubenswrapper[31830]: I0319 12:38:03.561142 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 19 12:38:03.582840 master-0 kubenswrapper[31830]: I0319 12:38:03.563463 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 19 12:38:03.582840 master-0 kubenswrapper[31830]: I0319 12:38:03.568057 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 19 12:38:03.582840 master-0 kubenswrapper[31830]: I0319 12:38:03.572643 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 19 12:38:03.612002 master-0 kubenswrapper[31830]: I0319 12:38:03.611152 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a704971-dde7-4ffa-a887-5e8067b964bd-config-data\") pod \"nova-api-0\" (UID: \"4a704971-dde7-4ffa-a887-5e8067b964bd\") " pod="openstack/nova-api-0" Mar 19 12:38:03.612002 master-0 kubenswrapper[31830]: I0319 12:38:03.611206 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdqlq\" (UniqueName: \"kubernetes.io/projected/4a704971-dde7-4ffa-a887-5e8067b964bd-kube-api-access-sdqlq\") pod \"nova-api-0\" (UID: \"4a704971-dde7-4ffa-a887-5e8067b964bd\") " pod="openstack/nova-api-0" Mar 19 12:38:03.612002 master-0 kubenswrapper[31830]: I0319 12:38:03.611277 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a704971-dde7-4ffa-a887-5e8067b964bd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4a704971-dde7-4ffa-a887-5e8067b964bd\") " pod="openstack/nova-api-0" Mar 19 12:38:03.612002 master-0 kubenswrapper[31830]: I0319 12:38:03.611374 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/380fc05c-56b2-4e38-8601-bca5c49a343e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"380fc05c-56b2-4e38-8601-bca5c49a343e\") " pod="openstack/nova-scheduler-0" Mar 19 12:38:03.612002 master-0 kubenswrapper[31830]: I0319 12:38:03.611416 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z646q\" (UniqueName: \"kubernetes.io/projected/380fc05c-56b2-4e38-8601-bca5c49a343e-kube-api-access-z646q\") pod \"nova-scheduler-0\" (UID: \"380fc05c-56b2-4e38-8601-bca5c49a343e\") " pod="openstack/nova-scheduler-0" Mar 19 12:38:03.612002 master-0 kubenswrapper[31830]: I0319 12:38:03.611497 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a704971-dde7-4ffa-a887-5e8067b964bd-logs\") pod \"nova-api-0\" (UID: \"4a704971-dde7-4ffa-a887-5e8067b964bd\") " pod="openstack/nova-api-0" Mar 19 12:38:03.612002 master-0 kubenswrapper[31830]: I0319 12:38:03.611582 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/380fc05c-56b2-4e38-8601-bca5c49a343e-config-data\") pod \"nova-scheduler-0\" (UID: \"380fc05c-56b2-4e38-8601-bca5c49a343e\") " pod="openstack/nova-scheduler-0" Mar 19 12:38:03.612449 master-0 kubenswrapper[31830]: I0319 12:38:03.612313 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a704971-dde7-4ffa-a887-5e8067b964bd-logs\") pod \"nova-api-0\" (UID: \"4a704971-dde7-4ffa-a887-5e8067b964bd\") " pod="openstack/nova-api-0" Mar 19 12:38:03.629007 master-0 kubenswrapper[31830]: I0319 12:38:03.628030 31830 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/dnsmasq-dns-5b44cf4869-grng7"] Mar 19 12:38:03.632875 master-0 kubenswrapper[31830]: I0319 12:38:03.630288 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b44cf4869-grng7" Mar 19 12:38:03.644960 master-0 kubenswrapper[31830]: I0319 12:38:03.644452 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b44cf4869-grng7"] Mar 19 12:38:03.656387 master-0 kubenswrapper[31830]: I0319 12:38:03.656083 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a704971-dde7-4ffa-a887-5e8067b964bd-config-data\") pod \"nova-api-0\" (UID: \"4a704971-dde7-4ffa-a887-5e8067b964bd\") " pod="openstack/nova-api-0" Mar 19 12:38:03.666587 master-0 kubenswrapper[31830]: I0319 12:38:03.666269 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a704971-dde7-4ffa-a887-5e8067b964bd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4a704971-dde7-4ffa-a887-5e8067b964bd\") " pod="openstack/nova-api-0" Mar 19 12:38:03.768319 master-0 kubenswrapper[31830]: I0319 12:38:03.768277 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 19 12:38:03.785843 master-0 kubenswrapper[31830]: I0319 12:38:03.780391 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-edpm-b\") pod \"dnsmasq-dns-5b44cf4869-grng7\" (UID: \"aadf7978-e684-447a-897d-5e643ecbd822\") " pod="openstack/dnsmasq-dns-5b44cf4869-grng7" Mar 19 12:38:03.785843 master-0 kubenswrapper[31830]: I0319 12:38:03.780473 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-edpm-a\") pod \"dnsmasq-dns-5b44cf4869-grng7\" (UID: \"aadf7978-e684-447a-897d-5e643ecbd822\") " pod="openstack/dnsmasq-dns-5b44cf4869-grng7" Mar 19 12:38:03.785843 master-0 kubenswrapper[31830]: I0319 12:38:03.780563 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-ovsdbserver-sb\") pod \"dnsmasq-dns-5b44cf4869-grng7\" (UID: \"aadf7978-e684-447a-897d-5e643ecbd822\") " pod="openstack/dnsmasq-dns-5b44cf4869-grng7" Mar 19 12:38:03.785843 master-0 kubenswrapper[31830]: I0319 12:38:03.780635 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8g62\" (UniqueName: \"kubernetes.io/projected/e87b78cf-8720-4f07-8bb5-e8a2de404fea-kube-api-access-k8g62\") pod \"nova-cell1-novncproxy-0\" (UID: \"e87b78cf-8720-4f07-8bb5-e8a2de404fea\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 12:38:03.785843 master-0 kubenswrapper[31830]: I0319 12:38:03.780680 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e87b78cf-8720-4f07-8bb5-e8a2de404fea-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e87b78cf-8720-4f07-8bb5-e8a2de404fea\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 12:38:03.785843 master-0 kubenswrapper[31830]: I0319 12:38:03.780712 31830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-dns-svc\") pod \"dnsmasq-dns-5b44cf4869-grng7\" (UID: \"aadf7978-e684-447a-897d-5e643ecbd822\") " pod="openstack/dnsmasq-dns-5b44cf4869-grng7" Mar 19 12:38:03.785843 master-0 kubenswrapper[31830]: I0319 12:38:03.780856 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/380fc05c-56b2-4e38-8601-bca5c49a343e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"380fc05c-56b2-4e38-8601-bca5c49a343e\") " pod="openstack/nova-scheduler-0" Mar 19 12:38:03.785843 master-0 kubenswrapper[31830]: I0319 12:38:03.780920 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z646q\" (UniqueName: \"kubernetes.io/projected/380fc05c-56b2-4e38-8601-bca5c49a343e-kube-api-access-z646q\") pod \"nova-scheduler-0\" (UID: \"380fc05c-56b2-4e38-8601-bca5c49a343e\") " pod="openstack/nova-scheduler-0" Mar 19 12:38:03.785843 master-0 kubenswrapper[31830]: I0319 12:38:03.780996 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-config\") pod \"dnsmasq-dns-5b44cf4869-grng7\" (UID: \"aadf7978-e684-447a-897d-5e643ecbd822\") " pod="openstack/dnsmasq-dns-5b44cf4869-grng7" Mar 19 12:38:03.785843 master-0 kubenswrapper[31830]: I0319 12:38:03.781048 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7grl\" (UniqueName: \"kubernetes.io/projected/aadf7978-e684-447a-897d-5e643ecbd822-kube-api-access-v7grl\") pod \"dnsmasq-dns-5b44cf4869-grng7\" (UID: \"aadf7978-e684-447a-897d-5e643ecbd822\") " pod="openstack/dnsmasq-dns-5b44cf4869-grng7" Mar 19 12:38:03.785843 master-0 kubenswrapper[31830]: I0319 12:38:03.781080 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-ovsdbserver-nb\") pod \"dnsmasq-dns-5b44cf4869-grng7\" (UID: \"aadf7978-e684-447a-897d-5e643ecbd822\") " pod="openstack/dnsmasq-dns-5b44cf4869-grng7" Mar 19 12:38:03.785843 master-0 kubenswrapper[31830]: I0319 12:38:03.781165 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-dns-swift-storage-0\") pod \"dnsmasq-dns-5b44cf4869-grng7\" (UID: \"aadf7978-e684-447a-897d-5e643ecbd822\") " pod="openstack/dnsmasq-dns-5b44cf4869-grng7" Mar 19 12:38:03.785843 master-0 kubenswrapper[31830]: I0319 12:38:03.781236 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/380fc05c-56b2-4e38-8601-bca5c49a343e-config-data\") pod \"nova-scheduler-0\" (UID: \"380fc05c-56b2-4e38-8601-bca5c49a343e\") " pod="openstack/nova-scheduler-0" Mar 19 12:38:03.785843 master-0 kubenswrapper[31830]: I0319 12:38:03.781276 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e87b78cf-8720-4f07-8bb5-e8a2de404fea-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e87b78cf-8720-4f07-8bb5-e8a2de404fea\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 
12:38:03.805335 master-0 kubenswrapper[31830]: I0319 12:38:03.805256 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/380fc05c-56b2-4e38-8601-bca5c49a343e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"380fc05c-56b2-4e38-8601-bca5c49a343e\") " pod="openstack/nova-scheduler-0" Mar 19 12:38:03.814841 master-0 kubenswrapper[31830]: I0319 12:38:03.811180 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdqlq\" (UniqueName: \"kubernetes.io/projected/4a704971-dde7-4ffa-a887-5e8067b964bd-kube-api-access-sdqlq\") pod \"nova-api-0\" (UID: \"4a704971-dde7-4ffa-a887-5e8067b964bd\") " pod="openstack/nova-api-0" Mar 19 12:38:03.825840 master-0 kubenswrapper[31830]: I0319 12:38:03.823662 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/380fc05c-56b2-4e38-8601-bca5c49a343e-config-data\") pod \"nova-scheduler-0\" (UID: \"380fc05c-56b2-4e38-8601-bca5c49a343e\") " pod="openstack/nova-scheduler-0" Mar 19 12:38:03.834834 master-0 kubenswrapper[31830]: I0319 12:38:03.832173 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z646q\" (UniqueName: \"kubernetes.io/projected/380fc05c-56b2-4e38-8601-bca5c49a343e-kube-api-access-z646q\") pod \"nova-scheduler-0\" (UID: \"380fc05c-56b2-4e38-8601-bca5c49a343e\") " pod="openstack/nova-scheduler-0" Mar 19 12:38:03.903934 master-0 kubenswrapper[31830]: I0319 12:38:03.896963 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxrbv\" (UniqueName: \"kubernetes.io/projected/39e14f09-1088-4009-a899-1b8da89f4f11-kube-api-access-wxrbv\") pod \"nova-metadata-0\" (UID: \"39e14f09-1088-4009-a899-1b8da89f4f11\") " pod="openstack/nova-metadata-0" Mar 19 12:38:03.903934 master-0 kubenswrapper[31830]: I0319 12:38:03.897385 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-config\") pod \"dnsmasq-dns-5b44cf4869-grng7\" (UID: \"aadf7978-e684-447a-897d-5e643ecbd822\") " pod="openstack/dnsmasq-dns-5b44cf4869-grng7" Mar 19 12:38:03.903934 master-0 kubenswrapper[31830]: I0319 12:38:03.897425 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7grl\" (UniqueName: \"kubernetes.io/projected/aadf7978-e684-447a-897d-5e643ecbd822-kube-api-access-v7grl\") pod \"dnsmasq-dns-5b44cf4869-grng7\" (UID: \"aadf7978-e684-447a-897d-5e643ecbd822\") " pod="openstack/dnsmasq-dns-5b44cf4869-grng7" Mar 19 12:38:03.903934 master-0 kubenswrapper[31830]: I0319 12:38:03.897447 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-ovsdbserver-nb\") pod \"dnsmasq-dns-5b44cf4869-grng7\" (UID: \"aadf7978-e684-447a-897d-5e643ecbd822\") " pod="openstack/dnsmasq-dns-5b44cf4869-grng7" Mar 19 12:38:03.903934 master-0 kubenswrapper[31830]: I0319 12:38:03.897501 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-dns-swift-storage-0\") pod \"dnsmasq-dns-5b44cf4869-grng7\" (UID: \"aadf7978-e684-447a-897d-5e643ecbd822\") " pod="openstack/dnsmasq-dns-5b44cf4869-grng7" Mar 19 12:38:03.903934 master-0 kubenswrapper[31830]: I0319 
12:38:03.897544 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e87b78cf-8720-4f07-8bb5-e8a2de404fea-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e87b78cf-8720-4f07-8bb5-e8a2de404fea\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 12:38:03.903934 master-0 kubenswrapper[31830]: I0319 12:38:03.897598 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-edpm-b\") pod \"dnsmasq-dns-5b44cf4869-grng7\" (UID: \"aadf7978-e684-447a-897d-5e643ecbd822\") " pod="openstack/dnsmasq-dns-5b44cf4869-grng7" Mar 19 12:38:03.903934 master-0 kubenswrapper[31830]: I0319 12:38:03.897617 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-edpm-a\") pod \"dnsmasq-dns-5b44cf4869-grng7\" (UID: \"aadf7978-e684-447a-897d-5e643ecbd822\") " pod="openstack/dnsmasq-dns-5b44cf4869-grng7" Mar 19 12:38:03.903934 master-0 kubenswrapper[31830]: I0319 12:38:03.897731 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-ovsdbserver-sb\") pod \"dnsmasq-dns-5b44cf4869-grng7\" (UID: \"aadf7978-e684-447a-897d-5e643ecbd822\") " pod="openstack/dnsmasq-dns-5b44cf4869-grng7" Mar 19 12:38:03.903934 master-0 kubenswrapper[31830]: I0319 12:38:03.897785 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8g62\" (UniqueName: \"kubernetes.io/projected/e87b78cf-8720-4f07-8bb5-e8a2de404fea-kube-api-access-k8g62\") pod \"nova-cell1-novncproxy-0\" (UID: \"e87b78cf-8720-4f07-8bb5-e8a2de404fea\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 12:38:03.903934 master-0 kubenswrapper[31830]: I0319 12:38:03.898765 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e87b78cf-8720-4f07-8bb5-e8a2de404fea-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e87b78cf-8720-4f07-8bb5-e8a2de404fea\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 12:38:03.903934 master-0 kubenswrapper[31830]: I0319 12:38:03.899683 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-ovsdbserver-nb\") pod \"dnsmasq-dns-5b44cf4869-grng7\" (UID: \"aadf7978-e684-447a-897d-5e643ecbd822\") " pod="openstack/dnsmasq-dns-5b44cf4869-grng7" Mar 19 12:38:03.903934 master-0 kubenswrapper[31830]: I0319 12:38:03.900509 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-ovsdbserver-sb\") pod \"dnsmasq-dns-5b44cf4869-grng7\" (UID: \"aadf7978-e684-447a-897d-5e643ecbd822\") " pod="openstack/dnsmasq-dns-5b44cf4869-grng7" Mar 19 12:38:03.903934 master-0 kubenswrapper[31830]: I0319 12:38:03.901238 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-edpm-a\") pod \"dnsmasq-dns-5b44cf4869-grng7\" (UID: \"aadf7978-e684-447a-897d-5e643ecbd822\") " pod="openstack/dnsmasq-dns-5b44cf4869-grng7" Mar 19 12:38:03.903934 master-0 kubenswrapper[31830]: I0319 12:38:03.901764 
31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-edpm-b\") pod \"dnsmasq-dns-5b44cf4869-grng7\" (UID: \"aadf7978-e684-447a-897d-5e643ecbd822\") " pod="openstack/dnsmasq-dns-5b44cf4869-grng7" Mar 19 12:38:03.903934 master-0 kubenswrapper[31830]: I0319 12:38:03.901865 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-dns-svc\") pod \"dnsmasq-dns-5b44cf4869-grng7\" (UID: \"aadf7978-e684-447a-897d-5e643ecbd822\") " pod="openstack/dnsmasq-dns-5b44cf4869-grng7" Mar 19 12:38:03.903934 master-0 kubenswrapper[31830]: I0319 12:38:03.902424 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-dns-swift-storage-0\") pod \"dnsmasq-dns-5b44cf4869-grng7\" (UID: \"aadf7978-e684-447a-897d-5e643ecbd822\") " pod="openstack/dnsmasq-dns-5b44cf4869-grng7" Mar 19 12:38:03.903934 master-0 kubenswrapper[31830]: I0319 12:38:03.903636 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e87b78cf-8720-4f07-8bb5-e8a2de404fea-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e87b78cf-8720-4f07-8bb5-e8a2de404fea\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 12:38:03.909668 master-0 kubenswrapper[31830]: I0319 12:38:03.905581 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-dns-svc\") pod \"dnsmasq-dns-5b44cf4869-grng7\" (UID: \"aadf7978-e684-447a-897d-5e643ecbd822\") " pod="openstack/dnsmasq-dns-5b44cf4869-grng7" Mar 19 12:38:03.909668 master-0 kubenswrapper[31830]: I0319 12:38:03.907066 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-config\") pod \"dnsmasq-dns-5b44cf4869-grng7\" (UID: \"aadf7978-e684-447a-897d-5e643ecbd822\") " pod="openstack/dnsmasq-dns-5b44cf4869-grng7" Mar 19 12:38:03.924079 master-0 kubenswrapper[31830]: I0319 12:38:03.923774 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e87b78cf-8720-4f07-8bb5-e8a2de404fea-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e87b78cf-8720-4f07-8bb5-e8a2de404fea\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 12:38:03.926723 master-0 kubenswrapper[31830]: I0319 12:38:03.926668 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8g62\" (UniqueName: \"kubernetes.io/projected/e87b78cf-8720-4f07-8bb5-e8a2de404fea-kube-api-access-k8g62\") pod \"nova-cell1-novncproxy-0\" (UID: \"e87b78cf-8720-4f07-8bb5-e8a2de404fea\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 12:38:03.928861 master-0 kubenswrapper[31830]: I0319 12:38:03.928823 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7grl\" (UniqueName: \"kubernetes.io/projected/aadf7978-e684-447a-897d-5e643ecbd822-kube-api-access-v7grl\") pod \"dnsmasq-dns-5b44cf4869-grng7\" (UID: \"aadf7978-e684-447a-897d-5e643ecbd822\") " pod="openstack/dnsmasq-dns-5b44cf4869-grng7" Mar 19 12:38:03.957661 master-0 kubenswrapper[31830]: I0319 12:38:03.956888 31830 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 19 12:38:04.012980 master-0 kubenswrapper[31830]: I0319 12:38:04.008052 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-mk6fp"] Mar 19 12:38:04.091978 master-0 kubenswrapper[31830]: I0319 12:38:04.088863 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 19 12:38:04.133868 master-0 kubenswrapper[31830]: I0319 12:38:04.132571 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 19 12:38:04.187143 master-0 kubenswrapper[31830]: I0319 12:38:04.184355 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 19 12:38:04.187143 master-0 kubenswrapper[31830]: I0319 12:38:04.186591 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b44cf4869-grng7" Mar 19 12:38:04.296728 master-0 kubenswrapper[31830]: I0319 12:38:04.295861 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mk6fp" event={"ID":"f24450d6-f939-4621-8d88-e0ecc012ebb6","Type":"ContainerStarted","Data":"d57cab916f539e78af61e0ec01ce5f674195bd18850cf2cd87f5d8889bde2fa9"} Mar 19 12:38:04.392023 master-0 kubenswrapper[31830]: I0319 12:38:04.391876 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7qtfk"] Mar 19 12:38:04.394630 master-0 kubenswrapper[31830]: I0319 12:38:04.394598 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7qtfk" Mar 19 12:38:04.400168 master-0 kubenswrapper[31830]: I0319 12:38:04.398220 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Mar 19 12:38:04.400168 master-0 kubenswrapper[31830]: I0319 12:38:04.398281 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 19 12:38:04.513837 master-0 kubenswrapper[31830]: I0319 12:38:04.498572 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7qtfk"] Mar 19 12:38:04.513837 master-0 kubenswrapper[31830]: I0319 12:38:04.500425 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8188767-a3a9-4859-aa0f-bc448a038114-config-data\") pod \"nova-cell1-conductor-db-sync-7qtfk\" (UID: \"d8188767-a3a9-4859-aa0f-bc448a038114\") " pod="openstack/nova-cell1-conductor-db-sync-7qtfk" Mar 19 12:38:04.513837 master-0 kubenswrapper[31830]: I0319 12:38:04.500548 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8188767-a3a9-4859-aa0f-bc448a038114-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7qtfk\" (UID: \"d8188767-a3a9-4859-aa0f-bc448a038114\") " pod="openstack/nova-cell1-conductor-db-sync-7qtfk" Mar 19 12:38:04.513837 master-0 kubenswrapper[31830]: I0319 12:38:04.500603 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8188767-a3a9-4859-aa0f-bc448a038114-scripts\") pod \"nova-cell1-conductor-db-sync-7qtfk\" (UID: \"d8188767-a3a9-4859-aa0f-bc448a038114\") " 
pod="openstack/nova-cell1-conductor-db-sync-7qtfk" Mar 19 12:38:04.513837 master-0 kubenswrapper[31830]: I0319 12:38:04.505457 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx7ft\" (UniqueName: \"kubernetes.io/projected/d8188767-a3a9-4859-aa0f-bc448a038114-kube-api-access-jx7ft\") pod \"nova-cell1-conductor-db-sync-7qtfk\" (UID: \"d8188767-a3a9-4859-aa0f-bc448a038114\") " pod="openstack/nova-cell1-conductor-db-sync-7qtfk" Mar 19 12:38:04.586938 master-0 kubenswrapper[31830]: W0319 12:38:04.581386 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39e14f09_1088_4009_a899_1b8da89f4f11.slice/crio-13af947831e055717798e62e175d10a1734b26edc48babf9dadc64fd5993af30 WatchSource:0}: Error finding container 13af947831e055717798e62e175d10a1734b26edc48babf9dadc64fd5993af30: Status 404 returned error can't find the container with id 13af947831e055717798e62e175d10a1734b26edc48babf9dadc64fd5993af30 Mar 19 12:38:04.612228 master-0 kubenswrapper[31830]: I0319 12:38:04.610018 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8188767-a3a9-4859-aa0f-bc448a038114-config-data\") pod \"nova-cell1-conductor-db-sync-7qtfk\" (UID: \"d8188767-a3a9-4859-aa0f-bc448a038114\") " pod="openstack/nova-cell1-conductor-db-sync-7qtfk" Mar 19 12:38:04.612228 master-0 kubenswrapper[31830]: I0319 12:38:04.611741 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8188767-a3a9-4859-aa0f-bc448a038114-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7qtfk\" (UID: \"d8188767-a3a9-4859-aa0f-bc448a038114\") " pod="openstack/nova-cell1-conductor-db-sync-7qtfk" Mar 19 12:38:04.612228 master-0 kubenswrapper[31830]: I0319 12:38:04.611825 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8188767-a3a9-4859-aa0f-bc448a038114-scripts\") pod \"nova-cell1-conductor-db-sync-7qtfk\" (UID: \"d8188767-a3a9-4859-aa0f-bc448a038114\") " pod="openstack/nova-cell1-conductor-db-sync-7qtfk" Mar 19 12:38:04.612228 master-0 kubenswrapper[31830]: I0319 12:38:04.612137 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx7ft\" (UniqueName: \"kubernetes.io/projected/d8188767-a3a9-4859-aa0f-bc448a038114-kube-api-access-jx7ft\") pod \"nova-cell1-conductor-db-sync-7qtfk\" (UID: \"d8188767-a3a9-4859-aa0f-bc448a038114\") " pod="openstack/nova-cell1-conductor-db-sync-7qtfk" Mar 19 12:38:04.615973 master-0 kubenswrapper[31830]: I0319 12:38:04.615327 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8188767-a3a9-4859-aa0f-bc448a038114-scripts\") pod \"nova-cell1-conductor-db-sync-7qtfk\" (UID: \"d8188767-a3a9-4859-aa0f-bc448a038114\") " pod="openstack/nova-cell1-conductor-db-sync-7qtfk" Mar 19 12:38:04.617850 master-0 kubenswrapper[31830]: I0319 12:38:04.617143 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8188767-a3a9-4859-aa0f-bc448a038114-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7qtfk\" (UID: \"d8188767-a3a9-4859-aa0f-bc448a038114\") " pod="openstack/nova-cell1-conductor-db-sync-7qtfk" Mar 19 12:38:04.623558 master-0 
kubenswrapper[31830]: I0319 12:38:04.623502 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8188767-a3a9-4859-aa0f-bc448a038114-config-data\") pod \"nova-cell1-conductor-db-sync-7qtfk\" (UID: \"d8188767-a3a9-4859-aa0f-bc448a038114\") " pod="openstack/nova-cell1-conductor-db-sync-7qtfk" Mar 19 12:38:04.630533 master-0 kubenswrapper[31830]: I0319 12:38:04.629979 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx7ft\" (UniqueName: \"kubernetes.io/projected/d8188767-a3a9-4859-aa0f-bc448a038114-kube-api-access-jx7ft\") pod \"nova-cell1-conductor-db-sync-7qtfk\" (UID: \"d8188767-a3a9-4859-aa0f-bc448a038114\") " pod="openstack/nova-cell1-conductor-db-sync-7qtfk" Mar 19 12:38:04.650548 master-0 kubenswrapper[31830]: I0319 12:38:04.650249 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 12:38:04.873407 master-0 kubenswrapper[31830]: I0319 12:38:04.865616 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7qtfk" Mar 19 12:38:04.929904 master-0 kubenswrapper[31830]: I0319 12:38:04.929785 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 12:38:05.323975 master-0 kubenswrapper[31830]: I0319 12:38:05.321383 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 19 12:38:05.332398 master-0 kubenswrapper[31830]: I0319 12:38:05.332343 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b44cf4869-grng7"] Mar 19 12:38:05.337600 master-0 kubenswrapper[31830]: I0319 12:38:05.337541 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mk6fp" event={"ID":"f24450d6-f939-4621-8d88-e0ecc012ebb6","Type":"ContainerStarted","Data":"83b29b8f4d632e9bb87c36f92639bab0483a79f1a1f95063473e19c4696969fd"} Mar 19 12:38:05.339402 master-0 kubenswrapper[31830]: I0319 12:38:05.339361 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4a704971-dde7-4ffa-a887-5e8067b964bd","Type":"ContainerStarted","Data":"79e3e6f384167f2ee68f34cd050672a4cf965e042837b69f7206158116109203"} Mar 19 12:38:05.342565 master-0 kubenswrapper[31830]: I0319 12:38:05.342525 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e87b78cf-8720-4f07-8bb5-e8a2de404fea","Type":"ContainerStarted","Data":"1e34ec055a358ffb0c1b73ea8bf17964dd2599c76b4f4932193911c20cc7d4a2"} Mar 19 12:38:05.343488 master-0 kubenswrapper[31830]: I0319 12:38:05.343456 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"380fc05c-56b2-4e38-8601-bca5c49a343e","Type":"ContainerStarted","Data":"ca023c33f14c9574149d38f87c1dd0ae70346868e4ce84e473ac3508720e64fe"} Mar 19 12:38:05.344510 master-0 kubenswrapper[31830]: I0319 12:38:05.344477 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b44cf4869-grng7" event={"ID":"aadf7978-e684-447a-897d-5e643ecbd822","Type":"ContainerStarted","Data":"89c854a619b207ab5d0eff27d2e74a8ccb75d92e5c4aa316f86b5b94686db6a6"} Mar 19 12:38:05.346084 master-0 kubenswrapper[31830]: I0319 12:38:05.346048 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"39e14f09-1088-4009-a899-1b8da89f4f11","Type":"ContainerStarted","Data":"13af947831e055717798e62e175d10a1734b26edc48babf9dadc64fd5993af30"} Mar 19 12:38:05.399845 master-0 kubenswrapper[31830]: I0319 12:38:05.394955 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 19 12:38:05.404918 master-0 kubenswrapper[31830]: I0319 12:38:05.404853 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-mk6fp" podStartSLOduration=3.404831164 podStartE2EDuration="3.404831164s" podCreationTimestamp="2026-03-19 12:38:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:38:05.359672784 +0000 UTC m=+1423.908648978" watchObservedRunningTime="2026-03-19 12:38:05.404831164 +0000 UTC m=+1423.953791858" Mar 19 12:38:05.444219 master-0 kubenswrapper[31830]: I0319 12:38:05.444152 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7qtfk"] Mar 19 12:38:05.454031 master-0 kubenswrapper[31830]: W0319 12:38:05.453980 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8188767_a3a9_4859_aa0f_bc448a038114.slice/crio-e45251ec738bb611fc0adff1f636621e76f1d4e1005303efc6caabc86e0fcdb3 WatchSource:0}: Error finding container e45251ec738bb611fc0adff1f636621e76f1d4e1005303efc6caabc86e0fcdb3: Status 404 returned error can't find the container with id e45251ec738bb611fc0adff1f636621e76f1d4e1005303efc6caabc86e0fcdb3 Mar 19 12:38:06.377825 master-0 kubenswrapper[31830]: I0319 12:38:06.377264 31830 generic.go:334] "Generic (PLEG): container finished" podID="aadf7978-e684-447a-897d-5e643ecbd822" containerID="bfd8a7cb0a093b6f097c3d12098722dacc000d63b3976aea9245c67796f3d520" exitCode=0 Mar 19 12:38:06.377825 master-0 kubenswrapper[31830]: I0319 12:38:06.377349 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b44cf4869-grng7" event={"ID":"aadf7978-e684-447a-897d-5e643ecbd822","Type":"ContainerDied","Data":"bfd8a7cb0a093b6f097c3d12098722dacc000d63b3976aea9245c67796f3d520"} Mar 19 12:38:06.381828 master-0 kubenswrapper[31830]: I0319 12:38:06.379661 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7qtfk" event={"ID":"d8188767-a3a9-4859-aa0f-bc448a038114","Type":"ContainerStarted","Data":"8ad1ad251def71f89d1bda5f55c372d1b9191806f1cb572e601abe10afabdfe9"} Mar 19 12:38:06.381828 master-0 kubenswrapper[31830]: I0319 12:38:06.379690 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7qtfk" event={"ID":"d8188767-a3a9-4859-aa0f-bc448a038114","Type":"ContainerStarted","Data":"e45251ec738bb611fc0adff1f636621e76f1d4e1005303efc6caabc86e0fcdb3"} Mar 19 12:38:06.522884 master-0 kubenswrapper[31830]: I0319 12:38:06.521436 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-7qtfk" podStartSLOduration=2.521415502 podStartE2EDuration="2.521415502s" podCreationTimestamp="2026-03-19 12:38:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:38:06.477048166 +0000 UTC m=+1425.026008870" watchObservedRunningTime="2026-03-19 12:38:06.521415502 +0000 UTC m=+1425.070376206" Mar 19 12:38:07.393832 master-0 kubenswrapper[31830]: I0319 
12:38:07.393752 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b44cf4869-grng7" event={"ID":"aadf7978-e684-447a-897d-5e643ecbd822","Type":"ContainerStarted","Data":"c158841c52e816832862d7c901540c7378c2736724961e566c7ae84ca116337a"} Mar 19 12:38:07.851483 master-0 kubenswrapper[31830]: I0319 12:38:07.851381 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b44cf4869-grng7" podStartSLOduration=4.851362998 podStartE2EDuration="4.851362998s" podCreationTimestamp="2026-03-19 12:38:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:38:07.845407203 +0000 UTC m=+1426.394367907" watchObservedRunningTime="2026-03-19 12:38:07.851362998 +0000 UTC m=+1426.400323692" Mar 19 12:38:08.289358 master-0 kubenswrapper[31830]: I0319 12:38:08.289009 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 12:38:08.314893 master-0 kubenswrapper[31830]: I0319 12:38:08.314814 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 19 12:38:08.404968 master-0 kubenswrapper[31830]: I0319 12:38:08.404915 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b44cf4869-grng7" Mar 19 12:38:10.427643 master-0 kubenswrapper[31830]: I0319 12:38:10.427585 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"39e14f09-1088-4009-a899-1b8da89f4f11","Type":"ContainerStarted","Data":"22c4fb6620acf84c1b66620dd4fc606e5d3c3eb0563989efb9e7c44ed6d11941"} Mar 19 12:38:10.428152 master-0 kubenswrapper[31830]: I0319 12:38:10.427650 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"39e14f09-1088-4009-a899-1b8da89f4f11","Type":"ContainerStarted","Data":"d16ae52ff2218baf31bac38cc1fa70ba196306a010d30bb19eac09542baa9187"} Mar 19 12:38:10.428152 master-0 kubenswrapper[31830]: I0319 12:38:10.427645 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="39e14f09-1088-4009-a899-1b8da89f4f11" containerName="nova-metadata-log" containerID="cri-o://d16ae52ff2218baf31bac38cc1fa70ba196306a010d30bb19eac09542baa9187" gracePeriod=30 Mar 19 12:38:10.428152 master-0 kubenswrapper[31830]: I0319 12:38:10.427701 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="39e14f09-1088-4009-a899-1b8da89f4f11" containerName="nova-metadata-metadata" containerID="cri-o://22c4fb6620acf84c1b66620dd4fc606e5d3c3eb0563989efb9e7c44ed6d11941" gracePeriod=30 Mar 19 12:38:10.437738 master-0 kubenswrapper[31830]: I0319 12:38:10.437189 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4a704971-dde7-4ffa-a887-5e8067b964bd","Type":"ContainerStarted","Data":"12e1ab58c399bb832837cc2271e06fcd742d23055321f8ccabab84286e4af1c8"} Mar 19 12:38:10.437738 master-0 kubenswrapper[31830]: I0319 12:38:10.437252 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4a704971-dde7-4ffa-a887-5e8067b964bd","Type":"ContainerStarted","Data":"619addc1e55430132d16f8617693ff97d3e98ff34bf359fc6e96a4e8dd573573"} Mar 19 12:38:10.439660 master-0 kubenswrapper[31830]: I0319 12:38:10.439389 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"e87b78cf-8720-4f07-8bb5-e8a2de404fea","Type":"ContainerStarted","Data":"b3ab4fde11634d6fc2d4a0ca60dc25a869215c2e6c7cd820e6dcbd49e69d9632"} Mar 19 12:38:10.439660 master-0 kubenswrapper[31830]: I0319 12:38:10.439532 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="e87b78cf-8720-4f07-8bb5-e8a2de404fea" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://b3ab4fde11634d6fc2d4a0ca60dc25a869215c2e6c7cd820e6dcbd49e69d9632" gracePeriod=30 Mar 19 12:38:10.443043 master-0 kubenswrapper[31830]: I0319 12:38:10.443008 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"380fc05c-56b2-4e38-8601-bca5c49a343e","Type":"ContainerStarted","Data":"7a4b98ddef5c00787fa798c47862a461d1e8e39d4e202113a4ec244fcf836ca8"} Mar 19 12:38:10.473030 master-0 kubenswrapper[31830]: I0319 12:38:10.472951 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.824046804 podStartE2EDuration="7.472929379s" podCreationTimestamp="2026-03-19 12:38:03 +0000 UTC" firstStartedPulling="2026-03-19 12:38:04.585664799 +0000 UTC m=+1423.134625493" lastFinishedPulling="2026-03-19 12:38:09.234547364 +0000 UTC m=+1427.783508068" observedRunningTime="2026-03-19 12:38:10.460570166 +0000 UTC m=+1429.009530870" watchObservedRunningTime="2026-03-19 12:38:10.472929379 +0000 UTC m=+1429.021890083" Mar 19 12:38:10.510996 master-0 kubenswrapper[31830]: I0319 12:38:10.510421 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.552666452 podStartE2EDuration="7.510403141s" podCreationTimestamp="2026-03-19 12:38:03 +0000 UTC" firstStartedPulling="2026-03-19 12:38:05.270387765 +0000 UTC m=+1423.819348469" lastFinishedPulling="2026-03-19 12:38:09.228124454 +0000 UTC m=+1427.777085158" observedRunningTime="2026-03-19 12:38:10.482431874 +0000 UTC m=+1429.031392578" watchObservedRunningTime="2026-03-19 12:38:10.510403141 +0000 UTC m=+1429.059363845" Mar 19 12:38:10.511209 master-0 kubenswrapper[31830]: I0319 12:38:10.511095 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.588287466 podStartE2EDuration="7.511089223s" podCreationTimestamp="2026-03-19 12:38:03 +0000 UTC" firstStartedPulling="2026-03-19 12:38:05.309192418 +0000 UTC m=+1423.858153112" lastFinishedPulling="2026-03-19 12:38:09.231994165 +0000 UTC m=+1427.780954869" observedRunningTime="2026-03-19 12:38:10.50035947 +0000 UTC m=+1429.049320194" watchObservedRunningTime="2026-03-19 12:38:10.511089223 +0000 UTC m=+1429.060049917" Mar 19 12:38:10.532598 master-0 kubenswrapper[31830]: I0319 12:38:10.532508 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.233173544 podStartE2EDuration="7.532483136s" podCreationTimestamp="2026-03-19 12:38:03 +0000 UTC" firstStartedPulling="2026-03-19 12:38:04.933985752 +0000 UTC m=+1423.482946456" lastFinishedPulling="2026-03-19 12:38:09.233295344 +0000 UTC m=+1427.782256048" observedRunningTime="2026-03-19 12:38:10.518295256 +0000 UTC m=+1429.067255970" watchObservedRunningTime="2026-03-19 12:38:10.532483136 +0000 UTC m=+1429.081443840" Mar 19 12:38:11.153919 master-0 kubenswrapper[31830]: I0319 12:38:11.153842 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 19 12:38:11.340723 master-0 kubenswrapper[31830]: I0319 12:38:11.340542 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39e14f09-1088-4009-a899-1b8da89f4f11-combined-ca-bundle\") pod \"39e14f09-1088-4009-a899-1b8da89f4f11\" (UID: \"39e14f09-1088-4009-a899-1b8da89f4f11\") " Mar 19 12:38:11.340723 master-0 kubenswrapper[31830]: I0319 12:38:11.340717 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39e14f09-1088-4009-a899-1b8da89f4f11-logs\") pod \"39e14f09-1088-4009-a899-1b8da89f4f11\" (UID: \"39e14f09-1088-4009-a899-1b8da89f4f11\") " Mar 19 12:38:11.341090 master-0 kubenswrapper[31830]: I0319 12:38:11.340758 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxrbv\" (UniqueName: \"kubernetes.io/projected/39e14f09-1088-4009-a899-1b8da89f4f11-kube-api-access-wxrbv\") pod \"39e14f09-1088-4009-a899-1b8da89f4f11\" (UID: \"39e14f09-1088-4009-a899-1b8da89f4f11\") " Mar 19 12:38:11.341090 master-0 kubenswrapper[31830]: I0319 12:38:11.340822 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39e14f09-1088-4009-a899-1b8da89f4f11-config-data\") pod \"39e14f09-1088-4009-a899-1b8da89f4f11\" (UID: \"39e14f09-1088-4009-a899-1b8da89f4f11\") " Mar 19 12:38:11.342058 master-0 kubenswrapper[31830]: I0319 12:38:11.342007 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39e14f09-1088-4009-a899-1b8da89f4f11-logs" (OuterVolumeSpecName: "logs") pod "39e14f09-1088-4009-a899-1b8da89f4f11" (UID: "39e14f09-1088-4009-a899-1b8da89f4f11"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 12:38:11.345907 master-0 kubenswrapper[31830]: I0319 12:38:11.344539 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39e14f09-1088-4009-a899-1b8da89f4f11-kube-api-access-wxrbv" (OuterVolumeSpecName: "kube-api-access-wxrbv") pod "39e14f09-1088-4009-a899-1b8da89f4f11" (UID: "39e14f09-1088-4009-a899-1b8da89f4f11"). InnerVolumeSpecName "kube-api-access-wxrbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:38:11.370647 master-0 kubenswrapper[31830]: I0319 12:38:11.370584 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39e14f09-1088-4009-a899-1b8da89f4f11-config-data" (OuterVolumeSpecName: "config-data") pod "39e14f09-1088-4009-a899-1b8da89f4f11" (UID: "39e14f09-1088-4009-a899-1b8da89f4f11"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:38:11.373740 master-0 kubenswrapper[31830]: I0319 12:38:11.373676 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39e14f09-1088-4009-a899-1b8da89f4f11-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39e14f09-1088-4009-a899-1b8da89f4f11" (UID: "39e14f09-1088-4009-a899-1b8da89f4f11"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:38:11.446043 master-0 kubenswrapper[31830]: I0319 12:38:11.445970 31830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39e14f09-1088-4009-a899-1b8da89f4f11-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:11.446043 master-0 kubenswrapper[31830]: I0319 12:38:11.446032 31830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39e14f09-1088-4009-a899-1b8da89f4f11-logs\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:11.446043 master-0 kubenswrapper[31830]: I0319 12:38:11.446049 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxrbv\" (UniqueName: \"kubernetes.io/projected/39e14f09-1088-4009-a899-1b8da89f4f11-kube-api-access-wxrbv\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:11.446736 master-0 kubenswrapper[31830]: I0319 12:38:11.446066 31830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39e14f09-1088-4009-a899-1b8da89f4f11-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:11.465990 master-0 kubenswrapper[31830]: I0319 12:38:11.465927 31830 generic.go:334] "Generic (PLEG): container finished" podID="39e14f09-1088-4009-a899-1b8da89f4f11" containerID="22c4fb6620acf84c1b66620dd4fc606e5d3c3eb0563989efb9e7c44ed6d11941" exitCode=0 Mar 19 12:38:11.465990 master-0 kubenswrapper[31830]: I0319 12:38:11.465970 31830 generic.go:334] "Generic (PLEG): container finished" podID="39e14f09-1088-4009-a899-1b8da89f4f11" containerID="d16ae52ff2218baf31bac38cc1fa70ba196306a010d30bb19eac09542baa9187" exitCode=143 Mar 19 12:38:11.466281 master-0 kubenswrapper[31830]: I0319 12:38:11.466023 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"39e14f09-1088-4009-a899-1b8da89f4f11","Type":"ContainerDied","Data":"22c4fb6620acf84c1b66620dd4fc606e5d3c3eb0563989efb9e7c44ed6d11941"} Mar 19 12:38:11.466281 master-0 kubenswrapper[31830]: I0319 12:38:11.466067 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 19 12:38:11.466281 master-0 kubenswrapper[31830]: I0319 12:38:11.466107 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"39e14f09-1088-4009-a899-1b8da89f4f11","Type":"ContainerDied","Data":"d16ae52ff2218baf31bac38cc1fa70ba196306a010d30bb19eac09542baa9187"} Mar 19 12:38:11.466281 master-0 kubenswrapper[31830]: I0319 12:38:11.466124 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"39e14f09-1088-4009-a899-1b8da89f4f11","Type":"ContainerDied","Data":"13af947831e055717798e62e175d10a1734b26edc48babf9dadc64fd5993af30"} Mar 19 12:38:11.466281 master-0 kubenswrapper[31830]: I0319 12:38:11.466151 31830 scope.go:117] "RemoveContainer" containerID="22c4fb6620acf84c1b66620dd4fc606e5d3c3eb0563989efb9e7c44ed6d11941" Mar 19 12:38:11.526862 master-0 kubenswrapper[31830]: I0319 12:38:11.520843 31830 scope.go:117] "RemoveContainer" containerID="d16ae52ff2218baf31bac38cc1fa70ba196306a010d30bb19eac09542baa9187" Mar 19 12:38:11.526862 master-0 kubenswrapper[31830]: I0319 12:38:11.525450 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 12:38:11.536978 master-0 kubenswrapper[31830]: I0319 12:38:11.536922 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 12:38:11.571917 master-0 kubenswrapper[31830]: I0319 12:38:11.571851 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 19 12:38:11.576971 master-0 kubenswrapper[31830]: I0319 12:38:11.576854 31830 scope.go:117] "RemoveContainer" containerID="22c4fb6620acf84c1b66620dd4fc606e5d3c3eb0563989efb9e7c44ed6d11941" Mar 19 12:38:11.577686 master-0 kubenswrapper[31830]: E0319 12:38:11.577578 31830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22c4fb6620acf84c1b66620dd4fc606e5d3c3eb0563989efb9e7c44ed6d11941\": container with ID starting with 22c4fb6620acf84c1b66620dd4fc606e5d3c3eb0563989efb9e7c44ed6d11941 not found: ID does not exist" containerID="22c4fb6620acf84c1b66620dd4fc606e5d3c3eb0563989efb9e7c44ed6d11941" Mar 19 12:38:11.577686 master-0 kubenswrapper[31830]: I0319 12:38:11.577625 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22c4fb6620acf84c1b66620dd4fc606e5d3c3eb0563989efb9e7c44ed6d11941"} err="failed to get container status \"22c4fb6620acf84c1b66620dd4fc606e5d3c3eb0563989efb9e7c44ed6d11941\": rpc error: code = NotFound desc = could not find container \"22c4fb6620acf84c1b66620dd4fc606e5d3c3eb0563989efb9e7c44ed6d11941\": container with ID starting with 22c4fb6620acf84c1b66620dd4fc606e5d3c3eb0563989efb9e7c44ed6d11941 not found: ID does not exist" Mar 19 12:38:11.577686 master-0 kubenswrapper[31830]: I0319 12:38:11.577647 31830 scope.go:117] "RemoveContainer" containerID="d16ae52ff2218baf31bac38cc1fa70ba196306a010d30bb19eac09542baa9187" Mar 19 12:38:11.578991 master-0 kubenswrapper[31830]: E0319 12:38:11.578904 31830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d16ae52ff2218baf31bac38cc1fa70ba196306a010d30bb19eac09542baa9187\": container with ID starting with d16ae52ff2218baf31bac38cc1fa70ba196306a010d30bb19eac09542baa9187 not found: ID does not exist" containerID="d16ae52ff2218baf31bac38cc1fa70ba196306a010d30bb19eac09542baa9187" Mar 19 12:38:11.578991 master-0 
kubenswrapper[31830]: I0319 12:38:11.578931 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d16ae52ff2218baf31bac38cc1fa70ba196306a010d30bb19eac09542baa9187"} err="failed to get container status \"d16ae52ff2218baf31bac38cc1fa70ba196306a010d30bb19eac09542baa9187\": rpc error: code = NotFound desc = could not find container \"d16ae52ff2218baf31bac38cc1fa70ba196306a010d30bb19eac09542baa9187\": container with ID starting with d16ae52ff2218baf31bac38cc1fa70ba196306a010d30bb19eac09542baa9187 not found: ID does not exist" Mar 19 12:38:11.579237 master-0 kubenswrapper[31830]: I0319 12:38:11.579180 31830 scope.go:117] "RemoveContainer" containerID="22c4fb6620acf84c1b66620dd4fc606e5d3c3eb0563989efb9e7c44ed6d11941" Mar 19 12:38:11.581491 master-0 kubenswrapper[31830]: I0319 12:38:11.581256 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22c4fb6620acf84c1b66620dd4fc606e5d3c3eb0563989efb9e7c44ed6d11941"} err="failed to get container status \"22c4fb6620acf84c1b66620dd4fc606e5d3c3eb0563989efb9e7c44ed6d11941\": rpc error: code = NotFound desc = could not find container \"22c4fb6620acf84c1b66620dd4fc606e5d3c3eb0563989efb9e7c44ed6d11941\": container with ID starting with 22c4fb6620acf84c1b66620dd4fc606e5d3c3eb0563989efb9e7c44ed6d11941 not found: ID does not exist" Mar 19 12:38:11.581491 master-0 kubenswrapper[31830]: I0319 12:38:11.581280 31830 scope.go:117] "RemoveContainer" containerID="d16ae52ff2218baf31bac38cc1fa70ba196306a010d30bb19eac09542baa9187" Mar 19 12:38:11.582101 master-0 kubenswrapper[31830]: I0319 12:38:11.582082 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d16ae52ff2218baf31bac38cc1fa70ba196306a010d30bb19eac09542baa9187"} err="failed to get container status \"d16ae52ff2218baf31bac38cc1fa70ba196306a010d30bb19eac09542baa9187\": rpc error: code = NotFound desc = could not find container \"d16ae52ff2218baf31bac38cc1fa70ba196306a010d30bb19eac09542baa9187\": container with ID starting with d16ae52ff2218baf31bac38cc1fa70ba196306a010d30bb19eac09542baa9187 not found: ID does not exist" Mar 19 12:38:11.606140 master-0 kubenswrapper[31830]: E0319 12:38:11.606006 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39e14f09-1088-4009-a899-1b8da89f4f11" containerName="nova-metadata-metadata" Mar 19 12:38:11.607136 master-0 kubenswrapper[31830]: I0319 12:38:11.607115 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="39e14f09-1088-4009-a899-1b8da89f4f11" containerName="nova-metadata-metadata" Mar 19 12:38:11.610487 master-0 kubenswrapper[31830]: E0319 12:38:11.610447 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39e14f09-1088-4009-a899-1b8da89f4f11" containerName="nova-metadata-log" Mar 19 12:38:11.610708 master-0 kubenswrapper[31830]: I0319 12:38:11.610691 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="39e14f09-1088-4009-a899-1b8da89f4f11" containerName="nova-metadata-log" Mar 19 12:38:11.611825 master-0 kubenswrapper[31830]: I0319 12:38:11.611777 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="39e14f09-1088-4009-a899-1b8da89f4f11" containerName="nova-metadata-log" Mar 19 12:38:11.612043 master-0 kubenswrapper[31830]: I0319 12:38:11.612024 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="39e14f09-1088-4009-a899-1b8da89f4f11" containerName="nova-metadata-metadata" Mar 19 12:38:11.634551 master-0 kubenswrapper[31830]: I0319 
12:38:11.634495 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 12:38:11.636035 master-0 kubenswrapper[31830]: I0319 12:38:11.636014 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 19 12:38:11.639200 master-0 kubenswrapper[31830]: I0319 12:38:11.639170 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 19 12:38:11.639332 master-0 kubenswrapper[31830]: I0319 12:38:11.639292 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 19 12:38:11.700185 master-0 kubenswrapper[31830]: I0319 12:38:11.700060 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39e14f09-1088-4009-a899-1b8da89f4f11" path="/var/lib/kubelet/pods/39e14f09-1088-4009-a899-1b8da89f4f11/volumes" Mar 19 12:38:11.761699 master-0 kubenswrapper[31830]: I0319 12:38:11.761626 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/215a74f3-ce0f-4f33-b327-ac7df448ec62-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"215a74f3-ce0f-4f33-b327-ac7df448ec62\") " pod="openstack/nova-metadata-0" Mar 19 12:38:11.761937 master-0 kubenswrapper[31830]: I0319 12:38:11.761765 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/215a74f3-ce0f-4f33-b327-ac7df448ec62-logs\") pod \"nova-metadata-0\" (UID: \"215a74f3-ce0f-4f33-b327-ac7df448ec62\") " pod="openstack/nova-metadata-0" Mar 19 12:38:11.761937 master-0 kubenswrapper[31830]: I0319 12:38:11.761842 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/215a74f3-ce0f-4f33-b327-ac7df448ec62-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"215a74f3-ce0f-4f33-b327-ac7df448ec62\") " pod="openstack/nova-metadata-0" Mar 19 12:38:11.761937 master-0 kubenswrapper[31830]: I0319 12:38:11.761888 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5c9v\" (UniqueName: \"kubernetes.io/projected/215a74f3-ce0f-4f33-b327-ac7df448ec62-kube-api-access-v5c9v\") pod \"nova-metadata-0\" (UID: \"215a74f3-ce0f-4f33-b327-ac7df448ec62\") " pod="openstack/nova-metadata-0" Mar 19 12:38:11.762095 master-0 kubenswrapper[31830]: I0319 12:38:11.761981 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/215a74f3-ce0f-4f33-b327-ac7df448ec62-config-data\") pod \"nova-metadata-0\" (UID: \"215a74f3-ce0f-4f33-b327-ac7df448ec62\") " pod="openstack/nova-metadata-0" Mar 19 12:38:11.864445 master-0 kubenswrapper[31830]: I0319 12:38:11.864253 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/215a74f3-ce0f-4f33-b327-ac7df448ec62-logs\") pod \"nova-metadata-0\" (UID: \"215a74f3-ce0f-4f33-b327-ac7df448ec62\") " pod="openstack/nova-metadata-0" Mar 19 12:38:11.864445 master-0 kubenswrapper[31830]: I0319 12:38:11.864379 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/215a74f3-ce0f-4f33-b327-ac7df448ec62-combined-ca-bundle\") pod 
\"nova-metadata-0\" (UID: \"215a74f3-ce0f-4f33-b327-ac7df448ec62\") " pod="openstack/nova-metadata-0" Mar 19 12:38:11.864445 master-0 kubenswrapper[31830]: I0319 12:38:11.864422 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5c9v\" (UniqueName: \"kubernetes.io/projected/215a74f3-ce0f-4f33-b327-ac7df448ec62-kube-api-access-v5c9v\") pod \"nova-metadata-0\" (UID: \"215a74f3-ce0f-4f33-b327-ac7df448ec62\") " pod="openstack/nova-metadata-0" Mar 19 12:38:11.864757 master-0 kubenswrapper[31830]: I0319 12:38:11.864708 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/215a74f3-ce0f-4f33-b327-ac7df448ec62-config-data\") pod \"nova-metadata-0\" (UID: \"215a74f3-ce0f-4f33-b327-ac7df448ec62\") " pod="openstack/nova-metadata-0" Mar 19 12:38:11.865056 master-0 kubenswrapper[31830]: I0319 12:38:11.865023 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/215a74f3-ce0f-4f33-b327-ac7df448ec62-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"215a74f3-ce0f-4f33-b327-ac7df448ec62\") " pod="openstack/nova-metadata-0" Mar 19 12:38:11.867001 master-0 kubenswrapper[31830]: I0319 12:38:11.866925 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/215a74f3-ce0f-4f33-b327-ac7df448ec62-logs\") pod \"nova-metadata-0\" (UID: \"215a74f3-ce0f-4f33-b327-ac7df448ec62\") " pod="openstack/nova-metadata-0" Mar 19 12:38:11.871504 master-0 kubenswrapper[31830]: I0319 12:38:11.871439 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/215a74f3-ce0f-4f33-b327-ac7df448ec62-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"215a74f3-ce0f-4f33-b327-ac7df448ec62\") " pod="openstack/nova-metadata-0" Mar 19 12:38:11.874290 master-0 kubenswrapper[31830]: I0319 12:38:11.874239 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/215a74f3-ce0f-4f33-b327-ac7df448ec62-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"215a74f3-ce0f-4f33-b327-ac7df448ec62\") " pod="openstack/nova-metadata-0" Mar 19 12:38:11.875657 master-0 kubenswrapper[31830]: I0319 12:38:11.875617 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/215a74f3-ce0f-4f33-b327-ac7df448ec62-config-data\") pod \"nova-metadata-0\" (UID: \"215a74f3-ce0f-4f33-b327-ac7df448ec62\") " pod="openstack/nova-metadata-0" Mar 19 12:38:11.883627 master-0 kubenswrapper[31830]: I0319 12:38:11.883583 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5c9v\" (UniqueName: \"kubernetes.io/projected/215a74f3-ce0f-4f33-b327-ac7df448ec62-kube-api-access-v5c9v\") pod \"nova-metadata-0\" (UID: \"215a74f3-ce0f-4f33-b327-ac7df448ec62\") " pod="openstack/nova-metadata-0" Mar 19 12:38:11.969388 master-0 kubenswrapper[31830]: I0319 12:38:11.968313 31830 util.go:30] "No sandbox for pod can be found. 
Mar 19 12:38:11.969388 master-0 kubenswrapper[31830]: I0319 12:38:11.968313 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 19 12:38:12.474935 master-0 kubenswrapper[31830]: I0319 12:38:12.474871 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 19 12:38:13.513671 master-0 kubenswrapper[31830]: I0319 12:38:13.512128 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"215a74f3-ce0f-4f33-b327-ac7df448ec62","Type":"ContainerStarted","Data":"a94c157d67b40c566f11a375cac6dec11d3aee2bcec901e9bd61aabc28508a30"}
Mar 19 12:38:13.513671 master-0 kubenswrapper[31830]: I0319 12:38:13.512199 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"215a74f3-ce0f-4f33-b327-ac7df448ec62","Type":"ContainerStarted","Data":"23beebf4cd5475223e9e4d06ce9af7fc562a586a2c1f8e981fe30f8ea9134d28"}
Mar 19 12:38:13.513671 master-0 kubenswrapper[31830]: I0319 12:38:13.512213 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"215a74f3-ce0f-4f33-b327-ac7df448ec62","Type":"ContainerStarted","Data":"9769d1aa6f6e9d34ae3a2d58df094a61a76868ec6580b5141018cb5f95f8a7a5"}
Mar 19 12:38:13.560405 master-0 kubenswrapper[31830]: I0319 12:38:13.560302 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.560277206 podStartE2EDuration="2.560277206s" podCreationTimestamp="2026-03-19 12:38:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:38:13.538976805 +0000 UTC m=+1432.087937509" watchObservedRunningTime="2026-03-19 12:38:13.560277206 +0000 UTC m=+1432.109237910"
Mar 19 12:38:14.091032 master-0 kubenswrapper[31830]: I0319 12:38:14.090181 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 19 12:38:14.091032 master-0 kubenswrapper[31830]: I0319 12:38:14.090271 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 19 12:38:14.133678 master-0 kubenswrapper[31830]: I0319 12:38:14.133619 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Mar 19 12:38:14.133678 master-0 kubenswrapper[31830]: I0319 12:38:14.133682 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Mar 19 12:38:14.168292 master-0 kubenswrapper[31830]: I0319 12:38:14.168237 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Mar 19 12:38:14.188603 master-0 kubenswrapper[31830]: I0319 12:38:14.187936 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b44cf4869-grng7"
Mar 19 12:38:14.189620 master-0 kubenswrapper[31830]: I0319 12:38:14.189577 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Mar 19 12:38:14.361532 master-0 kubenswrapper[31830]: I0319 12:38:14.361428 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7557f57847-t2m77"]
containerID="cri-o://73c784ae1f1ba201279fca33a48c0e3517d76ecbe82116e10b0ecf59e8173cf5" gracePeriod=10 Mar 19 12:38:14.529740 master-0 kubenswrapper[31830]: I0319 12:38:14.529679 31830 generic.go:334] "Generic (PLEG): container finished" podID="f24450d6-f939-4621-8d88-e0ecc012ebb6" containerID="83b29b8f4d632e9bb87c36f92639bab0483a79f1a1f95063473e19c4696969fd" exitCode=0 Mar 19 12:38:14.529740 master-0 kubenswrapper[31830]: I0319 12:38:14.529742 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mk6fp" event={"ID":"f24450d6-f939-4621-8d88-e0ecc012ebb6","Type":"ContainerDied","Data":"83b29b8f4d632e9bb87c36f92639bab0483a79f1a1f95063473e19c4696969fd"} Mar 19 12:38:14.542676 master-0 kubenswrapper[31830]: I0319 12:38:14.541756 31830 generic.go:334] "Generic (PLEG): container finished" podID="293ebf87-213b-41aa-86be-a71453a91c0c" containerID="73c784ae1f1ba201279fca33a48c0e3517d76ecbe82116e10b0ecf59e8173cf5" exitCode=0 Mar 19 12:38:14.542676 master-0 kubenswrapper[31830]: I0319 12:38:14.541848 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7557f57847-t2m77" event={"ID":"293ebf87-213b-41aa-86be-a71453a91c0c","Type":"ContainerDied","Data":"73c784ae1f1ba201279fca33a48c0e3517d76ecbe82116e10b0ecf59e8173cf5"} Mar 19 12:38:14.598084 master-0 kubenswrapper[31830]: I0319 12:38:14.598038 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 19 12:38:15.092559 master-0 kubenswrapper[31830]: I0319 12:38:15.092444 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7557f57847-t2m77" Mar 19 12:38:15.185988 master-0 kubenswrapper[31830]: I0319 12:38:15.180323 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4a704971-dde7-4ffa-a887-5e8067b964bd" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.0.248:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 12:38:15.185988 master-0 kubenswrapper[31830]: I0319 12:38:15.180634 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4a704971-dde7-4ffa-a887-5e8067b964bd" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.0.248:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 12:38:15.273818 master-0 kubenswrapper[31830]: I0319 12:38:15.273747 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccght\" (UniqueName: \"kubernetes.io/projected/293ebf87-213b-41aa-86be-a71453a91c0c-kube-api-access-ccght\") pod \"293ebf87-213b-41aa-86be-a71453a91c0c\" (UID: \"293ebf87-213b-41aa-86be-a71453a91c0c\") " Mar 19 12:38:15.274100 master-0 kubenswrapper[31830]: I0319 12:38:15.273947 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-dns-swift-storage-0\") pod \"293ebf87-213b-41aa-86be-a71453a91c0c\" (UID: \"293ebf87-213b-41aa-86be-a71453a91c0c\") " Mar 19 12:38:15.274145 master-0 kubenswrapper[31830]: I0319 12:38:15.274123 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-config\") pod \"293ebf87-213b-41aa-86be-a71453a91c0c\" (UID: \"293ebf87-213b-41aa-86be-a71453a91c0c\") " Mar 19 
Mar 19 12:38:15.274403 master-0 kubenswrapper[31830]: I0319 12:38:15.274188 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-dns-svc\") pod \"293ebf87-213b-41aa-86be-a71453a91c0c\" (UID: \"293ebf87-213b-41aa-86be-a71453a91c0c\") "
Mar 19 12:38:15.274403 master-0 kubenswrapper[31830]: I0319 12:38:15.274234 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-edpm-b\") pod \"293ebf87-213b-41aa-86be-a71453a91c0c\" (UID: \"293ebf87-213b-41aa-86be-a71453a91c0c\") "
Mar 19 12:38:15.274403 master-0 kubenswrapper[31830]: I0319 12:38:15.274328 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-ovsdbserver-nb\") pod \"293ebf87-213b-41aa-86be-a71453a91c0c\" (UID: \"293ebf87-213b-41aa-86be-a71453a91c0c\") "
Mar 19 12:38:15.274403 master-0 kubenswrapper[31830]: I0319 12:38:15.274373 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-edpm-a\") pod \"293ebf87-213b-41aa-86be-a71453a91c0c\" (UID: \"293ebf87-213b-41aa-86be-a71453a91c0c\") "
Mar 19 12:38:15.274654 master-0 kubenswrapper[31830]: I0319 12:38:15.274408 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-ovsdbserver-sb\") pod \"293ebf87-213b-41aa-86be-a71453a91c0c\" (UID: \"293ebf87-213b-41aa-86be-a71453a91c0c\") "
Mar 19 12:38:15.278981 master-0 kubenswrapper[31830]: I0319 12:38:15.278912 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/293ebf87-213b-41aa-86be-a71453a91c0c-kube-api-access-ccght" (OuterVolumeSpecName: "kube-api-access-ccght") pod "293ebf87-213b-41aa-86be-a71453a91c0c" (UID: "293ebf87-213b-41aa-86be-a71453a91c0c"). InnerVolumeSpecName "kube-api-access-ccght". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 12:38:15.342742 master-0 kubenswrapper[31830]: I0319 12:38:15.342257 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "293ebf87-213b-41aa-86be-a71453a91c0c" (UID: "293ebf87-213b-41aa-86be-a71453a91c0c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 12:38:15.358940 master-0 kubenswrapper[31830]: I0319 12:38:15.358879 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "293ebf87-213b-41aa-86be-a71453a91c0c" (UID: "293ebf87-213b-41aa-86be-a71453a91c0c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:38:15.378426 master-0 kubenswrapper[31830]: I0319 12:38:15.378217 31830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:15.378426 master-0 kubenswrapper[31830]: I0319 12:38:15.378416 31830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:15.379197 master-0 kubenswrapper[31830]: I0319 12:38:15.378475 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccght\" (UniqueName: \"kubernetes.io/projected/293ebf87-213b-41aa-86be-a71453a91c0c-kube-api-access-ccght\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:15.379197 master-0 kubenswrapper[31830]: I0319 12:38:15.378489 31830 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:15.381516 master-0 kubenswrapper[31830]: I0319 12:38:15.381463 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-config" (OuterVolumeSpecName: "config") pod "293ebf87-213b-41aa-86be-a71453a91c0c" (UID: "293ebf87-213b-41aa-86be-a71453a91c0c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:38:15.382080 master-0 kubenswrapper[31830]: I0319 12:38:15.382043 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "293ebf87-213b-41aa-86be-a71453a91c0c" (UID: "293ebf87-213b-41aa-86be-a71453a91c0c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:38:15.382389 master-0 kubenswrapper[31830]: I0319 12:38:15.382230 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-edpm-b" (OuterVolumeSpecName: "edpm-b") pod "293ebf87-213b-41aa-86be-a71453a91c0c" (UID: "293ebf87-213b-41aa-86be-a71453a91c0c"). InnerVolumeSpecName "edpm-b". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:38:15.386480 master-0 kubenswrapper[31830]: I0319 12:38:15.386277 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-edpm-a" (OuterVolumeSpecName: "edpm-a") pod "293ebf87-213b-41aa-86be-a71453a91c0c" (UID: "293ebf87-213b-41aa-86be-a71453a91c0c"). InnerVolumeSpecName "edpm-a". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:38:15.480896 master-0 kubenswrapper[31830]: I0319 12:38:15.480830 31830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-config\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:15.480896 master-0 kubenswrapper[31830]: I0319 12:38:15.480889 31830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:15.480896 master-0 kubenswrapper[31830]: I0319 12:38:15.480903 31830 reconciler_common.go:293] "Volume detached for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-edpm-b\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:15.481170 master-0 kubenswrapper[31830]: I0319 12:38:15.480916 31830 reconciler_common.go:293] "Volume detached for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/293ebf87-213b-41aa-86be-a71453a91c0c-edpm-a\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:15.556148 master-0 kubenswrapper[31830]: I0319 12:38:15.555959 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7557f57847-t2m77" event={"ID":"293ebf87-213b-41aa-86be-a71453a91c0c","Type":"ContainerDied","Data":"ffb64820b482bc0c14792c76ba19db44193a9c2d1f67c51d8339fa14b2c69ef2"} Mar 19 12:38:15.556148 master-0 kubenswrapper[31830]: I0319 12:38:15.556001 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7557f57847-t2m77" Mar 19 12:38:15.556148 master-0 kubenswrapper[31830]: I0319 12:38:15.556028 31830 scope.go:117] "RemoveContainer" containerID="73c784ae1f1ba201279fca33a48c0e3517d76ecbe82116e10b0ecf59e8173cf5" Mar 19 12:38:15.589834 master-0 kubenswrapper[31830]: I0319 12:38:15.589781 31830 scope.go:117] "RemoveContainer" containerID="fd8cb515e6f48fa4810d7740925785737ed4514d65dd33253f75d1d869c99d24" Mar 19 12:38:15.639104 master-0 kubenswrapper[31830]: I0319 12:38:15.639041 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7557f57847-t2m77"] Mar 19 12:38:15.658469 master-0 kubenswrapper[31830]: I0319 12:38:15.658374 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7557f57847-t2m77"] Mar 19 12:38:15.711039 master-0 kubenswrapper[31830]: I0319 12:38:15.710262 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="293ebf87-213b-41aa-86be-a71453a91c0c" path="/var/lib/kubelet/pods/293ebf87-213b-41aa-86be-a71453a91c0c/volumes" Mar 19 12:38:16.089220 master-0 kubenswrapper[31830]: I0319 12:38:16.089110 31830 util.go:48] "No ready sandbox for pod can be found. 
Mar 19 12:38:16.089220 master-0 kubenswrapper[31830]: I0319 12:38:16.089110 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mk6fp"
Mar 19 12:38:16.200428 master-0 kubenswrapper[31830]: I0319 12:38:16.200360 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f24450d6-f939-4621-8d88-e0ecc012ebb6-config-data\") pod \"f24450d6-f939-4621-8d88-e0ecc012ebb6\" (UID: \"f24450d6-f939-4621-8d88-e0ecc012ebb6\") "
Mar 19 12:38:16.200667 master-0 kubenswrapper[31830]: I0319 12:38:16.200485 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f24450d6-f939-4621-8d88-e0ecc012ebb6-scripts\") pod \"f24450d6-f939-4621-8d88-e0ecc012ebb6\" (UID: \"f24450d6-f939-4621-8d88-e0ecc012ebb6\") "
Mar 19 12:38:16.200667 master-0 kubenswrapper[31830]: I0319 12:38:16.200638 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-242mg\" (UniqueName: \"kubernetes.io/projected/f24450d6-f939-4621-8d88-e0ecc012ebb6-kube-api-access-242mg\") pod \"f24450d6-f939-4621-8d88-e0ecc012ebb6\" (UID: \"f24450d6-f939-4621-8d88-e0ecc012ebb6\") "
Mar 19 12:38:16.200777 master-0 kubenswrapper[31830]: I0319 12:38:16.200751 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f24450d6-f939-4621-8d88-e0ecc012ebb6-combined-ca-bundle\") pod \"f24450d6-f939-4621-8d88-e0ecc012ebb6\" (UID: \"f24450d6-f939-4621-8d88-e0ecc012ebb6\") "
Mar 19 12:38:16.206254 master-0 kubenswrapper[31830]: I0319 12:38:16.206151 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f24450d6-f939-4621-8d88-e0ecc012ebb6-kube-api-access-242mg" (OuterVolumeSpecName: "kube-api-access-242mg") pod "f24450d6-f939-4621-8d88-e0ecc012ebb6" (UID: "f24450d6-f939-4621-8d88-e0ecc012ebb6"). InnerVolumeSpecName "kube-api-access-242mg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 12:38:16.209971 master-0 kubenswrapper[31830]: I0319 12:38:16.209916 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f24450d6-f939-4621-8d88-e0ecc012ebb6-scripts" (OuterVolumeSpecName: "scripts") pod "f24450d6-f939-4621-8d88-e0ecc012ebb6" (UID: "f24450d6-f939-4621-8d88-e0ecc012ebb6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 12:38:16.248819 master-0 kubenswrapper[31830]: I0319 12:38:16.248029 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f24450d6-f939-4621-8d88-e0ecc012ebb6-config-data" (OuterVolumeSpecName: "config-data") pod "f24450d6-f939-4621-8d88-e0ecc012ebb6" (UID: "f24450d6-f939-4621-8d88-e0ecc012ebb6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:38:16.304518 master-0 kubenswrapper[31830]: I0319 12:38:16.304472 31830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f24450d6-f939-4621-8d88-e0ecc012ebb6-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:16.304518 master-0 kubenswrapper[31830]: I0319 12:38:16.304520 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-242mg\" (UniqueName: \"kubernetes.io/projected/f24450d6-f939-4621-8d88-e0ecc012ebb6-kube-api-access-242mg\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:16.304721 master-0 kubenswrapper[31830]: I0319 12:38:16.304536 31830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f24450d6-f939-4621-8d88-e0ecc012ebb6-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:16.304721 master-0 kubenswrapper[31830]: I0319 12:38:16.304549 31830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f24450d6-f939-4621-8d88-e0ecc012ebb6-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:16.595719 master-0 kubenswrapper[31830]: I0319 12:38:16.589711 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mk6fp" event={"ID":"f24450d6-f939-4621-8d88-e0ecc012ebb6","Type":"ContainerDied","Data":"d57cab916f539e78af61e0ec01ce5f674195bd18850cf2cd87f5d8889bde2fa9"} Mar 19 12:38:16.595719 master-0 kubenswrapper[31830]: I0319 12:38:16.590191 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mk6fp" Mar 19 12:38:16.595719 master-0 kubenswrapper[31830]: I0319 12:38:16.590168 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d57cab916f539e78af61e0ec01ce5f674195bd18850cf2cd87f5d8889bde2fa9" Mar 19 12:38:16.779909 master-0 kubenswrapper[31830]: I0319 12:38:16.779065 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 19 12:38:16.779909 master-0 kubenswrapper[31830]: I0319 12:38:16.779435 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4a704971-dde7-4ffa-a887-5e8067b964bd" containerName="nova-api-log" containerID="cri-o://619addc1e55430132d16f8617693ff97d3e98ff34bf359fc6e96a4e8dd573573" gracePeriod=30 Mar 19 12:38:16.780227 master-0 kubenswrapper[31830]: I0319 12:38:16.780112 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4a704971-dde7-4ffa-a887-5e8067b964bd" containerName="nova-api-api" containerID="cri-o://12e1ab58c399bb832837cc2271e06fcd742d23055321f8ccabab84286e4af1c8" gracePeriod=30 Mar 19 12:38:16.816232 master-0 kubenswrapper[31830]: I0319 12:38:16.816171 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 12:38:16.816820 master-0 kubenswrapper[31830]: I0319 12:38:16.816751 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="380fc05c-56b2-4e38-8601-bca5c49a343e" containerName="nova-scheduler-scheduler" containerID="cri-o://7a4b98ddef5c00787fa798c47862a461d1e8e39d4e202113a4ec244fcf836ca8" gracePeriod=30 Mar 19 12:38:16.844788 master-0 kubenswrapper[31830]: I0319 12:38:16.844733 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 
Mar 19 12:38:16.845302 master-0 kubenswrapper[31830]: I0319 12:38:16.845268 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="215a74f3-ce0f-4f33-b327-ac7df448ec62" containerName="nova-metadata-log" containerID="cri-o://23beebf4cd5475223e9e4d06ce9af7fc562a586a2c1f8e981fe30f8ea9134d28" gracePeriod=30
Mar 19 12:38:16.846078 master-0 kubenswrapper[31830]: I0319 12:38:16.846053 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="215a74f3-ce0f-4f33-b327-ac7df448ec62" containerName="nova-metadata-metadata" containerID="cri-o://a94c157d67b40c566f11a375cac6dec11d3aee2bcec901e9bd61aabc28508a30" gracePeriod=30
Mar 19 12:38:17.547727 master-0 kubenswrapper[31830]: I0319 12:38:17.547679 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 19 12:38:17.649111 master-0 kubenswrapper[31830]: I0319 12:38:17.649026 31830 generic.go:334] "Generic (PLEG): container finished" podID="215a74f3-ce0f-4f33-b327-ac7df448ec62" containerID="a94c157d67b40c566f11a375cac6dec11d3aee2bcec901e9bd61aabc28508a30" exitCode=0
Mar 19 12:38:17.649111 master-0 kubenswrapper[31830]: I0319 12:38:17.649075 31830 generic.go:334] "Generic (PLEG): container finished" podID="215a74f3-ce0f-4f33-b327-ac7df448ec62" containerID="23beebf4cd5475223e9e4d06ce9af7fc562a586a2c1f8e981fe30f8ea9134d28" exitCode=143
Mar 19 12:38:17.649659 master-0 kubenswrapper[31830]: I0319 12:38:17.649124 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"215a74f3-ce0f-4f33-b327-ac7df448ec62","Type":"ContainerDied","Data":"a94c157d67b40c566f11a375cac6dec11d3aee2bcec901e9bd61aabc28508a30"}
Mar 19 12:38:17.649659 master-0 kubenswrapper[31830]: I0319 12:38:17.649156 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"215a74f3-ce0f-4f33-b327-ac7df448ec62","Type":"ContainerDied","Data":"23beebf4cd5475223e9e4d06ce9af7fc562a586a2c1f8e981fe30f8ea9134d28"}
Mar 19 12:38:17.649659 master-0 kubenswrapper[31830]: I0319 12:38:17.649169 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"215a74f3-ce0f-4f33-b327-ac7df448ec62","Type":"ContainerDied","Data":"9769d1aa6f6e9d34ae3a2d58df094a61a76868ec6580b5141018cb5f95f8a7a5"}
Mar 19 12:38:17.649659 master-0 kubenswrapper[31830]: I0319 12:38:17.649188 31830 scope.go:117] "RemoveContainer" containerID="a94c157d67b40c566f11a375cac6dec11d3aee2bcec901e9bd61aabc28508a30"
Mar 19 12:38:17.649659 master-0 kubenswrapper[31830]: I0319 12:38:17.649333 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 19 12:38:17.650626 master-0 kubenswrapper[31830]: I0319 12:38:17.650570 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/215a74f3-ce0f-4f33-b327-ac7df448ec62-combined-ca-bundle\") pod \"215a74f3-ce0f-4f33-b327-ac7df448ec62\" (UID: \"215a74f3-ce0f-4f33-b327-ac7df448ec62\") "
Mar 19 12:38:17.650708 master-0 kubenswrapper[31830]: I0319 12:38:17.650683 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/215a74f3-ce0f-4f33-b327-ac7df448ec62-logs\") pod \"215a74f3-ce0f-4f33-b327-ac7df448ec62\" (UID: \"215a74f3-ce0f-4f33-b327-ac7df448ec62\") "
Mar 19 12:38:17.650914 master-0 kubenswrapper[31830]: I0319 12:38:17.650865 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/215a74f3-ce0f-4f33-b327-ac7df448ec62-nova-metadata-tls-certs\") pod \"215a74f3-ce0f-4f33-b327-ac7df448ec62\" (UID: \"215a74f3-ce0f-4f33-b327-ac7df448ec62\") "
Mar 19 12:38:17.651025 master-0 kubenswrapper[31830]: I0319 12:38:17.651005 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/215a74f3-ce0f-4f33-b327-ac7df448ec62-config-data\") pod \"215a74f3-ce0f-4f33-b327-ac7df448ec62\" (UID: \"215a74f3-ce0f-4f33-b327-ac7df448ec62\") "
Mar 19 12:38:17.651083 master-0 kubenswrapper[31830]: I0319 12:38:17.651054 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5c9v\" (UniqueName: \"kubernetes.io/projected/215a74f3-ce0f-4f33-b327-ac7df448ec62-kube-api-access-v5c9v\") pod \"215a74f3-ce0f-4f33-b327-ac7df448ec62\" (UID: \"215a74f3-ce0f-4f33-b327-ac7df448ec62\") "
Mar 19 12:38:17.653745 master-0 kubenswrapper[31830]: I0319 12:38:17.651468 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/215a74f3-ce0f-4f33-b327-ac7df448ec62-logs" (OuterVolumeSpecName: "logs") pod "215a74f3-ce0f-4f33-b327-ac7df448ec62" (UID: "215a74f3-ce0f-4f33-b327-ac7df448ec62"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 19 12:38:17.665708 master-0 kubenswrapper[31830]: I0319 12:38:17.663444 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/215a74f3-ce0f-4f33-b327-ac7df448ec62-kube-api-access-v5c9v" (OuterVolumeSpecName: "kube-api-access-v5c9v") pod "215a74f3-ce0f-4f33-b327-ac7df448ec62" (UID: "215a74f3-ce0f-4f33-b327-ac7df448ec62"). InnerVolumeSpecName "kube-api-access-v5c9v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 12:38:17.688868 master-0 kubenswrapper[31830]: I0319 12:38:17.683070 31830 generic.go:334] "Generic (PLEG): container finished" podID="4a704971-dde7-4ffa-a887-5e8067b964bd" containerID="619addc1e55430132d16f8617693ff97d3e98ff34bf359fc6e96a4e8dd573573" exitCode=143
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:38:17.721162 master-0 kubenswrapper[31830]: I0319 12:38:17.721106 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/215a74f3-ce0f-4f33-b327-ac7df448ec62-config-data" (OuterVolumeSpecName: "config-data") pod "215a74f3-ce0f-4f33-b327-ac7df448ec62" (UID: "215a74f3-ce0f-4f33-b327-ac7df448ec62"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:38:17.727396 master-0 kubenswrapper[31830]: I0319 12:38:17.727002 31830 scope.go:117] "RemoveContainer" containerID="23beebf4cd5475223e9e4d06ce9af7fc562a586a2c1f8e981fe30f8ea9134d28" Mar 19 12:38:17.754138 master-0 kubenswrapper[31830]: I0319 12:38:17.754065 31830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/215a74f3-ce0f-4f33-b327-ac7df448ec62-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:17.754138 master-0 kubenswrapper[31830]: I0319 12:38:17.754130 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5c9v\" (UniqueName: \"kubernetes.io/projected/215a74f3-ce0f-4f33-b327-ac7df448ec62-kube-api-access-v5c9v\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:17.754138 master-0 kubenswrapper[31830]: I0319 12:38:17.754146 31830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/215a74f3-ce0f-4f33-b327-ac7df448ec62-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:17.754393 master-0 kubenswrapper[31830]: I0319 12:38:17.754159 31830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/215a74f3-ce0f-4f33-b327-ac7df448ec62-logs\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:17.767674 master-0 kubenswrapper[31830]: I0319 12:38:17.767617 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/215a74f3-ce0f-4f33-b327-ac7df448ec62-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "215a74f3-ce0f-4f33-b327-ac7df448ec62" (UID: "215a74f3-ce0f-4f33-b327-ac7df448ec62"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:38:17.826925 master-0 kubenswrapper[31830]: I0319 12:38:17.823204 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4a704971-dde7-4ffa-a887-5e8067b964bd","Type":"ContainerDied","Data":"619addc1e55430132d16f8617693ff97d3e98ff34bf359fc6e96a4e8dd573573"} Mar 19 12:38:17.837226 master-0 kubenswrapper[31830]: I0319 12:38:17.837076 31830 scope.go:117] "RemoveContainer" containerID="a94c157d67b40c566f11a375cac6dec11d3aee2bcec901e9bd61aabc28508a30" Mar 19 12:38:17.837677 master-0 kubenswrapper[31830]: E0319 12:38:17.837636 31830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a94c157d67b40c566f11a375cac6dec11d3aee2bcec901e9bd61aabc28508a30\": container with ID starting with a94c157d67b40c566f11a375cac6dec11d3aee2bcec901e9bd61aabc28508a30 not found: ID does not exist" containerID="a94c157d67b40c566f11a375cac6dec11d3aee2bcec901e9bd61aabc28508a30" Mar 19 12:38:17.837733 master-0 kubenswrapper[31830]: I0319 12:38:17.837682 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a94c157d67b40c566f11a375cac6dec11d3aee2bcec901e9bd61aabc28508a30"} err="failed to get container status \"a94c157d67b40c566f11a375cac6dec11d3aee2bcec901e9bd61aabc28508a30\": rpc error: code = NotFound desc = could not find container \"a94c157d67b40c566f11a375cac6dec11d3aee2bcec901e9bd61aabc28508a30\": container with ID starting with a94c157d67b40c566f11a375cac6dec11d3aee2bcec901e9bd61aabc28508a30 not found: ID does not exist" Mar 19 12:38:17.837733 master-0 kubenswrapper[31830]: I0319 12:38:17.837710 31830 scope.go:117] "RemoveContainer" containerID="23beebf4cd5475223e9e4d06ce9af7fc562a586a2c1f8e981fe30f8ea9134d28" Mar 19 12:38:17.841944 master-0 kubenswrapper[31830]: E0319 12:38:17.841885 31830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23beebf4cd5475223e9e4d06ce9af7fc562a586a2c1f8e981fe30f8ea9134d28\": container with ID starting with 23beebf4cd5475223e9e4d06ce9af7fc562a586a2c1f8e981fe30f8ea9134d28 not found: ID does not exist" containerID="23beebf4cd5475223e9e4d06ce9af7fc562a586a2c1f8e981fe30f8ea9134d28" Mar 19 12:38:17.842052 master-0 kubenswrapper[31830]: I0319 12:38:17.841946 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23beebf4cd5475223e9e4d06ce9af7fc562a586a2c1f8e981fe30f8ea9134d28"} err="failed to get container status \"23beebf4cd5475223e9e4d06ce9af7fc562a586a2c1f8e981fe30f8ea9134d28\": rpc error: code = NotFound desc = could not find container \"23beebf4cd5475223e9e4d06ce9af7fc562a586a2c1f8e981fe30f8ea9134d28\": container with ID starting with 23beebf4cd5475223e9e4d06ce9af7fc562a586a2c1f8e981fe30f8ea9134d28 not found: ID does not exist" Mar 19 12:38:17.842052 master-0 kubenswrapper[31830]: I0319 12:38:17.841973 31830 scope.go:117] "RemoveContainer" containerID="a94c157d67b40c566f11a375cac6dec11d3aee2bcec901e9bd61aabc28508a30" Mar 19 12:38:17.843240 master-0 kubenswrapper[31830]: I0319 12:38:17.843202 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a94c157d67b40c566f11a375cac6dec11d3aee2bcec901e9bd61aabc28508a30"} err="failed to get container status \"a94c157d67b40c566f11a375cac6dec11d3aee2bcec901e9bd61aabc28508a30\": rpc error: code = NotFound desc = could not find container 
\"a94c157d67b40c566f11a375cac6dec11d3aee2bcec901e9bd61aabc28508a30\": container with ID starting with a94c157d67b40c566f11a375cac6dec11d3aee2bcec901e9bd61aabc28508a30 not found: ID does not exist" Mar 19 12:38:17.843240 master-0 kubenswrapper[31830]: I0319 12:38:17.843229 31830 scope.go:117] "RemoveContainer" containerID="23beebf4cd5475223e9e4d06ce9af7fc562a586a2c1f8e981fe30f8ea9134d28" Mar 19 12:38:17.844536 master-0 kubenswrapper[31830]: I0319 12:38:17.844469 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23beebf4cd5475223e9e4d06ce9af7fc562a586a2c1f8e981fe30f8ea9134d28"} err="failed to get container status \"23beebf4cd5475223e9e4d06ce9af7fc562a586a2c1f8e981fe30f8ea9134d28\": rpc error: code = NotFound desc = could not find container \"23beebf4cd5475223e9e4d06ce9af7fc562a586a2c1f8e981fe30f8ea9134d28\": container with ID starting with 23beebf4cd5475223e9e4d06ce9af7fc562a586a2c1f8e981fe30f8ea9134d28 not found: ID does not exist" Mar 19 12:38:17.858126 master-0 kubenswrapper[31830]: I0319 12:38:17.858067 31830 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/215a74f3-ce0f-4f33-b327-ac7df448ec62-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:17.991588 master-0 kubenswrapper[31830]: I0319 12:38:17.991516 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 12:38:18.004805 master-0 kubenswrapper[31830]: I0319 12:38:18.004736 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 12:38:18.022337 master-0 kubenswrapper[31830]: I0319 12:38:18.022277 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 19 12:38:18.022975 master-0 kubenswrapper[31830]: E0319 12:38:18.022948 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f24450d6-f939-4621-8d88-e0ecc012ebb6" containerName="nova-manage" Mar 19 12:38:18.022975 master-0 kubenswrapper[31830]: I0319 12:38:18.022974 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f24450d6-f939-4621-8d88-e0ecc012ebb6" containerName="nova-manage" Mar 19 12:38:18.023101 master-0 kubenswrapper[31830]: E0319 12:38:18.023011 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="293ebf87-213b-41aa-86be-a71453a91c0c" containerName="dnsmasq-dns" Mar 19 12:38:18.023101 master-0 kubenswrapper[31830]: I0319 12:38:18.023020 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="293ebf87-213b-41aa-86be-a71453a91c0c" containerName="dnsmasq-dns" Mar 19 12:38:18.023101 master-0 kubenswrapper[31830]: E0319 12:38:18.023052 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="215a74f3-ce0f-4f33-b327-ac7df448ec62" containerName="nova-metadata-metadata" Mar 19 12:38:18.023101 master-0 kubenswrapper[31830]: I0319 12:38:18.023060 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="215a74f3-ce0f-4f33-b327-ac7df448ec62" containerName="nova-metadata-metadata" Mar 19 12:38:18.023238 master-0 kubenswrapper[31830]: E0319 12:38:18.023105 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="215a74f3-ce0f-4f33-b327-ac7df448ec62" containerName="nova-metadata-log" Mar 19 12:38:18.023238 master-0 kubenswrapper[31830]: I0319 12:38:18.023114 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="215a74f3-ce0f-4f33-b327-ac7df448ec62" containerName="nova-metadata-log" Mar 19 12:38:18.023238 master-0 kubenswrapper[31830]: E0319 
Mar 19 12:38:18.023238 master-0 kubenswrapper[31830]: E0319 12:38:18.023141 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="293ebf87-213b-41aa-86be-a71453a91c0c" containerName="init"
Mar 19 12:38:18.023238 master-0 kubenswrapper[31830]: I0319 12:38:18.023148 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="293ebf87-213b-41aa-86be-a71453a91c0c" containerName="init"
Mar 19 12:38:18.023466 master-0 kubenswrapper[31830]: I0319 12:38:18.023439 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f24450d6-f939-4621-8d88-e0ecc012ebb6" containerName="nova-manage"
Mar 19 12:38:18.023518 master-0 kubenswrapper[31830]: I0319 12:38:18.023473 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="215a74f3-ce0f-4f33-b327-ac7df448ec62" containerName="nova-metadata-metadata"
Mar 19 12:38:18.023518 master-0 kubenswrapper[31830]: I0319 12:38:18.023490 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="293ebf87-213b-41aa-86be-a71453a91c0c" containerName="dnsmasq-dns"
Mar 19 12:38:18.023518 master-0 kubenswrapper[31830]: I0319 12:38:18.023513 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="215a74f3-ce0f-4f33-b327-ac7df448ec62" containerName="nova-metadata-log"
Mar 19 12:38:18.025001 master-0 kubenswrapper[31830]: I0319 12:38:18.024968 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 19 12:38:18.031688 master-0 kubenswrapper[31830]: I0319 12:38:18.031616 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Mar 19 12:38:18.031934 master-0 kubenswrapper[31830]: I0319 12:38:18.031638 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Mar 19 12:38:18.063667 master-0 kubenswrapper[31830]: I0319 12:38:18.043849 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 19 12:38:18.165593 master-0 kubenswrapper[31830]: I0319 12:38:18.165521 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d718087-0caf-46be-9c73-6464f876f335-config-data\") pod \"nova-metadata-0\" (UID: \"2d718087-0caf-46be-9c73-6464f876f335\") " pod="openstack/nova-metadata-0"
Mar 19 12:38:18.165840 master-0 kubenswrapper[31830]: I0319 12:38:18.165676 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d718087-0caf-46be-9c73-6464f876f335-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2d718087-0caf-46be-9c73-6464f876f335\") " pod="openstack/nova-metadata-0"
Mar 19 12:38:18.165840 master-0 kubenswrapper[31830]: I0319 12:38:18.165735 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d718087-0caf-46be-9c73-6464f876f335-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2d718087-0caf-46be-9c73-6464f876f335\") " pod="openstack/nova-metadata-0"
Mar 19 12:38:18.165840 master-0 kubenswrapper[31830]: I0319 12:38:18.165759 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdctc\" (UniqueName: \"kubernetes.io/projected/2d718087-0caf-46be-9c73-6464f876f335-kube-api-access-vdctc\") pod \"nova-metadata-0\" (UID: \"2d718087-0caf-46be-9c73-6464f876f335\") " pod="openstack/nova-metadata-0"
Mar 19 12:38:18.165840 master-0 kubenswrapper[31830]: I0319 12:38:18.165836 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d718087-0caf-46be-9c73-6464f876f335-logs\") pod \"nova-metadata-0\" (UID: \"2d718087-0caf-46be-9c73-6464f876f335\") " pod="openstack/nova-metadata-0"
Mar 19 12:38:18.270065 master-0 kubenswrapper[31830]: I0319 12:38:18.268308 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d718087-0caf-46be-9c73-6464f876f335-logs\") pod \"nova-metadata-0\" (UID: \"2d718087-0caf-46be-9c73-6464f876f335\") " pod="openstack/nova-metadata-0"
Mar 19 12:38:18.270065 master-0 kubenswrapper[31830]: I0319 12:38:18.268466 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d718087-0caf-46be-9c73-6464f876f335-config-data\") pod \"nova-metadata-0\" (UID: \"2d718087-0caf-46be-9c73-6464f876f335\") " pod="openstack/nova-metadata-0"
Mar 19 12:38:18.270065 master-0 kubenswrapper[31830]: I0319 12:38:18.268651 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d718087-0caf-46be-9c73-6464f876f335-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2d718087-0caf-46be-9c73-6464f876f335\") " pod="openstack/nova-metadata-0"
Mar 19 12:38:18.270065 master-0 kubenswrapper[31830]: I0319 12:38:18.268744 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d718087-0caf-46be-9c73-6464f876f335-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2d718087-0caf-46be-9c73-6464f876f335\") " pod="openstack/nova-metadata-0"
Mar 19 12:38:18.270065 master-0 kubenswrapper[31830]: I0319 12:38:18.268778 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdctc\" (UniqueName: \"kubernetes.io/projected/2d718087-0caf-46be-9c73-6464f876f335-kube-api-access-vdctc\") pod \"nova-metadata-0\" (UID: \"2d718087-0caf-46be-9c73-6464f876f335\") " pod="openstack/nova-metadata-0"
Mar 19 12:38:18.270065 master-0 kubenswrapper[31830]: I0319 12:38:18.269677 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d718087-0caf-46be-9c73-6464f876f335-logs\") pod \"nova-metadata-0\" (UID: \"2d718087-0caf-46be-9c73-6464f876f335\") " pod="openstack/nova-metadata-0"
Mar 19 12:38:18.279628 master-0 kubenswrapper[31830]: I0319 12:38:18.279583 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d718087-0caf-46be-9c73-6464f876f335-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2d718087-0caf-46be-9c73-6464f876f335\") " pod="openstack/nova-metadata-0"
Mar 19 12:38:18.279910 master-0 kubenswrapper[31830]: I0319 12:38:18.279585 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d718087-0caf-46be-9c73-6464f876f335-config-data\") pod \"nova-metadata-0\" (UID: \"2d718087-0caf-46be-9c73-6464f876f335\") " pod="openstack/nova-metadata-0"
\"kubernetes.io/secret/2d718087-0caf-46be-9c73-6464f876f335-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2d718087-0caf-46be-9c73-6464f876f335\") " pod="openstack/nova-metadata-0" Mar 19 12:38:18.297818 master-0 kubenswrapper[31830]: I0319 12:38:18.295420 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdctc\" (UniqueName: \"kubernetes.io/projected/2d718087-0caf-46be-9c73-6464f876f335-kube-api-access-vdctc\") pod \"nova-metadata-0\" (UID: \"2d718087-0caf-46be-9c73-6464f876f335\") " pod="openstack/nova-metadata-0" Mar 19 12:38:18.363382 master-0 kubenswrapper[31830]: I0319 12:38:18.363321 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 19 12:38:18.831111 master-0 kubenswrapper[31830]: I0319 12:38:18.830007 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 12:38:18.834440 master-0 kubenswrapper[31830]: W0319 12:38:18.834380 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d718087_0caf_46be_9c73_6464f876f335.slice/crio-e79f89d84f77cfa83684a331b188e902046ffc1e144ea15f322b9130d177054c WatchSource:0}: Error finding container e79f89d84f77cfa83684a331b188e902046ffc1e144ea15f322b9130d177054c: Status 404 returned error can't find the container with id e79f89d84f77cfa83684a331b188e902046ffc1e144ea15f322b9130d177054c Mar 19 12:38:19.136742 master-0 kubenswrapper[31830]: E0319 12:38:19.136078 31830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7a4b98ddef5c00787fa798c47862a461d1e8e39d4e202113a4ec244fcf836ca8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 19 12:38:19.140892 master-0 kubenswrapper[31830]: E0319 12:38:19.138379 31830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7a4b98ddef5c00787fa798c47862a461d1e8e39d4e202113a4ec244fcf836ca8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 19 12:38:19.140892 master-0 kubenswrapper[31830]: E0319 12:38:19.139734 31830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7a4b98ddef5c00787fa798c47862a461d1e8e39d4e202113a4ec244fcf836ca8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 19 12:38:19.140892 master-0 kubenswrapper[31830]: E0319 12:38:19.139767 31830 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="380fc05c-56b2-4e38-8601-bca5c49a343e" containerName="nova-scheduler-scheduler" Mar 19 12:38:19.707742 master-0 kubenswrapper[31830]: I0319 12:38:19.707673 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="215a74f3-ce0f-4f33-b327-ac7df448ec62" path="/var/lib/kubelet/pods/215a74f3-ce0f-4f33-b327-ac7df448ec62/volumes" Mar 19 12:38:19.728283 master-0 kubenswrapper[31830]: I0319 12:38:19.728222 31830 generic.go:334] "Generic (PLEG): container finished" podID="d8188767-a3a9-4859-aa0f-bc448a038114" 
containerID="8ad1ad251def71f89d1bda5f55c372d1b9191806f1cb572e601abe10afabdfe9" exitCode=0 Mar 19 12:38:19.728583 master-0 kubenswrapper[31830]: I0319 12:38:19.728303 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7qtfk" event={"ID":"d8188767-a3a9-4859-aa0f-bc448a038114","Type":"ContainerDied","Data":"8ad1ad251def71f89d1bda5f55c372d1b9191806f1cb572e601abe10afabdfe9"} Mar 19 12:38:19.733380 master-0 kubenswrapper[31830]: I0319 12:38:19.733331 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2d718087-0caf-46be-9c73-6464f876f335","Type":"ContainerStarted","Data":"85194f8a11b46ba38d3729da1cc5b267a98d24628efc12287336e5a37ecd6a0d"} Mar 19 12:38:19.733380 master-0 kubenswrapper[31830]: I0319 12:38:19.733374 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2d718087-0caf-46be-9c73-6464f876f335","Type":"ContainerStarted","Data":"6ac507d73488316acd06380b83b71eb842429e97623b18548ac5a27c25a19e46"} Mar 19 12:38:19.733380 master-0 kubenswrapper[31830]: I0319 12:38:19.733384 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2d718087-0caf-46be-9c73-6464f876f335","Type":"ContainerStarted","Data":"e79f89d84f77cfa83684a331b188e902046ffc1e144ea15f322b9130d177054c"} Mar 19 12:38:19.786888 master-0 kubenswrapper[31830]: I0319 12:38:19.786689 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.786668142 podStartE2EDuration="2.786668142s" podCreationTimestamp="2026-03-19 12:38:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:38:19.778175119 +0000 UTC m=+1438.327135853" watchObservedRunningTime="2026-03-19 12:38:19.786668142 +0000 UTC m=+1438.335628836" Mar 19 12:38:20.045373 master-0 kubenswrapper[31830]: I0319 12:38:20.045197 31830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7557f57847-t2m77" podUID="293ebf87-213b-41aa-86be-a71453a91c0c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.224:5353: i/o timeout" Mar 19 12:38:21.195249 master-0 kubenswrapper[31830]: I0319 12:38:21.195186 31830 util.go:48] "No ready sandbox for pod can be found. 
Mar 19 12:38:21.195249 master-0 kubenswrapper[31830]: I0319 12:38:21.195186 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7qtfk"
Mar 19 12:38:21.347568 master-0 kubenswrapper[31830]: I0319 12:38:21.347426 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8188767-a3a9-4859-aa0f-bc448a038114-config-data\") pod \"d8188767-a3a9-4859-aa0f-bc448a038114\" (UID: \"d8188767-a3a9-4859-aa0f-bc448a038114\") "
Mar 19 12:38:21.348022 master-0 kubenswrapper[31830]: I0319 12:38:21.347947 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jx7ft\" (UniqueName: \"kubernetes.io/projected/d8188767-a3a9-4859-aa0f-bc448a038114-kube-api-access-jx7ft\") pod \"d8188767-a3a9-4859-aa0f-bc448a038114\" (UID: \"d8188767-a3a9-4859-aa0f-bc448a038114\") "
Mar 19 12:38:21.348755 master-0 kubenswrapper[31830]: I0319 12:38:21.348221 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8188767-a3a9-4859-aa0f-bc448a038114-combined-ca-bundle\") pod \"d8188767-a3a9-4859-aa0f-bc448a038114\" (UID: \"d8188767-a3a9-4859-aa0f-bc448a038114\") "
Mar 19 12:38:21.348755 master-0 kubenswrapper[31830]: I0319 12:38:21.348289 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8188767-a3a9-4859-aa0f-bc448a038114-scripts\") pod \"d8188767-a3a9-4859-aa0f-bc448a038114\" (UID: \"d8188767-a3a9-4859-aa0f-bc448a038114\") "
Mar 19 12:38:21.359127 master-0 kubenswrapper[31830]: I0319 12:38:21.359060 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8188767-a3a9-4859-aa0f-bc448a038114-kube-api-access-jx7ft" (OuterVolumeSpecName: "kube-api-access-jx7ft") pod "d8188767-a3a9-4859-aa0f-bc448a038114" (UID: "d8188767-a3a9-4859-aa0f-bc448a038114"). InnerVolumeSpecName "kube-api-access-jx7ft". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 12:38:21.359366 master-0 kubenswrapper[31830]: I0319 12:38:21.359329 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8188767-a3a9-4859-aa0f-bc448a038114-scripts" (OuterVolumeSpecName: "scripts") pod "d8188767-a3a9-4859-aa0f-bc448a038114" (UID: "d8188767-a3a9-4859-aa0f-bc448a038114"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 12:38:21.384766 master-0 kubenswrapper[31830]: I0319 12:38:21.384703 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8188767-a3a9-4859-aa0f-bc448a038114-config-data" (OuterVolumeSpecName: "config-data") pod "d8188767-a3a9-4859-aa0f-bc448a038114" (UID: "d8188767-a3a9-4859-aa0f-bc448a038114"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:38:21.452134 master-0 kubenswrapper[31830]: I0319 12:38:21.452043 31830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8188767-a3a9-4859-aa0f-bc448a038114-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:21.452134 master-0 kubenswrapper[31830]: I0319 12:38:21.452129 31830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8188767-a3a9-4859-aa0f-bc448a038114-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:21.452134 master-0 kubenswrapper[31830]: I0319 12:38:21.452139 31830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8188767-a3a9-4859-aa0f-bc448a038114-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:21.452522 master-0 kubenswrapper[31830]: I0319 12:38:21.452210 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jx7ft\" (UniqueName: \"kubernetes.io/projected/d8188767-a3a9-4859-aa0f-bc448a038114-kube-api-access-jx7ft\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:21.760434 master-0 kubenswrapper[31830]: I0319 12:38:21.760363 31830 generic.go:334] "Generic (PLEG): container finished" podID="4a704971-dde7-4ffa-a887-5e8067b964bd" containerID="12e1ab58c399bb832837cc2271e06fcd742d23055321f8ccabab84286e4af1c8" exitCode=0 Mar 19 12:38:21.760702 master-0 kubenswrapper[31830]: I0319 12:38:21.760442 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4a704971-dde7-4ffa-a887-5e8067b964bd","Type":"ContainerDied","Data":"12e1ab58c399bb832837cc2271e06fcd742d23055321f8ccabab84286e4af1c8"} Mar 19 12:38:21.763521 master-0 kubenswrapper[31830]: I0319 12:38:21.763477 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 19 12:38:21.764311 master-0 kubenswrapper[31830]: I0319 12:38:21.764267 31830 generic.go:334] "Generic (PLEG): container finished" podID="380fc05c-56b2-4e38-8601-bca5c49a343e" containerID="7a4b98ddef5c00787fa798c47862a461d1e8e39d4e202113a4ec244fcf836ca8" exitCode=0 Mar 19 12:38:21.764454 master-0 kubenswrapper[31830]: I0319 12:38:21.764325 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"380fc05c-56b2-4e38-8601-bca5c49a343e","Type":"ContainerDied","Data":"7a4b98ddef5c00787fa798c47862a461d1e8e39d4e202113a4ec244fcf836ca8"} Mar 19 12:38:21.769852 master-0 kubenswrapper[31830]: I0319 12:38:21.769783 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7qtfk" event={"ID":"d8188767-a3a9-4859-aa0f-bc448a038114","Type":"ContainerDied","Data":"e45251ec738bb611fc0adff1f636621e76f1d4e1005303efc6caabc86e0fcdb3"} Mar 19 12:38:21.769852 master-0 kubenswrapper[31830]: I0319 12:38:21.769854 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e45251ec738bb611fc0adff1f636621e76f1d4e1005303efc6caabc86e0fcdb3" Mar 19 12:38:21.770103 master-0 kubenswrapper[31830]: I0319 12:38:21.769918 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7qtfk" Mar 19 12:38:21.873870 master-0 kubenswrapper[31830]: I0319 12:38:21.864360 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a704971-dde7-4ffa-a887-5e8067b964bd-combined-ca-bundle\") pod \"4a704971-dde7-4ffa-a887-5e8067b964bd\" (UID: \"4a704971-dde7-4ffa-a887-5e8067b964bd\") " Mar 19 12:38:21.873870 master-0 kubenswrapper[31830]: I0319 12:38:21.864561 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a704971-dde7-4ffa-a887-5e8067b964bd-config-data\") pod \"4a704971-dde7-4ffa-a887-5e8067b964bd\" (UID: \"4a704971-dde7-4ffa-a887-5e8067b964bd\") " Mar 19 12:38:21.873870 master-0 kubenswrapper[31830]: I0319 12:38:21.864775 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdqlq\" (UniqueName: \"kubernetes.io/projected/4a704971-dde7-4ffa-a887-5e8067b964bd-kube-api-access-sdqlq\") pod \"4a704971-dde7-4ffa-a887-5e8067b964bd\" (UID: \"4a704971-dde7-4ffa-a887-5e8067b964bd\") " Mar 19 12:38:21.873870 master-0 kubenswrapper[31830]: I0319 12:38:21.864887 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a704971-dde7-4ffa-a887-5e8067b964bd-logs\") pod \"4a704971-dde7-4ffa-a887-5e8067b964bd\" (UID: \"4a704971-dde7-4ffa-a887-5e8067b964bd\") " Mar 19 12:38:21.873870 master-0 kubenswrapper[31830]: I0319 12:38:21.868964 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a704971-dde7-4ffa-a887-5e8067b964bd-logs" (OuterVolumeSpecName: "logs") pod "4a704971-dde7-4ffa-a887-5e8067b964bd" (UID: "4a704971-dde7-4ffa-a887-5e8067b964bd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 12:38:21.873870 master-0 kubenswrapper[31830]: I0319 12:38:21.872762 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a704971-dde7-4ffa-a887-5e8067b964bd-kube-api-access-sdqlq" (OuterVolumeSpecName: "kube-api-access-sdqlq") pod "4a704971-dde7-4ffa-a887-5e8067b964bd" (UID: "4a704971-dde7-4ffa-a887-5e8067b964bd"). InnerVolumeSpecName "kube-api-access-sdqlq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:38:21.932912 master-0 kubenswrapper[31830]: I0319 12:38:21.906822 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 19 12:38:21.932912 master-0 kubenswrapper[31830]: E0319 12:38:21.907437 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8188767-a3a9-4859-aa0f-bc448a038114" containerName="nova-cell1-conductor-db-sync" Mar 19 12:38:21.932912 master-0 kubenswrapper[31830]: I0319 12:38:21.907455 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8188767-a3a9-4859-aa0f-bc448a038114" containerName="nova-cell1-conductor-db-sync" Mar 19 12:38:21.932912 master-0 kubenswrapper[31830]: E0319 12:38:21.907483 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a704971-dde7-4ffa-a887-5e8067b964bd" containerName="nova-api-api" Mar 19 12:38:21.932912 master-0 kubenswrapper[31830]: I0319 12:38:21.907491 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a704971-dde7-4ffa-a887-5e8067b964bd" containerName="nova-api-api" Mar 19 12:38:21.932912 master-0 kubenswrapper[31830]: E0319 12:38:21.907538 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a704971-dde7-4ffa-a887-5e8067b964bd" containerName="nova-api-log" Mar 19 12:38:21.932912 master-0 kubenswrapper[31830]: I0319 12:38:21.907546 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a704971-dde7-4ffa-a887-5e8067b964bd" containerName="nova-api-log" Mar 19 12:38:21.932912 master-0 kubenswrapper[31830]: I0319 12:38:21.907841 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8188767-a3a9-4859-aa0f-bc448a038114" containerName="nova-cell1-conductor-db-sync" Mar 19 12:38:21.932912 master-0 kubenswrapper[31830]: I0319 12:38:21.907860 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a704971-dde7-4ffa-a887-5e8067b964bd" containerName="nova-api-api" Mar 19 12:38:21.932912 master-0 kubenswrapper[31830]: I0319 12:38:21.907885 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a704971-dde7-4ffa-a887-5e8067b964bd" containerName="nova-api-log" Mar 19 12:38:21.932912 master-0 kubenswrapper[31830]: I0319 12:38:21.908627 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 19 12:38:21.932912 master-0 kubenswrapper[31830]: I0319 12:38:21.922261 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 19 12:38:21.934281 master-0 kubenswrapper[31830]: I0319 12:38:21.933921 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 19 12:38:21.939840 master-0 kubenswrapper[31830]: I0319 12:38:21.934438 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 19 12:38:21.939840 master-0 kubenswrapper[31830]: E0319 12:38:21.939562 31830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a704971-dde7-4ffa-a887-5e8067b964bd-config-data podName:4a704971-dde7-4ffa-a887-5e8067b964bd nodeName:}" failed. No retries permitted until 2026-03-19 12:38:22.439519748 +0000 UTC m=+1440.988480452 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/4a704971-dde7-4ffa-a887-5e8067b964bd-config-data") pod "4a704971-dde7-4ffa-a887-5e8067b964bd" (UID: "4a704971-dde7-4ffa-a887-5e8067b964bd") : error deleting /var/lib/kubelet/pods/4a704971-dde7-4ffa-a887-5e8067b964bd/volume-subpaths: remove /var/lib/kubelet/pods/4a704971-dde7-4ffa-a887-5e8067b964bd/volume-subpaths: no such file or directory Mar 19 12:38:21.942717 master-0 kubenswrapper[31830]: I0319 12:38:21.942653 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a704971-dde7-4ffa-a887-5e8067b964bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a704971-dde7-4ffa-a887-5e8067b964bd" (UID: "4a704971-dde7-4ffa-a887-5e8067b964bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:38:21.972036 master-0 kubenswrapper[31830]: I0319 12:38:21.967692 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdqlq\" (UniqueName: \"kubernetes.io/projected/4a704971-dde7-4ffa-a887-5e8067b964bd-kube-api-access-sdqlq\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:21.972036 master-0 kubenswrapper[31830]: I0319 12:38:21.967732 31830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a704971-dde7-4ffa-a887-5e8067b964bd-logs\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:21.972036 master-0 kubenswrapper[31830]: I0319 12:38:21.967746 31830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a704971-dde7-4ffa-a887-5e8067b964bd-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:22.069070 master-0 kubenswrapper[31830]: I0319 12:38:22.068990 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z646q\" (UniqueName: \"kubernetes.io/projected/380fc05c-56b2-4e38-8601-bca5c49a343e-kube-api-access-z646q\") pod \"380fc05c-56b2-4e38-8601-bca5c49a343e\" (UID: \"380fc05c-56b2-4e38-8601-bca5c49a343e\") " Mar 19 12:38:22.069293 master-0 kubenswrapper[31830]: I0319 12:38:22.069141 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/380fc05c-56b2-4e38-8601-bca5c49a343e-config-data\") pod \"380fc05c-56b2-4e38-8601-bca5c49a343e\" (UID: \"380fc05c-56b2-4e38-8601-bca5c49a343e\") " Mar 19 12:38:22.069293 master-0 kubenswrapper[31830]: I0319 12:38:22.069222 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/380fc05c-56b2-4e38-8601-bca5c49a343e-combined-ca-bundle\") pod \"380fc05c-56b2-4e38-8601-bca5c49a343e\" (UID: \"380fc05c-56b2-4e38-8601-bca5c49a343e\") " Mar 19 12:38:22.069570 master-0 kubenswrapper[31830]: I0319 12:38:22.069528 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8vz4\" (UniqueName: \"kubernetes.io/projected/19617489-e8f7-405b-b047-7344b57f32b4-kube-api-access-j8vz4\") pod \"nova-cell1-conductor-0\" (UID: \"19617489-e8f7-405b-b047-7344b57f32b4\") " pod="openstack/nova-cell1-conductor-0" Mar 19 12:38:22.069660 master-0 kubenswrapper[31830]: I0319 12:38:22.069578 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/19617489-e8f7-405b-b047-7344b57f32b4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"19617489-e8f7-405b-b047-7344b57f32b4\") " pod="openstack/nova-cell1-conductor-0" Mar 19 12:38:22.069660 master-0 kubenswrapper[31830]: I0319 12:38:22.069617 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19617489-e8f7-405b-b047-7344b57f32b4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"19617489-e8f7-405b-b047-7344b57f32b4\") " pod="openstack/nova-cell1-conductor-0" Mar 19 12:38:22.073650 master-0 kubenswrapper[31830]: I0319 12:38:22.073571 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/380fc05c-56b2-4e38-8601-bca5c49a343e-kube-api-access-z646q" (OuterVolumeSpecName: "kube-api-access-z646q") pod "380fc05c-56b2-4e38-8601-bca5c49a343e" (UID: "380fc05c-56b2-4e38-8601-bca5c49a343e"). InnerVolumeSpecName "kube-api-access-z646q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:38:22.108253 master-0 kubenswrapper[31830]: I0319 12:38:22.108186 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/380fc05c-56b2-4e38-8601-bca5c49a343e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "380fc05c-56b2-4e38-8601-bca5c49a343e" (UID: "380fc05c-56b2-4e38-8601-bca5c49a343e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:38:22.119212 master-0 kubenswrapper[31830]: I0319 12:38:22.119152 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/380fc05c-56b2-4e38-8601-bca5c49a343e-config-data" (OuterVolumeSpecName: "config-data") pod "380fc05c-56b2-4e38-8601-bca5c49a343e" (UID: "380fc05c-56b2-4e38-8601-bca5c49a343e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:38:22.171614 master-0 kubenswrapper[31830]: I0319 12:38:22.171527 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8vz4\" (UniqueName: \"kubernetes.io/projected/19617489-e8f7-405b-b047-7344b57f32b4-kube-api-access-j8vz4\") pod \"nova-cell1-conductor-0\" (UID: \"19617489-e8f7-405b-b047-7344b57f32b4\") " pod="openstack/nova-cell1-conductor-0" Mar 19 12:38:22.171614 master-0 kubenswrapper[31830]: I0319 12:38:22.171605 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19617489-e8f7-405b-b047-7344b57f32b4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"19617489-e8f7-405b-b047-7344b57f32b4\") " pod="openstack/nova-cell1-conductor-0" Mar 19 12:38:22.172180 master-0 kubenswrapper[31830]: I0319 12:38:22.171949 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19617489-e8f7-405b-b047-7344b57f32b4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"19617489-e8f7-405b-b047-7344b57f32b4\") " pod="openstack/nova-cell1-conductor-0" Mar 19 12:38:22.172180 master-0 kubenswrapper[31830]: I0319 12:38:22.172132 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z646q\" (UniqueName: \"kubernetes.io/projected/380fc05c-56b2-4e38-8601-bca5c49a343e-kube-api-access-z646q\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:22.172180 master-0 kubenswrapper[31830]: I0319 12:38:22.172150 31830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/380fc05c-56b2-4e38-8601-bca5c49a343e-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:22.172180 master-0 kubenswrapper[31830]: I0319 12:38:22.172160 31830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/380fc05c-56b2-4e38-8601-bca5c49a343e-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:22.180816 master-0 kubenswrapper[31830]: I0319 12:38:22.175413 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19617489-e8f7-405b-b047-7344b57f32b4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"19617489-e8f7-405b-b047-7344b57f32b4\") " pod="openstack/nova-cell1-conductor-0" Mar 19 12:38:22.180816 master-0 kubenswrapper[31830]: I0319 12:38:22.175485 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19617489-e8f7-405b-b047-7344b57f32b4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"19617489-e8f7-405b-b047-7344b57f32b4\") " pod="openstack/nova-cell1-conductor-0" Mar 19 12:38:22.192973 master-0 kubenswrapper[31830]: I0319 12:38:22.189468 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8vz4\" (UniqueName: \"kubernetes.io/projected/19617489-e8f7-405b-b047-7344b57f32b4-kube-api-access-j8vz4\") pod \"nova-cell1-conductor-0\" (UID: \"19617489-e8f7-405b-b047-7344b57f32b4\") " pod="openstack/nova-cell1-conductor-0" Mar 19 12:38:22.260218 master-0 kubenswrapper[31830]: I0319 12:38:22.260094 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 19 12:38:22.482694 master-0 kubenswrapper[31830]: I0319 12:38:22.482431 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a704971-dde7-4ffa-a887-5e8067b964bd-config-data\") pod \"4a704971-dde7-4ffa-a887-5e8067b964bd\" (UID: \"4a704971-dde7-4ffa-a887-5e8067b964bd\") " Mar 19 12:38:22.486404 master-0 kubenswrapper[31830]: I0319 12:38:22.485548 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a704971-dde7-4ffa-a887-5e8067b964bd-config-data" (OuterVolumeSpecName: "config-data") pod "4a704971-dde7-4ffa-a887-5e8067b964bd" (UID: "4a704971-dde7-4ffa-a887-5e8067b964bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:38:22.585411 master-0 kubenswrapper[31830]: I0319 12:38:22.585259 31830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a704971-dde7-4ffa-a887-5e8067b964bd-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:22.756752 master-0 kubenswrapper[31830]: W0319 12:38:22.756681 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19617489_e8f7_405b_b047_7344b57f32b4.slice/crio-3525d0a44ac46ec47273a15cd4efe030372b7230076e22ede0f1c6ad40b86df6 WatchSource:0}: Error finding container 3525d0a44ac46ec47273a15cd4efe030372b7230076e22ede0f1c6ad40b86df6: Status 404 returned error can't find the container with id 3525d0a44ac46ec47273a15cd4efe030372b7230076e22ede0f1c6ad40b86df6 Mar 19 12:38:22.762727 master-0 kubenswrapper[31830]: I0319 12:38:22.761226 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 19 12:38:22.783172 master-0 kubenswrapper[31830]: I0319 12:38:22.783019 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"19617489-e8f7-405b-b047-7344b57f32b4","Type":"ContainerStarted","Data":"3525d0a44ac46ec47273a15cd4efe030372b7230076e22ede0f1c6ad40b86df6"} Mar 19 12:38:22.786213 master-0 kubenswrapper[31830]: I0319 12:38:22.786182 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4a704971-dde7-4ffa-a887-5e8067b964bd","Type":"ContainerDied","Data":"79e3e6f384167f2ee68f34cd050672a4cf965e042837b69f7206158116109203"} Mar 19 12:38:22.786294 master-0 kubenswrapper[31830]: I0319 12:38:22.786218 31830 scope.go:117] "RemoveContainer" containerID="12e1ab58c399bb832837cc2271e06fcd742d23055321f8ccabab84286e4af1c8" Mar 19 12:38:22.786294 master-0 kubenswrapper[31830]: I0319 12:38:22.786263 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 19 12:38:22.789747 master-0 kubenswrapper[31830]: I0319 12:38:22.789685 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"380fc05c-56b2-4e38-8601-bca5c49a343e","Type":"ContainerDied","Data":"ca023c33f14c9574149d38f87c1dd0ae70346868e4ce84e473ac3508720e64fe"} Mar 19 12:38:22.789747 master-0 kubenswrapper[31830]: I0319 12:38:22.789722 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 19 12:38:22.811037 master-0 kubenswrapper[31830]: I0319 12:38:22.810999 31830 scope.go:117] "RemoveContainer" containerID="619addc1e55430132d16f8617693ff97d3e98ff34bf359fc6e96a4e8dd573573" Mar 19 12:38:22.837656 master-0 kubenswrapper[31830]: I0319 12:38:22.835198 31830 scope.go:117] "RemoveContainer" containerID="7a4b98ddef5c00787fa798c47862a461d1e8e39d4e202113a4ec244fcf836ca8" Mar 19 12:38:22.854984 master-0 kubenswrapper[31830]: I0319 12:38:22.853975 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 19 12:38:22.875461 master-0 kubenswrapper[31830]: I0319 12:38:22.875388 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 19 12:38:22.889896 master-0 kubenswrapper[31830]: I0319 12:38:22.889044 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 12:38:22.917831 master-0 kubenswrapper[31830]: I0319 12:38:22.911864 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 19 12:38:22.917831 master-0 kubenswrapper[31830]: E0319 12:38:22.912478 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="380fc05c-56b2-4e38-8601-bca5c49a343e" containerName="nova-scheduler-scheduler" Mar 19 12:38:22.917831 master-0 kubenswrapper[31830]: I0319 12:38:22.912495 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="380fc05c-56b2-4e38-8601-bca5c49a343e" containerName="nova-scheduler-scheduler" Mar 19 12:38:22.917831 master-0 kubenswrapper[31830]: I0319 12:38:22.913182 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="380fc05c-56b2-4e38-8601-bca5c49a343e" containerName="nova-scheduler-scheduler" Mar 19 12:38:22.917831 master-0 kubenswrapper[31830]: I0319 12:38:22.914719 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 19 12:38:22.922816 master-0 kubenswrapper[31830]: I0319 12:38:22.919068 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 19 12:38:22.936504 master-0 kubenswrapper[31830]: I0319 12:38:22.936448 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 12:38:22.949123 master-0 kubenswrapper[31830]: I0319 12:38:22.949075 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 19 12:38:22.962621 master-0 kubenswrapper[31830]: I0319 12:38:22.962551 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 12:38:22.964549 master-0 kubenswrapper[31830]: I0319 12:38:22.964503 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 19 12:38:22.967082 master-0 kubenswrapper[31830]: I0319 12:38:22.967041 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 19 12:38:22.985264 master-0 kubenswrapper[31830]: I0319 12:38:22.981998 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 12:38:22.995119 master-0 kubenswrapper[31830]: I0319 12:38:22.995064 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d6bd496-fd6c-4f07-8f05-bef2fe80c398-config-data\") pod \"nova-api-0\" (UID: \"6d6bd496-fd6c-4f07-8f05-bef2fe80c398\") " pod="openstack/nova-api-0" Mar 19 12:38:22.995307 master-0 kubenswrapper[31830]: I0319 12:38:22.995257 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d6bd496-fd6c-4f07-8f05-bef2fe80c398-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6d6bd496-fd6c-4f07-8f05-bef2fe80c398\") " pod="openstack/nova-api-0" Mar 19 12:38:22.995368 master-0 kubenswrapper[31830]: I0319 12:38:22.995332 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d6bd496-fd6c-4f07-8f05-bef2fe80c398-logs\") pod \"nova-api-0\" (UID: \"6d6bd496-fd6c-4f07-8f05-bef2fe80c398\") " pod="openstack/nova-api-0" Mar 19 12:38:22.995488 master-0 kubenswrapper[31830]: I0319 12:38:22.995465 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxctj\" (UniqueName: \"kubernetes.io/projected/6d6bd496-fd6c-4f07-8f05-bef2fe80c398-kube-api-access-dxctj\") pod \"nova-api-0\" (UID: \"6d6bd496-fd6c-4f07-8f05-bef2fe80c398\") " pod="openstack/nova-api-0" Mar 19 12:38:23.097158 master-0 kubenswrapper[31830]: I0319 12:38:23.097027 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d6bd496-fd6c-4f07-8f05-bef2fe80c398-config-data\") pod \"nova-api-0\" (UID: \"6d6bd496-fd6c-4f07-8f05-bef2fe80c398\") " pod="openstack/nova-api-0" Mar 19 12:38:23.097158 master-0 kubenswrapper[31830]: I0319 12:38:23.097119 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d6bd496-fd6c-4f07-8f05-bef2fe80c398-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6d6bd496-fd6c-4f07-8f05-bef2fe80c398\") " pod="openstack/nova-api-0" Mar 19 12:38:23.097158 master-0 kubenswrapper[31830]: I0319 12:38:23.097155 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d6bd496-fd6c-4f07-8f05-bef2fe80c398-logs\") pod \"nova-api-0\" (UID: \"6d6bd496-fd6c-4f07-8f05-bef2fe80c398\") " pod="openstack/nova-api-0" Mar 19 12:38:23.097480 master-0 kubenswrapper[31830]: I0319 12:38:23.097214 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce924861-cb91-4340-9cd7-d3c74dc4b11c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ce924861-cb91-4340-9cd7-d3c74dc4b11c\") " pod="openstack/nova-scheduler-0" Mar 19 12:38:23.097480 master-0 kubenswrapper[31830]: I0319 12:38:23.097279 31830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dxctj\" (UniqueName: \"kubernetes.io/projected/6d6bd496-fd6c-4f07-8f05-bef2fe80c398-kube-api-access-dxctj\") pod \"nova-api-0\" (UID: \"6d6bd496-fd6c-4f07-8f05-bef2fe80c398\") " pod="openstack/nova-api-0" Mar 19 12:38:23.097480 master-0 kubenswrapper[31830]: I0319 12:38:23.097389 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cl4p\" (UniqueName: \"kubernetes.io/projected/ce924861-cb91-4340-9cd7-d3c74dc4b11c-kube-api-access-2cl4p\") pod \"nova-scheduler-0\" (UID: \"ce924861-cb91-4340-9cd7-d3c74dc4b11c\") " pod="openstack/nova-scheduler-0" Mar 19 12:38:23.097480 master-0 kubenswrapper[31830]: I0319 12:38:23.097461 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce924861-cb91-4340-9cd7-d3c74dc4b11c-config-data\") pod \"nova-scheduler-0\" (UID: \"ce924861-cb91-4340-9cd7-d3c74dc4b11c\") " pod="openstack/nova-scheduler-0" Mar 19 12:38:23.097867 master-0 kubenswrapper[31830]: I0319 12:38:23.097760 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d6bd496-fd6c-4f07-8f05-bef2fe80c398-logs\") pod \"nova-api-0\" (UID: \"6d6bd496-fd6c-4f07-8f05-bef2fe80c398\") " pod="openstack/nova-api-0" Mar 19 12:38:23.100870 master-0 kubenswrapper[31830]: I0319 12:38:23.100817 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d6bd496-fd6c-4f07-8f05-bef2fe80c398-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6d6bd496-fd6c-4f07-8f05-bef2fe80c398\") " pod="openstack/nova-api-0" Mar 19 12:38:23.101019 master-0 kubenswrapper[31830]: I0319 12:38:23.100898 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d6bd496-fd6c-4f07-8f05-bef2fe80c398-config-data\") pod \"nova-api-0\" (UID: \"6d6bd496-fd6c-4f07-8f05-bef2fe80c398\") " pod="openstack/nova-api-0" Mar 19 12:38:23.113360 master-0 kubenswrapper[31830]: I0319 12:38:23.113296 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxctj\" (UniqueName: \"kubernetes.io/projected/6d6bd496-fd6c-4f07-8f05-bef2fe80c398-kube-api-access-dxctj\") pod \"nova-api-0\" (UID: \"6d6bd496-fd6c-4f07-8f05-bef2fe80c398\") " pod="openstack/nova-api-0" Mar 19 12:38:23.199257 master-0 kubenswrapper[31830]: I0319 12:38:23.199193 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cl4p\" (UniqueName: \"kubernetes.io/projected/ce924861-cb91-4340-9cd7-d3c74dc4b11c-kube-api-access-2cl4p\") pod \"nova-scheduler-0\" (UID: \"ce924861-cb91-4340-9cd7-d3c74dc4b11c\") " pod="openstack/nova-scheduler-0" Mar 19 12:38:23.199462 master-0 kubenswrapper[31830]: I0319 12:38:23.199277 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce924861-cb91-4340-9cd7-d3c74dc4b11c-config-data\") pod \"nova-scheduler-0\" (UID: \"ce924861-cb91-4340-9cd7-d3c74dc4b11c\") " pod="openstack/nova-scheduler-0" Mar 19 12:38:23.199462 master-0 kubenswrapper[31830]: I0319 12:38:23.199379 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce924861-cb91-4340-9cd7-d3c74dc4b11c-combined-ca-bundle\") pod 
\"nova-scheduler-0\" (UID: \"ce924861-cb91-4340-9cd7-d3c74dc4b11c\") " pod="openstack/nova-scheduler-0" Mar 19 12:38:23.203364 master-0 kubenswrapper[31830]: I0319 12:38:23.203088 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce924861-cb91-4340-9cd7-d3c74dc4b11c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ce924861-cb91-4340-9cd7-d3c74dc4b11c\") " pod="openstack/nova-scheduler-0" Mar 19 12:38:23.204948 master-0 kubenswrapper[31830]: I0319 12:38:23.204224 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce924861-cb91-4340-9cd7-d3c74dc4b11c-config-data\") pod \"nova-scheduler-0\" (UID: \"ce924861-cb91-4340-9cd7-d3c74dc4b11c\") " pod="openstack/nova-scheduler-0" Mar 19 12:38:23.215901 master-0 kubenswrapper[31830]: I0319 12:38:23.215842 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cl4p\" (UniqueName: \"kubernetes.io/projected/ce924861-cb91-4340-9cd7-d3c74dc4b11c-kube-api-access-2cl4p\") pod \"nova-scheduler-0\" (UID: \"ce924861-cb91-4340-9cd7-d3c74dc4b11c\") " pod="openstack/nova-scheduler-0" Mar 19 12:38:23.245000 master-0 kubenswrapper[31830]: I0319 12:38:23.244955 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 19 12:38:23.286270 master-0 kubenswrapper[31830]: I0319 12:38:23.286212 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 19 12:38:23.697380 master-0 kubenswrapper[31830]: I0319 12:38:23.697246 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="380fc05c-56b2-4e38-8601-bca5c49a343e" path="/var/lib/kubelet/pods/380fc05c-56b2-4e38-8601-bca5c49a343e/volumes" Mar 19 12:38:23.697972 master-0 kubenswrapper[31830]: I0319 12:38:23.697893 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a704971-dde7-4ffa-a887-5e8067b964bd" path="/var/lib/kubelet/pods/4a704971-dde7-4ffa-a887-5e8067b964bd/volumes" Mar 19 12:38:23.766958 master-0 kubenswrapper[31830]: I0319 12:38:23.766896 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 19 12:38:23.812823 master-0 kubenswrapper[31830]: I0319 12:38:23.806347 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d6bd496-fd6c-4f07-8f05-bef2fe80c398","Type":"ContainerStarted","Data":"83028f65c7c5b18ff1cfb4a19143359bdf1116fd0ee3a5a43c4be53452b8c24b"} Mar 19 12:38:23.812823 master-0 kubenswrapper[31830]: I0319 12:38:23.808739 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"19617489-e8f7-405b-b047-7344b57f32b4","Type":"ContainerStarted","Data":"c4c81e9b5097b0f341844dcb2e8c02a3fd6e50dc5209d497fe51770955968629"} Mar 19 12:38:23.812823 master-0 kubenswrapper[31830]: I0319 12:38:23.810385 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Mar 19 12:38:23.849108 master-0 kubenswrapper[31830]: I0319 12:38:23.849031 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.849010366 podStartE2EDuration="2.849010366s" podCreationTimestamp="2026-03-19 12:38:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 
12:38:23.831365149 +0000 UTC m=+1442.380325853" watchObservedRunningTime="2026-03-19 12:38:23.849010366 +0000 UTC m=+1442.397971070" Mar 19 12:38:23.913205 master-0 kubenswrapper[31830]: W0319 12:38:23.913138 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce924861_cb91_4340_9cd7_d3c74dc4b11c.slice/crio-99e548499c29b62e1aa01610dfa7cc377868c4e6fc9d4a8f37c61199f379c03b WatchSource:0}: Error finding container 99e548499c29b62e1aa01610dfa7cc377868c4e6fc9d4a8f37c61199f379c03b: Status 404 returned error can't find the container with id 99e548499c29b62e1aa01610dfa7cc377868c4e6fc9d4a8f37c61199f379c03b Mar 19 12:38:23.916577 master-0 kubenswrapper[31830]: I0319 12:38:23.916517 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 12:38:24.821750 master-0 kubenswrapper[31830]: I0319 12:38:24.821441 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d6bd496-fd6c-4f07-8f05-bef2fe80c398","Type":"ContainerStarted","Data":"33ed0630cca45b4f3dd14183783c57f493ff858c7d541373fc175c7bac521283"} Mar 19 12:38:24.821750 master-0 kubenswrapper[31830]: I0319 12:38:24.821508 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d6bd496-fd6c-4f07-8f05-bef2fe80c398","Type":"ContainerStarted","Data":"aa40425f108f9da61bebf587b917fb91255509b7b9a77b38f7eaf6d53ac52635"} Mar 19 12:38:24.823783 master-0 kubenswrapper[31830]: I0319 12:38:24.823749 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ce924861-cb91-4340-9cd7-d3c74dc4b11c","Type":"ContainerStarted","Data":"92c66ec58f38fdc5a75100c4de18eb5e0dd3bf723e92a7aa54355b872a94bd7f"} Mar 19 12:38:24.823878 master-0 kubenswrapper[31830]: I0319 12:38:24.823807 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ce924861-cb91-4340-9cd7-d3c74dc4b11c","Type":"ContainerStarted","Data":"99e548499c29b62e1aa01610dfa7cc377868c4e6fc9d4a8f37c61199f379c03b"} Mar 19 12:38:24.848437 master-0 kubenswrapper[31830]: I0319 12:38:24.846768 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.846745319 podStartE2EDuration="2.846745319s" podCreationTimestamp="2026-03-19 12:38:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:38:24.840687021 +0000 UTC m=+1443.389647725" watchObservedRunningTime="2026-03-19 12:38:24.846745319 +0000 UTC m=+1443.395706013" Mar 19 12:38:24.878715 master-0 kubenswrapper[31830]: I0319 12:38:24.878611 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.878592707 podStartE2EDuration="2.878592707s" podCreationTimestamp="2026-03-19 12:38:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:38:24.877235034 +0000 UTC m=+1443.426195738" watchObservedRunningTime="2026-03-19 12:38:24.878592707 +0000 UTC m=+1443.427553411" Mar 19 12:38:27.290341 master-0 kubenswrapper[31830]: I0319 12:38:27.290289 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Mar 19 12:38:28.286935 master-0 kubenswrapper[31830]: I0319 12:38:28.286891 31830 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 19 12:38:28.364515 master-0 kubenswrapper[31830]: I0319 12:38:28.364468 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 19 12:38:28.365195 master-0 kubenswrapper[31830]: I0319 12:38:28.365175 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 19 12:38:29.381083 master-0 kubenswrapper[31830]: I0319 12:38:29.381027 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2d718087-0caf-46be-9c73-6464f876f335" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.0.254:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 12:38:29.381819 master-0 kubenswrapper[31830]: I0319 12:38:29.381027 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2d718087-0caf-46be-9c73-6464f876f335" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.0.254:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 12:38:33.245173 master-0 kubenswrapper[31830]: I0319 12:38:33.245118 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 19 12:38:33.245765 master-0 kubenswrapper[31830]: I0319 12:38:33.245191 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 19 12:38:33.289252 master-0 kubenswrapper[31830]: I0319 12:38:33.289180 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 19 12:38:33.325951 master-0 kubenswrapper[31830]: I0319 12:38:33.325730 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 19 12:38:33.961849 master-0 kubenswrapper[31830]: I0319 12:38:33.961354 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 19 12:38:34.328190 master-0 kubenswrapper[31830]: I0319 12:38:34.328116 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6d6bd496-fd6c-4f07-8f05-bef2fe80c398" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.1.0:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 12:38:34.328764 master-0 kubenswrapper[31830]: I0319 12:38:34.328191 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6d6bd496-fd6c-4f07-8f05-bef2fe80c398" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.1.0:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 12:38:36.363887 master-0 kubenswrapper[31830]: I0319 12:38:36.363774 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 19 12:38:36.363887 master-0 kubenswrapper[31830]: I0319 12:38:36.363860 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 19 12:38:38.370648 master-0 kubenswrapper[31830]: I0319 12:38:38.370600 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 19 12:38:38.372000 master-0 kubenswrapper[31830]: I0319 12:38:38.371951 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/nova-metadata-0" Mar 19 12:38:38.378723 master-0 kubenswrapper[31830]: I0319 12:38:38.378671 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 19 12:38:38.989837 master-0 kubenswrapper[31830]: I0319 12:38:38.989773 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 19 12:38:40.953620 master-0 kubenswrapper[31830]: I0319 12:38:40.953579 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 19 12:38:41.007750 master-0 kubenswrapper[31830]: I0319 12:38:41.007685 31830 generic.go:334] "Generic (PLEG): container finished" podID="e87b78cf-8720-4f07-8bb5-e8a2de404fea" containerID="b3ab4fde11634d6fc2d4a0ca60dc25a869215c2e6c7cd820e6dcbd49e69d9632" exitCode=137 Mar 19 12:38:41.008912 master-0 kubenswrapper[31830]: I0319 12:38:41.008739 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 19 12:38:41.009663 master-0 kubenswrapper[31830]: I0319 12:38:41.009627 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e87b78cf-8720-4f07-8bb5-e8a2de404fea","Type":"ContainerDied","Data":"b3ab4fde11634d6fc2d4a0ca60dc25a869215c2e6c7cd820e6dcbd49e69d9632"} Mar 19 12:38:41.009709 master-0 kubenswrapper[31830]: I0319 12:38:41.009693 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e87b78cf-8720-4f07-8bb5-e8a2de404fea","Type":"ContainerDied","Data":"1e34ec055a358ffb0c1b73ea8bf17964dd2599c76b4f4932193911c20cc7d4a2"} Mar 19 12:38:41.009743 master-0 kubenswrapper[31830]: I0319 12:38:41.009722 31830 scope.go:117] "RemoveContainer" containerID="b3ab4fde11634d6fc2d4a0ca60dc25a869215c2e6c7cd820e6dcbd49e69d9632" Mar 19 12:38:41.037027 master-0 kubenswrapper[31830]: I0319 12:38:41.036982 31830 scope.go:117] "RemoveContainer" containerID="b3ab4fde11634d6fc2d4a0ca60dc25a869215c2e6c7cd820e6dcbd49e69d9632" Mar 19 12:38:41.037441 master-0 kubenswrapper[31830]: E0319 12:38:41.037404 31830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3ab4fde11634d6fc2d4a0ca60dc25a869215c2e6c7cd820e6dcbd49e69d9632\": container with ID starting with b3ab4fde11634d6fc2d4a0ca60dc25a869215c2e6c7cd820e6dcbd49e69d9632 not found: ID does not exist" containerID="b3ab4fde11634d6fc2d4a0ca60dc25a869215c2e6c7cd820e6dcbd49e69d9632" Mar 19 12:38:41.037500 master-0 kubenswrapper[31830]: I0319 12:38:41.037454 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3ab4fde11634d6fc2d4a0ca60dc25a869215c2e6c7cd820e6dcbd49e69d9632"} err="failed to get container status \"b3ab4fde11634d6fc2d4a0ca60dc25a869215c2e6c7cd820e6dcbd49e69d9632\": rpc error: code = NotFound desc = could not find container \"b3ab4fde11634d6fc2d4a0ca60dc25a869215c2e6c7cd820e6dcbd49e69d9632\": container with ID starting with b3ab4fde11634d6fc2d4a0ca60dc25a869215c2e6c7cd820e6dcbd49e69d9632 not found: ID does not exist" Mar 19 12:38:41.087844 master-0 kubenswrapper[31830]: I0319 12:38:41.087761 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e87b78cf-8720-4f07-8bb5-e8a2de404fea-combined-ca-bundle\") pod \"e87b78cf-8720-4f07-8bb5-e8a2de404fea\" (UID: 
\"e87b78cf-8720-4f07-8bb5-e8a2de404fea\") " Mar 19 12:38:41.088192 master-0 kubenswrapper[31830]: I0319 12:38:41.088156 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8g62\" (UniqueName: \"kubernetes.io/projected/e87b78cf-8720-4f07-8bb5-e8a2de404fea-kube-api-access-k8g62\") pod \"e87b78cf-8720-4f07-8bb5-e8a2de404fea\" (UID: \"e87b78cf-8720-4f07-8bb5-e8a2de404fea\") " Mar 19 12:38:41.088260 master-0 kubenswrapper[31830]: I0319 12:38:41.088245 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e87b78cf-8720-4f07-8bb5-e8a2de404fea-config-data\") pod \"e87b78cf-8720-4f07-8bb5-e8a2de404fea\" (UID: \"e87b78cf-8720-4f07-8bb5-e8a2de404fea\") " Mar 19 12:38:41.094909 master-0 kubenswrapper[31830]: I0319 12:38:41.094779 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e87b78cf-8720-4f07-8bb5-e8a2de404fea-kube-api-access-k8g62" (OuterVolumeSpecName: "kube-api-access-k8g62") pod "e87b78cf-8720-4f07-8bb5-e8a2de404fea" (UID: "e87b78cf-8720-4f07-8bb5-e8a2de404fea"). InnerVolumeSpecName "kube-api-access-k8g62". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:38:41.114906 master-0 kubenswrapper[31830]: I0319 12:38:41.114782 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e87b78cf-8720-4f07-8bb5-e8a2de404fea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e87b78cf-8720-4f07-8bb5-e8a2de404fea" (UID: "e87b78cf-8720-4f07-8bb5-e8a2de404fea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:38:41.122865 master-0 kubenswrapper[31830]: I0319 12:38:41.121775 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e87b78cf-8720-4f07-8bb5-e8a2de404fea-config-data" (OuterVolumeSpecName: "config-data") pod "e87b78cf-8720-4f07-8bb5-e8a2de404fea" (UID: "e87b78cf-8720-4f07-8bb5-e8a2de404fea"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:38:41.191645 master-0 kubenswrapper[31830]: I0319 12:38:41.191580 31830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e87b78cf-8720-4f07-8bb5-e8a2de404fea-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:41.191645 master-0 kubenswrapper[31830]: I0319 12:38:41.191627 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8g62\" (UniqueName: \"kubernetes.io/projected/e87b78cf-8720-4f07-8bb5-e8a2de404fea-kube-api-access-k8g62\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:41.191645 master-0 kubenswrapper[31830]: I0319 12:38:41.191644 31830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e87b78cf-8720-4f07-8bb5-e8a2de404fea-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:41.247826 master-0 kubenswrapper[31830]: I0319 12:38:41.245775 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 19 12:38:41.247826 master-0 kubenswrapper[31830]: I0319 12:38:41.245870 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 19 12:38:41.352030 master-0 kubenswrapper[31830]: I0319 12:38:41.351649 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 19 12:38:41.381537 master-0 kubenswrapper[31830]: I0319 12:38:41.381482 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 19 12:38:41.404634 master-0 kubenswrapper[31830]: I0319 12:38:41.396731 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 19 12:38:41.404634 master-0 kubenswrapper[31830]: E0319 12:38:41.397367 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e87b78cf-8720-4f07-8bb5-e8a2de404fea" containerName="nova-cell1-novncproxy-novncproxy" Mar 19 12:38:41.404634 master-0 kubenswrapper[31830]: I0319 12:38:41.397382 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e87b78cf-8720-4f07-8bb5-e8a2de404fea" containerName="nova-cell1-novncproxy-novncproxy" Mar 19 12:38:41.404634 master-0 kubenswrapper[31830]: I0319 12:38:41.397642 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="e87b78cf-8720-4f07-8bb5-e8a2de404fea" containerName="nova-cell1-novncproxy-novncproxy" Mar 19 12:38:41.404634 master-0 kubenswrapper[31830]: I0319 12:38:41.398467 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 19 12:38:41.404634 master-0 kubenswrapper[31830]: I0319 12:38:41.404462 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Mar 19 12:38:41.406399 master-0 kubenswrapper[31830]: I0319 12:38:41.404897 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Mar 19 12:38:41.406399 master-0 kubenswrapper[31830]: I0319 12:38:41.405038 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 19 12:38:41.408119 master-0 kubenswrapper[31830]: I0319 12:38:41.407068 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 19 12:38:41.503684 master-0 kubenswrapper[31830]: I0319 12:38:41.503607 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52b10d7b-aab9-490d-a80b-633a24199fa9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"52b10d7b-aab9-490d-a80b-633a24199fa9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 12:38:41.503930 master-0 kubenswrapper[31830]: I0319 12:38:41.503690 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/52b10d7b-aab9-490d-a80b-633a24199fa9-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"52b10d7b-aab9-490d-a80b-633a24199fa9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 12:38:41.505028 master-0 kubenswrapper[31830]: I0319 12:38:41.504963 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/52b10d7b-aab9-490d-a80b-633a24199fa9-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"52b10d7b-aab9-490d-a80b-633a24199fa9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 12:38:41.505151 master-0 kubenswrapper[31830]: I0319 12:38:41.505124 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8grw\" (UniqueName: \"kubernetes.io/projected/52b10d7b-aab9-490d-a80b-633a24199fa9-kube-api-access-j8grw\") pod \"nova-cell1-novncproxy-0\" (UID: \"52b10d7b-aab9-490d-a80b-633a24199fa9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 12:38:41.505213 master-0 kubenswrapper[31830]: I0319 12:38:41.505190 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52b10d7b-aab9-490d-a80b-633a24199fa9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"52b10d7b-aab9-490d-a80b-633a24199fa9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 12:38:41.607182 master-0 kubenswrapper[31830]: I0319 12:38:41.607032 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52b10d7b-aab9-490d-a80b-633a24199fa9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"52b10d7b-aab9-490d-a80b-633a24199fa9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 12:38:41.607182 master-0 kubenswrapper[31830]: I0319 12:38:41.607126 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/52b10d7b-aab9-490d-a80b-633a24199fa9-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"52b10d7b-aab9-490d-a80b-633a24199fa9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 12:38:41.607425 master-0 kubenswrapper[31830]: I0319 12:38:41.607358 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/52b10d7b-aab9-490d-a80b-633a24199fa9-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"52b10d7b-aab9-490d-a80b-633a24199fa9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 12:38:41.607618 master-0 kubenswrapper[31830]: I0319 12:38:41.607594 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8grw\" (UniqueName: \"kubernetes.io/projected/52b10d7b-aab9-490d-a80b-633a24199fa9-kube-api-access-j8grw\") pod \"nova-cell1-novncproxy-0\" (UID: \"52b10d7b-aab9-490d-a80b-633a24199fa9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 12:38:41.610816 master-0 kubenswrapper[31830]: I0319 12:38:41.608108 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52b10d7b-aab9-490d-a80b-633a24199fa9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"52b10d7b-aab9-490d-a80b-633a24199fa9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 12:38:41.614816 master-0 kubenswrapper[31830]: I0319 12:38:41.611445 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/52b10d7b-aab9-490d-a80b-633a24199fa9-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"52b10d7b-aab9-490d-a80b-633a24199fa9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 12:38:41.614816 master-0 kubenswrapper[31830]: I0319 12:38:41.611578 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/52b10d7b-aab9-490d-a80b-633a24199fa9-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"52b10d7b-aab9-490d-a80b-633a24199fa9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 12:38:41.614816 master-0 kubenswrapper[31830]: I0319 12:38:41.611947 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52b10d7b-aab9-490d-a80b-633a24199fa9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"52b10d7b-aab9-490d-a80b-633a24199fa9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 12:38:41.614816 master-0 kubenswrapper[31830]: I0319 12:38:41.613757 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52b10d7b-aab9-490d-a80b-633a24199fa9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"52b10d7b-aab9-490d-a80b-633a24199fa9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 12:38:41.625358 master-0 kubenswrapper[31830]: I0319 12:38:41.625301 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8grw\" (UniqueName: \"kubernetes.io/projected/52b10d7b-aab9-490d-a80b-633a24199fa9-kube-api-access-j8grw\") pod \"nova-cell1-novncproxy-0\" (UID: \"52b10d7b-aab9-490d-a80b-633a24199fa9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 12:38:41.692504 master-0 kubenswrapper[31830]: I0319 12:38:41.692446 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e87b78cf-8720-4f07-8bb5-e8a2de404fea" 
path="/var/lib/kubelet/pods/e87b78cf-8720-4f07-8bb5-e8a2de404fea/volumes" Mar 19 12:38:41.726177 master-0 kubenswrapper[31830]: I0319 12:38:41.726112 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 19 12:38:42.212970 master-0 kubenswrapper[31830]: W0319 12:38:42.212897 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52b10d7b_aab9_490d_a80b_633a24199fa9.slice/crio-1502d529621d9f85f80ddc1ba7fc87353be80c10c9159922b5993b47ea759403 WatchSource:0}: Error finding container 1502d529621d9f85f80ddc1ba7fc87353be80c10c9159922b5993b47ea759403: Status 404 returned error can't find the container with id 1502d529621d9f85f80ddc1ba7fc87353be80c10c9159922b5993b47ea759403 Mar 19 12:38:42.223360 master-0 kubenswrapper[31830]: I0319 12:38:42.223282 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 19 12:38:43.040941 master-0 kubenswrapper[31830]: I0319 12:38:43.040793 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"52b10d7b-aab9-490d-a80b-633a24199fa9","Type":"ContainerStarted","Data":"4f962286ac8e77d1815299db21709073a78bff59b8aa40919c6c2228bfe4289c"} Mar 19 12:38:43.040941 master-0 kubenswrapper[31830]: I0319 12:38:43.040866 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"52b10d7b-aab9-490d-a80b-633a24199fa9","Type":"ContainerStarted","Data":"1502d529621d9f85f80ddc1ba7fc87353be80c10c9159922b5993b47ea759403"} Mar 19 12:38:43.075412 master-0 kubenswrapper[31830]: I0319 12:38:43.075316 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.075293309 podStartE2EDuration="2.075293309s" podCreationTimestamp="2026-03-19 12:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:38:43.059980119 +0000 UTC m=+1461.608940843" watchObservedRunningTime="2026-03-19 12:38:43.075293309 +0000 UTC m=+1461.624254013" Mar 19 12:38:43.250232 master-0 kubenswrapper[31830]: I0319 12:38:43.250167 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 19 12:38:43.253048 master-0 kubenswrapper[31830]: I0319 12:38:43.253001 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 19 12:38:43.254512 master-0 kubenswrapper[31830]: I0319 12:38:43.254467 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 19 12:38:44.063215 master-0 kubenswrapper[31830]: I0319 12:38:44.063138 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 19 12:38:44.304235 master-0 kubenswrapper[31830]: I0319 12:38:44.303208 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7655479f8c-g8h6c"] Mar 19 12:38:44.323962 master-0 kubenswrapper[31830]: I0319 12:38:44.306126 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7655479f8c-g8h6c" Mar 19 12:38:44.356689 master-0 kubenswrapper[31830]: I0319 12:38:44.353870 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7655479f8c-g8h6c"] Mar 19 12:38:44.431840 master-0 kubenswrapper[31830]: I0319 12:38:44.422527 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ad8a503-0511-4da7-b07a-52da9ab0f637-dns-svc\") pod \"dnsmasq-dns-7655479f8c-g8h6c\" (UID: \"2ad8a503-0511-4da7-b07a-52da9ab0f637\") " pod="openstack/dnsmasq-dns-7655479f8c-g8h6c" Mar 19 12:38:44.431840 master-0 kubenswrapper[31830]: I0319 12:38:44.422618 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwjjr\" (UniqueName: \"kubernetes.io/projected/2ad8a503-0511-4da7-b07a-52da9ab0f637-kube-api-access-vwjjr\") pod \"dnsmasq-dns-7655479f8c-g8h6c\" (UID: \"2ad8a503-0511-4da7-b07a-52da9ab0f637\") " pod="openstack/dnsmasq-dns-7655479f8c-g8h6c" Mar 19 12:38:44.431840 master-0 kubenswrapper[31830]: I0319 12:38:44.424101 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ad8a503-0511-4da7-b07a-52da9ab0f637-dns-swift-storage-0\") pod \"dnsmasq-dns-7655479f8c-g8h6c\" (UID: \"2ad8a503-0511-4da7-b07a-52da9ab0f637\") " pod="openstack/dnsmasq-dns-7655479f8c-g8h6c" Mar 19 12:38:44.431840 master-0 kubenswrapper[31830]: I0319 12:38:44.424186 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/2ad8a503-0511-4da7-b07a-52da9ab0f637-edpm-a\") pod \"dnsmasq-dns-7655479f8c-g8h6c\" (UID: \"2ad8a503-0511-4da7-b07a-52da9ab0f637\") " pod="openstack/dnsmasq-dns-7655479f8c-g8h6c" Mar 19 12:38:44.431840 master-0 kubenswrapper[31830]: I0319 12:38:44.424262 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/2ad8a503-0511-4da7-b07a-52da9ab0f637-edpm-b\") pod \"dnsmasq-dns-7655479f8c-g8h6c\" (UID: \"2ad8a503-0511-4da7-b07a-52da9ab0f637\") " pod="openstack/dnsmasq-dns-7655479f8c-g8h6c" Mar 19 12:38:44.431840 master-0 kubenswrapper[31830]: I0319 12:38:44.424299 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ad8a503-0511-4da7-b07a-52da9ab0f637-config\") pod \"dnsmasq-dns-7655479f8c-g8h6c\" (UID: \"2ad8a503-0511-4da7-b07a-52da9ab0f637\") " pod="openstack/dnsmasq-dns-7655479f8c-g8h6c" Mar 19 12:38:44.431840 master-0 kubenswrapper[31830]: I0319 12:38:44.424363 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ad8a503-0511-4da7-b07a-52da9ab0f637-ovsdbserver-nb\") pod \"dnsmasq-dns-7655479f8c-g8h6c\" (UID: \"2ad8a503-0511-4da7-b07a-52da9ab0f637\") " pod="openstack/dnsmasq-dns-7655479f8c-g8h6c" Mar 19 12:38:44.431840 master-0 kubenswrapper[31830]: I0319 12:38:44.424498 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ad8a503-0511-4da7-b07a-52da9ab0f637-ovsdbserver-sb\") pod \"dnsmasq-dns-7655479f8c-g8h6c\" (UID: \"2ad8a503-0511-4da7-b07a-52da9ab0f637\") " 
pod="openstack/dnsmasq-dns-7655479f8c-g8h6c" Mar 19 12:38:44.527824 master-0 kubenswrapper[31830]: I0319 12:38:44.527064 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/2ad8a503-0511-4da7-b07a-52da9ab0f637-edpm-b\") pod \"dnsmasq-dns-7655479f8c-g8h6c\" (UID: \"2ad8a503-0511-4da7-b07a-52da9ab0f637\") " pod="openstack/dnsmasq-dns-7655479f8c-g8h6c" Mar 19 12:38:44.527824 master-0 kubenswrapper[31830]: I0319 12:38:44.527267 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ad8a503-0511-4da7-b07a-52da9ab0f637-config\") pod \"dnsmasq-dns-7655479f8c-g8h6c\" (UID: \"2ad8a503-0511-4da7-b07a-52da9ab0f637\") " pod="openstack/dnsmasq-dns-7655479f8c-g8h6c" Mar 19 12:38:44.527824 master-0 kubenswrapper[31830]: I0319 12:38:44.527385 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ad8a503-0511-4da7-b07a-52da9ab0f637-ovsdbserver-nb\") pod \"dnsmasq-dns-7655479f8c-g8h6c\" (UID: \"2ad8a503-0511-4da7-b07a-52da9ab0f637\") " pod="openstack/dnsmasq-dns-7655479f8c-g8h6c" Mar 19 12:38:44.527824 master-0 kubenswrapper[31830]: I0319 12:38:44.527480 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ad8a503-0511-4da7-b07a-52da9ab0f637-ovsdbserver-sb\") pod \"dnsmasq-dns-7655479f8c-g8h6c\" (UID: \"2ad8a503-0511-4da7-b07a-52da9ab0f637\") " pod="openstack/dnsmasq-dns-7655479f8c-g8h6c" Mar 19 12:38:44.527824 master-0 kubenswrapper[31830]: I0319 12:38:44.527597 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ad8a503-0511-4da7-b07a-52da9ab0f637-dns-svc\") pod \"dnsmasq-dns-7655479f8c-g8h6c\" (UID: \"2ad8a503-0511-4da7-b07a-52da9ab0f637\") " pod="openstack/dnsmasq-dns-7655479f8c-g8h6c" Mar 19 12:38:44.527824 master-0 kubenswrapper[31830]: I0319 12:38:44.527731 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwjjr\" (UniqueName: \"kubernetes.io/projected/2ad8a503-0511-4da7-b07a-52da9ab0f637-kube-api-access-vwjjr\") pod \"dnsmasq-dns-7655479f8c-g8h6c\" (UID: \"2ad8a503-0511-4da7-b07a-52da9ab0f637\") " pod="openstack/dnsmasq-dns-7655479f8c-g8h6c" Mar 19 12:38:44.528273 master-0 kubenswrapper[31830]: I0319 12:38:44.528017 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ad8a503-0511-4da7-b07a-52da9ab0f637-dns-swift-storage-0\") pod \"dnsmasq-dns-7655479f8c-g8h6c\" (UID: \"2ad8a503-0511-4da7-b07a-52da9ab0f637\") " pod="openstack/dnsmasq-dns-7655479f8c-g8h6c" Mar 19 12:38:44.528273 master-0 kubenswrapper[31830]: I0319 12:38:44.528083 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/2ad8a503-0511-4da7-b07a-52da9ab0f637-edpm-a\") pod \"dnsmasq-dns-7655479f8c-g8h6c\" (UID: \"2ad8a503-0511-4da7-b07a-52da9ab0f637\") " pod="openstack/dnsmasq-dns-7655479f8c-g8h6c" Mar 19 12:38:44.528273 master-0 kubenswrapper[31830]: I0319 12:38:44.528181 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/2ad8a503-0511-4da7-b07a-52da9ab0f637-edpm-b\") pod \"dnsmasq-dns-7655479f8c-g8h6c\" (UID: 
\"2ad8a503-0511-4da7-b07a-52da9ab0f637\") " pod="openstack/dnsmasq-dns-7655479f8c-g8h6c" Mar 19 12:38:44.531819 master-0 kubenswrapper[31830]: I0319 12:38:44.528848 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/2ad8a503-0511-4da7-b07a-52da9ab0f637-edpm-a\") pod \"dnsmasq-dns-7655479f8c-g8h6c\" (UID: \"2ad8a503-0511-4da7-b07a-52da9ab0f637\") " pod="openstack/dnsmasq-dns-7655479f8c-g8h6c" Mar 19 12:38:44.531819 master-0 kubenswrapper[31830]: I0319 12:38:44.529181 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ad8a503-0511-4da7-b07a-52da9ab0f637-dns-swift-storage-0\") pod \"dnsmasq-dns-7655479f8c-g8h6c\" (UID: \"2ad8a503-0511-4da7-b07a-52da9ab0f637\") " pod="openstack/dnsmasq-dns-7655479f8c-g8h6c" Mar 19 12:38:44.531819 master-0 kubenswrapper[31830]: I0319 12:38:44.529486 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ad8a503-0511-4da7-b07a-52da9ab0f637-ovsdbserver-sb\") pod \"dnsmasq-dns-7655479f8c-g8h6c\" (UID: \"2ad8a503-0511-4da7-b07a-52da9ab0f637\") " pod="openstack/dnsmasq-dns-7655479f8c-g8h6c" Mar 19 12:38:44.531819 master-0 kubenswrapper[31830]: I0319 12:38:44.529888 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ad8a503-0511-4da7-b07a-52da9ab0f637-dns-svc\") pod \"dnsmasq-dns-7655479f8c-g8h6c\" (UID: \"2ad8a503-0511-4da7-b07a-52da9ab0f637\") " pod="openstack/dnsmasq-dns-7655479f8c-g8h6c" Mar 19 12:38:44.531819 master-0 kubenswrapper[31830]: I0319 12:38:44.530839 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ad8a503-0511-4da7-b07a-52da9ab0f637-ovsdbserver-nb\") pod \"dnsmasq-dns-7655479f8c-g8h6c\" (UID: \"2ad8a503-0511-4da7-b07a-52da9ab0f637\") " pod="openstack/dnsmasq-dns-7655479f8c-g8h6c" Mar 19 12:38:44.531819 master-0 kubenswrapper[31830]: I0319 12:38:44.530864 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ad8a503-0511-4da7-b07a-52da9ab0f637-config\") pod \"dnsmasq-dns-7655479f8c-g8h6c\" (UID: \"2ad8a503-0511-4da7-b07a-52da9ab0f637\") " pod="openstack/dnsmasq-dns-7655479f8c-g8h6c" Mar 19 12:38:44.550820 master-0 kubenswrapper[31830]: I0319 12:38:44.550626 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwjjr\" (UniqueName: \"kubernetes.io/projected/2ad8a503-0511-4da7-b07a-52da9ab0f637-kube-api-access-vwjjr\") pod \"dnsmasq-dns-7655479f8c-g8h6c\" (UID: \"2ad8a503-0511-4da7-b07a-52da9ab0f637\") " pod="openstack/dnsmasq-dns-7655479f8c-g8h6c" Mar 19 12:38:44.653068 master-0 kubenswrapper[31830]: I0319 12:38:44.652947 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7655479f8c-g8h6c" Mar 19 12:38:45.203960 master-0 kubenswrapper[31830]: I0319 12:38:45.203479 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7655479f8c-g8h6c"] Mar 19 12:38:46.092294 master-0 kubenswrapper[31830]: I0319 12:38:46.092241 31830 generic.go:334] "Generic (PLEG): container finished" podID="2ad8a503-0511-4da7-b07a-52da9ab0f637" containerID="0415fa7733145f2a4ed2231f5233a3cb72f8b3007cbc0e9563e8b76e0e9b7ebe" exitCode=0 Mar 19 12:38:46.092994 master-0 kubenswrapper[31830]: I0319 12:38:46.092300 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7655479f8c-g8h6c" event={"ID":"2ad8a503-0511-4da7-b07a-52da9ab0f637","Type":"ContainerDied","Data":"0415fa7733145f2a4ed2231f5233a3cb72f8b3007cbc0e9563e8b76e0e9b7ebe"} Mar 19 12:38:46.092994 master-0 kubenswrapper[31830]: I0319 12:38:46.092368 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7655479f8c-g8h6c" event={"ID":"2ad8a503-0511-4da7-b07a-52da9ab0f637","Type":"ContainerStarted","Data":"ee4e077852f7187af142ed3b78fb9f96f3bce5134b419a85d500d94bee61a5d3"} Mar 19 12:38:46.727049 master-0 kubenswrapper[31830]: I0319 12:38:46.726992 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 19 12:38:46.936525 master-0 kubenswrapper[31830]: I0319 12:38:46.936452 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 19 12:38:47.109639 master-0 kubenswrapper[31830]: I0319 12:38:47.109509 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7655479f8c-g8h6c" event={"ID":"2ad8a503-0511-4da7-b07a-52da9ab0f637","Type":"ContainerStarted","Data":"5d72b25423333ae19e6bf0016c7178f9784edd992f52c05ef359c6e3cdc86389"} Mar 19 12:38:47.110267 master-0 kubenswrapper[31830]: I0319 12:38:47.110180 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6d6bd496-fd6c-4f07-8f05-bef2fe80c398" containerName="nova-api-api" containerID="cri-o://33ed0630cca45b4f3dd14183783c57f493ff858c7d541373fc175c7bac521283" gracePeriod=30 Mar 19 12:38:47.110565 master-0 kubenswrapper[31830]: I0319 12:38:47.110097 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6d6bd496-fd6c-4f07-8f05-bef2fe80c398" containerName="nova-api-log" containerID="cri-o://aa40425f108f9da61bebf587b917fb91255509b7b9a77b38f7eaf6d53ac52635" gracePeriod=30 Mar 19 12:38:47.110565 master-0 kubenswrapper[31830]: I0319 12:38:47.110523 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7655479f8c-g8h6c" Mar 19 12:38:47.172942 master-0 kubenswrapper[31830]: I0319 12:38:47.172857 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7655479f8c-g8h6c" podStartSLOduration=3.172836543 podStartE2EDuration="3.172836543s" podCreationTimestamp="2026-03-19 12:38:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:38:47.165470943 +0000 UTC m=+1465.714431667" watchObservedRunningTime="2026-03-19 12:38:47.172836543 +0000 UTC m=+1465.721797247" Mar 19 12:38:48.122460 master-0 kubenswrapper[31830]: I0319 12:38:48.122410 31830 generic.go:334] "Generic (PLEG): container finished" podID="6d6bd496-fd6c-4f07-8f05-bef2fe80c398" 
containerID="aa40425f108f9da61bebf587b917fb91255509b7b9a77b38f7eaf6d53ac52635" exitCode=143 Mar 19 12:38:48.123017 master-0 kubenswrapper[31830]: I0319 12:38:48.122745 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d6bd496-fd6c-4f07-8f05-bef2fe80c398","Type":"ContainerDied","Data":"aa40425f108f9da61bebf587b917fb91255509b7b9a77b38f7eaf6d53ac52635"} Mar 19 12:38:50.731066 master-0 kubenswrapper[31830]: I0319 12:38:50.730989 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 19 12:38:50.796927 master-0 kubenswrapper[31830]: I0319 12:38:50.796836 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d6bd496-fd6c-4f07-8f05-bef2fe80c398-logs\") pod \"6d6bd496-fd6c-4f07-8f05-bef2fe80c398\" (UID: \"6d6bd496-fd6c-4f07-8f05-bef2fe80c398\") " Mar 19 12:38:50.797164 master-0 kubenswrapper[31830]: I0319 12:38:50.796951 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d6bd496-fd6c-4f07-8f05-bef2fe80c398-config-data\") pod \"6d6bd496-fd6c-4f07-8f05-bef2fe80c398\" (UID: \"6d6bd496-fd6c-4f07-8f05-bef2fe80c398\") " Mar 19 12:38:50.797164 master-0 kubenswrapper[31830]: I0319 12:38:50.796988 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxctj\" (UniqueName: \"kubernetes.io/projected/6d6bd496-fd6c-4f07-8f05-bef2fe80c398-kube-api-access-dxctj\") pod \"6d6bd496-fd6c-4f07-8f05-bef2fe80c398\" (UID: \"6d6bd496-fd6c-4f07-8f05-bef2fe80c398\") " Mar 19 12:38:50.797264 master-0 kubenswrapper[31830]: I0319 12:38:50.797240 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d6bd496-fd6c-4f07-8f05-bef2fe80c398-combined-ca-bundle\") pod \"6d6bd496-fd6c-4f07-8f05-bef2fe80c398\" (UID: \"6d6bd496-fd6c-4f07-8f05-bef2fe80c398\") " Mar 19 12:38:50.797339 master-0 kubenswrapper[31830]: I0319 12:38:50.797309 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d6bd496-fd6c-4f07-8f05-bef2fe80c398-logs" (OuterVolumeSpecName: "logs") pod "6d6bd496-fd6c-4f07-8f05-bef2fe80c398" (UID: "6d6bd496-fd6c-4f07-8f05-bef2fe80c398"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 12:38:50.798380 master-0 kubenswrapper[31830]: I0319 12:38:50.798353 31830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d6bd496-fd6c-4f07-8f05-bef2fe80c398-logs\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:50.802012 master-0 kubenswrapper[31830]: I0319 12:38:50.801835 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d6bd496-fd6c-4f07-8f05-bef2fe80c398-kube-api-access-dxctj" (OuterVolumeSpecName: "kube-api-access-dxctj") pod "6d6bd496-fd6c-4f07-8f05-bef2fe80c398" (UID: "6d6bd496-fd6c-4f07-8f05-bef2fe80c398"). InnerVolumeSpecName "kube-api-access-dxctj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:38:50.834992 master-0 kubenswrapper[31830]: I0319 12:38:50.834815 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d6bd496-fd6c-4f07-8f05-bef2fe80c398-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d6bd496-fd6c-4f07-8f05-bef2fe80c398" (UID: "6d6bd496-fd6c-4f07-8f05-bef2fe80c398"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:38:50.835172 master-0 kubenswrapper[31830]: I0319 12:38:50.835018 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d6bd496-fd6c-4f07-8f05-bef2fe80c398-config-data" (OuterVolumeSpecName: "config-data") pod "6d6bd496-fd6c-4f07-8f05-bef2fe80c398" (UID: "6d6bd496-fd6c-4f07-8f05-bef2fe80c398"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:38:50.901583 master-0 kubenswrapper[31830]: I0319 12:38:50.901440 31830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d6bd496-fd6c-4f07-8f05-bef2fe80c398-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:50.901583 master-0 kubenswrapper[31830]: I0319 12:38:50.901503 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxctj\" (UniqueName: \"kubernetes.io/projected/6d6bd496-fd6c-4f07-8f05-bef2fe80c398-kube-api-access-dxctj\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:50.901583 master-0 kubenswrapper[31830]: I0319 12:38:50.901515 31830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d6bd496-fd6c-4f07-8f05-bef2fe80c398-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:51.167045 master-0 kubenswrapper[31830]: I0319 12:38:51.166901 31830 generic.go:334] "Generic (PLEG): container finished" podID="6d6bd496-fd6c-4f07-8f05-bef2fe80c398" containerID="33ed0630cca45b4f3dd14183783c57f493ff858c7d541373fc175c7bac521283" exitCode=0 Mar 19 12:38:51.167045 master-0 kubenswrapper[31830]: I0319 12:38:51.166951 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d6bd496-fd6c-4f07-8f05-bef2fe80c398","Type":"ContainerDied","Data":"33ed0630cca45b4f3dd14183783c57f493ff858c7d541373fc175c7bac521283"} Mar 19 12:38:51.167045 master-0 kubenswrapper[31830]: I0319 12:38:51.166994 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d6bd496-fd6c-4f07-8f05-bef2fe80c398","Type":"ContainerDied","Data":"83028f65c7c5b18ff1cfb4a19143359bdf1116fd0ee3a5a43c4be53452b8c24b"} Mar 19 12:38:51.167045 master-0 kubenswrapper[31830]: I0319 12:38:51.167011 31830 scope.go:117] "RemoveContainer" containerID="33ed0630cca45b4f3dd14183783c57f493ff858c7d541373fc175c7bac521283" Mar 19 12:38:51.167365 master-0 kubenswrapper[31830]: I0319 12:38:51.166969 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 19 12:38:51.214072 master-0 kubenswrapper[31830]: I0319 12:38:51.214000 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 19 12:38:51.217929 master-0 kubenswrapper[31830]: I0319 12:38:51.217877 31830 scope.go:117] "RemoveContainer" containerID="aa40425f108f9da61bebf587b917fb91255509b7b9a77b38f7eaf6d53ac52635" Mar 19 12:38:51.227075 master-0 kubenswrapper[31830]: I0319 12:38:51.225465 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 19 12:38:51.260484 master-0 kubenswrapper[31830]: I0319 12:38:51.260090 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 19 12:38:51.260687 master-0 kubenswrapper[31830]: E0319 12:38:51.260643 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d6bd496-fd6c-4f07-8f05-bef2fe80c398" containerName="nova-api-log" Mar 19 12:38:51.260687 master-0 kubenswrapper[31830]: I0319 12:38:51.260660 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d6bd496-fd6c-4f07-8f05-bef2fe80c398" containerName="nova-api-log" Mar 19 12:38:51.260868 master-0 kubenswrapper[31830]: E0319 12:38:51.260704 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d6bd496-fd6c-4f07-8f05-bef2fe80c398" containerName="nova-api-api" Mar 19 12:38:51.260868 master-0 kubenswrapper[31830]: I0319 12:38:51.260715 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d6bd496-fd6c-4f07-8f05-bef2fe80c398" containerName="nova-api-api" Mar 19 12:38:51.261346 master-0 kubenswrapper[31830]: I0319 12:38:51.261276 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d6bd496-fd6c-4f07-8f05-bef2fe80c398" containerName="nova-api-api" Mar 19 12:38:51.261432 master-0 kubenswrapper[31830]: I0319 12:38:51.261366 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d6bd496-fd6c-4f07-8f05-bef2fe80c398" containerName="nova-api-log" Mar 19 12:38:51.263619 master-0 kubenswrapper[31830]: I0319 12:38:51.263128 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 19 12:38:51.266016 master-0 kubenswrapper[31830]: I0319 12:38:51.265890 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 19 12:38:51.266246 master-0 kubenswrapper[31830]: I0319 12:38:51.266217 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 19 12:38:51.267195 master-0 kubenswrapper[31830]: I0319 12:38:51.266387 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 19 12:38:51.273564 master-0 kubenswrapper[31830]: I0319 12:38:51.273467 31830 scope.go:117] "RemoveContainer" containerID="33ed0630cca45b4f3dd14183783c57f493ff858c7d541373fc175c7bac521283" Mar 19 12:38:51.276709 master-0 kubenswrapper[31830]: E0319 12:38:51.276622 31830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33ed0630cca45b4f3dd14183783c57f493ff858c7d541373fc175c7bac521283\": container with ID starting with 33ed0630cca45b4f3dd14183783c57f493ff858c7d541373fc175c7bac521283 not found: ID does not exist" containerID="33ed0630cca45b4f3dd14183783c57f493ff858c7d541373fc175c7bac521283" Mar 19 12:38:51.276925 master-0 kubenswrapper[31830]: I0319 12:38:51.276889 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33ed0630cca45b4f3dd14183783c57f493ff858c7d541373fc175c7bac521283"} err="failed to get container status \"33ed0630cca45b4f3dd14183783c57f493ff858c7d541373fc175c7bac521283\": rpc error: code = NotFound desc = could not find container \"33ed0630cca45b4f3dd14183783c57f493ff858c7d541373fc175c7bac521283\": container with ID starting with 33ed0630cca45b4f3dd14183783c57f493ff858c7d541373fc175c7bac521283 not found: ID does not exist" Mar 19 12:38:51.277073 master-0 kubenswrapper[31830]: I0319 12:38:51.277057 31830 scope.go:117] "RemoveContainer" containerID="aa40425f108f9da61bebf587b917fb91255509b7b9a77b38f7eaf6d53ac52635" Mar 19 12:38:51.277764 master-0 kubenswrapper[31830]: E0319 12:38:51.277673 31830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa40425f108f9da61bebf587b917fb91255509b7b9a77b38f7eaf6d53ac52635\": container with ID starting with aa40425f108f9da61bebf587b917fb91255509b7b9a77b38f7eaf6d53ac52635 not found: ID does not exist" containerID="aa40425f108f9da61bebf587b917fb91255509b7b9a77b38f7eaf6d53ac52635" Mar 19 12:38:51.277764 master-0 kubenswrapper[31830]: I0319 12:38:51.277728 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa40425f108f9da61bebf587b917fb91255509b7b9a77b38f7eaf6d53ac52635"} err="failed to get container status \"aa40425f108f9da61bebf587b917fb91255509b7b9a77b38f7eaf6d53ac52635\": rpc error: code = NotFound desc = could not find container \"aa40425f108f9da61bebf587b917fb91255509b7b9a77b38f7eaf6d53ac52635\": container with ID starting with aa40425f108f9da61bebf587b917fb91255509b7b9a77b38f7eaf6d53ac52635 not found: ID does not exist" Mar 19 12:38:51.312539 master-0 kubenswrapper[31830]: I0319 12:38:51.312447 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 19 12:38:51.315418 master-0 kubenswrapper[31830]: I0319 12:38:51.315375 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9kf6\" (UniqueName: 
\"kubernetes.io/projected/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-kube-api-access-c9kf6\") pod \"nova-api-0\" (UID: \"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67\") " pod="openstack/nova-api-0" Mar 19 12:38:51.315516 master-0 kubenswrapper[31830]: I0319 12:38:51.315446 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-config-data\") pod \"nova-api-0\" (UID: \"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67\") " pod="openstack/nova-api-0" Mar 19 12:38:51.315584 master-0 kubenswrapper[31830]: I0319 12:38:51.315516 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67\") " pod="openstack/nova-api-0" Mar 19 12:38:51.315584 master-0 kubenswrapper[31830]: I0319 12:38:51.315562 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-logs\") pod \"nova-api-0\" (UID: \"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67\") " pod="openstack/nova-api-0" Mar 19 12:38:51.315706 master-0 kubenswrapper[31830]: I0319 12:38:51.315644 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-public-tls-certs\") pod \"nova-api-0\" (UID: \"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67\") " pod="openstack/nova-api-0" Mar 19 12:38:51.315751 master-0 kubenswrapper[31830]: I0319 12:38:51.315703 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67\") " pod="openstack/nova-api-0" Mar 19 12:38:51.418878 master-0 kubenswrapper[31830]: I0319 12:38:51.418714 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-public-tls-certs\") pod \"nova-api-0\" (UID: \"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67\") " pod="openstack/nova-api-0" Mar 19 12:38:51.419294 master-0 kubenswrapper[31830]: I0319 12:38:51.418981 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67\") " pod="openstack/nova-api-0" Mar 19 12:38:51.419641 master-0 kubenswrapper[31830]: I0319 12:38:51.419480 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9kf6\" (UniqueName: \"kubernetes.io/projected/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-kube-api-access-c9kf6\") pod \"nova-api-0\" (UID: \"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67\") " pod="openstack/nova-api-0" Mar 19 12:38:51.419641 master-0 kubenswrapper[31830]: I0319 12:38:51.419575 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-config-data\") pod \"nova-api-0\" (UID: \"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67\") " 
pod="openstack/nova-api-0" Mar 19 12:38:51.419774 master-0 kubenswrapper[31830]: I0319 12:38:51.419751 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67\") " pod="openstack/nova-api-0" Mar 19 12:38:51.419890 master-0 kubenswrapper[31830]: I0319 12:38:51.419870 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-logs\") pod \"nova-api-0\" (UID: \"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67\") " pod="openstack/nova-api-0" Mar 19 12:38:51.420426 master-0 kubenswrapper[31830]: I0319 12:38:51.420396 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-logs\") pod \"nova-api-0\" (UID: \"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67\") " pod="openstack/nova-api-0" Mar 19 12:38:51.423195 master-0 kubenswrapper[31830]: I0319 12:38:51.423153 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67\") " pod="openstack/nova-api-0" Mar 19 12:38:51.424922 master-0 kubenswrapper[31830]: I0319 12:38:51.424885 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-config-data\") pod \"nova-api-0\" (UID: \"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67\") " pod="openstack/nova-api-0" Mar 19 12:38:51.425058 master-0 kubenswrapper[31830]: I0319 12:38:51.424889 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67\") " pod="openstack/nova-api-0" Mar 19 12:38:51.442933 master-0 kubenswrapper[31830]: I0319 12:38:51.442861 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-public-tls-certs\") pod \"nova-api-0\" (UID: \"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67\") " pod="openstack/nova-api-0" Mar 19 12:38:51.446078 master-0 kubenswrapper[31830]: I0319 12:38:51.446021 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9kf6\" (UniqueName: \"kubernetes.io/projected/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-kube-api-access-c9kf6\") pod \"nova-api-0\" (UID: \"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67\") " pod="openstack/nova-api-0" Mar 19 12:38:51.590880 master-0 kubenswrapper[31830]: I0319 12:38:51.590790 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 19 12:38:51.714944 master-0 kubenswrapper[31830]: I0319 12:38:51.714053 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d6bd496-fd6c-4f07-8f05-bef2fe80c398" path="/var/lib/kubelet/pods/6d6bd496-fd6c-4f07-8f05-bef2fe80c398/volumes" Mar 19 12:38:51.728014 master-0 kubenswrapper[31830]: I0319 12:38:51.727507 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Mar 19 12:38:51.753176 master-0 kubenswrapper[31830]: I0319 12:38:51.753118 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Mar 19 12:38:52.148234 master-0 kubenswrapper[31830]: W0319 12:38:52.148150 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc19c5aa9_c890_4aa1_b53c_4a13d0a61d67.slice/crio-a2c648fa68b3ee76351e51f86cf56547ca38a7402b5edf6e1027a9782bd0baa2 WatchSource:0}: Error finding container a2c648fa68b3ee76351e51f86cf56547ca38a7402b5edf6e1027a9782bd0baa2: Status 404 returned error can't find the container with id a2c648fa68b3ee76351e51f86cf56547ca38a7402b5edf6e1027a9782bd0baa2 Mar 19 12:38:52.155367 master-0 kubenswrapper[31830]: I0319 12:38:52.154692 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 19 12:38:52.186087 master-0 kubenswrapper[31830]: I0319 12:38:52.185786 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67","Type":"ContainerStarted","Data":"a2c648fa68b3ee76351e51f86cf56547ca38a7402b5edf6e1027a9782bd0baa2"} Mar 19 12:38:52.212508 master-0 kubenswrapper[31830]: I0319 12:38:52.210567 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Mar 19 12:38:52.420271 master-0 kubenswrapper[31830]: I0319 12:38:52.418474 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-g9dgh"] Mar 19 12:38:52.426588 master-0 kubenswrapper[31830]: I0319 12:38:52.426532 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-g9dgh" Mar 19 12:38:52.441218 master-0 kubenswrapper[31830]: I0319 12:38:52.440943 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Mar 19 12:38:52.441931 master-0 kubenswrapper[31830]: I0319 12:38:52.441893 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Mar 19 12:38:52.474937 master-0 kubenswrapper[31830]: I0319 12:38:52.474878 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f687b27-5451-41a8-a7cd-c90186c676ef-config-data\") pod \"nova-cell1-cell-mapping-g9dgh\" (UID: \"7f687b27-5451-41a8-a7cd-c90186c676ef\") " pod="openstack/nova-cell1-cell-mapping-g9dgh" Mar 19 12:38:52.475182 master-0 kubenswrapper[31830]: I0319 12:38:52.474955 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dccg\" (UniqueName: \"kubernetes.io/projected/7f687b27-5451-41a8-a7cd-c90186c676ef-kube-api-access-6dccg\") pod \"nova-cell1-cell-mapping-g9dgh\" (UID: \"7f687b27-5451-41a8-a7cd-c90186c676ef\") " pod="openstack/nova-cell1-cell-mapping-g9dgh" Mar 19 12:38:52.475182 master-0 kubenswrapper[31830]: I0319 12:38:52.475011 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f687b27-5451-41a8-a7cd-c90186c676ef-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-g9dgh\" (UID: \"7f687b27-5451-41a8-a7cd-c90186c676ef\") " pod="openstack/nova-cell1-cell-mapping-g9dgh" Mar 19 12:38:52.475182 master-0 kubenswrapper[31830]: I0319 12:38:52.475047 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f687b27-5451-41a8-a7cd-c90186c676ef-scripts\") pod \"nova-cell1-cell-mapping-g9dgh\" (UID: \"7f687b27-5451-41a8-a7cd-c90186c676ef\") " pod="openstack/nova-cell1-cell-mapping-g9dgh" Mar 19 12:38:52.476018 master-0 kubenswrapper[31830]: I0319 12:38:52.475921 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-g9dgh"] Mar 19 12:38:52.577632 master-0 kubenswrapper[31830]: I0319 12:38:52.577524 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f687b27-5451-41a8-a7cd-c90186c676ef-config-data\") pod \"nova-cell1-cell-mapping-g9dgh\" (UID: \"7f687b27-5451-41a8-a7cd-c90186c676ef\") " pod="openstack/nova-cell1-cell-mapping-g9dgh" Mar 19 12:38:52.577632 master-0 kubenswrapper[31830]: I0319 12:38:52.577586 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dccg\" (UniqueName: \"kubernetes.io/projected/7f687b27-5451-41a8-a7cd-c90186c676ef-kube-api-access-6dccg\") pod \"nova-cell1-cell-mapping-g9dgh\" (UID: \"7f687b27-5451-41a8-a7cd-c90186c676ef\") " pod="openstack/nova-cell1-cell-mapping-g9dgh" Mar 19 12:38:52.577962 master-0 kubenswrapper[31830]: I0319 12:38:52.577645 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f687b27-5451-41a8-a7cd-c90186c676ef-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-g9dgh\" (UID: \"7f687b27-5451-41a8-a7cd-c90186c676ef\") " pod="openstack/nova-cell1-cell-mapping-g9dgh" Mar 19 
12:38:52.577962 master-0 kubenswrapper[31830]: I0319 12:38:52.577678 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f687b27-5451-41a8-a7cd-c90186c676ef-scripts\") pod \"nova-cell1-cell-mapping-g9dgh\" (UID: \"7f687b27-5451-41a8-a7cd-c90186c676ef\") " pod="openstack/nova-cell1-cell-mapping-g9dgh" Mar 19 12:38:52.583098 master-0 kubenswrapper[31830]: I0319 12:38:52.583001 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f687b27-5451-41a8-a7cd-c90186c676ef-scripts\") pod \"nova-cell1-cell-mapping-g9dgh\" (UID: \"7f687b27-5451-41a8-a7cd-c90186c676ef\") " pod="openstack/nova-cell1-cell-mapping-g9dgh" Mar 19 12:38:52.583098 master-0 kubenswrapper[31830]: I0319 12:38:52.583001 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f687b27-5451-41a8-a7cd-c90186c676ef-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-g9dgh\" (UID: \"7f687b27-5451-41a8-a7cd-c90186c676ef\") " pod="openstack/nova-cell1-cell-mapping-g9dgh" Mar 19 12:38:52.589595 master-0 kubenswrapper[31830]: I0319 12:38:52.589366 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f687b27-5451-41a8-a7cd-c90186c676ef-config-data\") pod \"nova-cell1-cell-mapping-g9dgh\" (UID: \"7f687b27-5451-41a8-a7cd-c90186c676ef\") " pod="openstack/nova-cell1-cell-mapping-g9dgh" Mar 19 12:38:52.601565 master-0 kubenswrapper[31830]: I0319 12:38:52.601249 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dccg\" (UniqueName: \"kubernetes.io/projected/7f687b27-5451-41a8-a7cd-c90186c676ef-kube-api-access-6dccg\") pod \"nova-cell1-cell-mapping-g9dgh\" (UID: \"7f687b27-5451-41a8-a7cd-c90186c676ef\") " pod="openstack/nova-cell1-cell-mapping-g9dgh" Mar 19 12:38:52.740620 master-0 kubenswrapper[31830]: I0319 12:38:52.740572 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-g9dgh" Mar 19 12:38:53.212862 master-0 kubenswrapper[31830]: I0319 12:38:53.207933 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67","Type":"ContainerStarted","Data":"106d028a0d253bff82c512a026210e2962143e22257ca098ef688363adf185bf"} Mar 19 12:38:53.212862 master-0 kubenswrapper[31830]: I0319 12:38:53.208010 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67","Type":"ContainerStarted","Data":"61b56427092a436fa2f7dc3f0c5731ad62e46663bc60cba2f039417ea70a38c7"} Mar 19 12:38:53.341160 master-0 kubenswrapper[31830]: I0319 12:38:53.341000 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-g9dgh"] Mar 19 12:38:53.341525 master-0 kubenswrapper[31830]: W0319 12:38:53.341463 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f687b27_5451_41a8_a7cd_c90186c676ef.slice/crio-281179663fa6ae9eaf0416253ff1706298831b653a402d66a621d9e721269424 WatchSource:0}: Error finding container 281179663fa6ae9eaf0416253ff1706298831b653a402d66a621d9e721269424: Status 404 returned error can't find the container with id 281179663fa6ae9eaf0416253ff1706298831b653a402d66a621d9e721269424 Mar 19 12:38:53.347967 master-0 kubenswrapper[31830]: I0319 12:38:53.347879 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.347857465 podStartE2EDuration="2.347857465s" podCreationTimestamp="2026-03-19 12:38:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:38:53.31955879 +0000 UTC m=+1471.868519524" watchObservedRunningTime="2026-03-19 12:38:53.347857465 +0000 UTC m=+1471.896818169" Mar 19 12:38:54.220827 master-0 kubenswrapper[31830]: I0319 12:38:54.220720 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-g9dgh" event={"ID":"7f687b27-5451-41a8-a7cd-c90186c676ef","Type":"ContainerStarted","Data":"cadf188387f699cefd40e7da794d710a45135e00ec249edfca6af7a209f47a7e"} Mar 19 12:38:54.220827 master-0 kubenswrapper[31830]: I0319 12:38:54.220779 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-g9dgh" event={"ID":"7f687b27-5451-41a8-a7cd-c90186c676ef","Type":"ContainerStarted","Data":"281179663fa6ae9eaf0416253ff1706298831b653a402d66a621d9e721269424"} Mar 19 12:38:54.241683 master-0 kubenswrapper[31830]: I0319 12:38:54.236425 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-g9dgh" podStartSLOduration=2.236406805 podStartE2EDuration="2.236406805s" podCreationTimestamp="2026-03-19 12:38:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:38:54.23462842 +0000 UTC m=+1472.783589124" watchObservedRunningTime="2026-03-19 12:38:54.236406805 +0000 UTC m=+1472.785367509" Mar 19 12:38:54.654935 master-0 kubenswrapper[31830]: I0319 12:38:54.654746 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7655479f8c-g8h6c" Mar 19 12:38:54.773098 master-0 kubenswrapper[31830]: I0319 12:38:54.772218 31830 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-5b44cf4869-grng7"] Mar 19 12:38:54.773098 master-0 kubenswrapper[31830]: I0319 12:38:54.772484 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b44cf4869-grng7" podUID="aadf7978-e684-447a-897d-5e643ecbd822" containerName="dnsmasq-dns" containerID="cri-o://c158841c52e816832862d7c901540c7378c2736724961e566c7ae84ca116337a" gracePeriod=10 Mar 19 12:38:55.274927 master-0 kubenswrapper[31830]: I0319 12:38:55.271453 31830 generic.go:334] "Generic (PLEG): container finished" podID="aadf7978-e684-447a-897d-5e643ecbd822" containerID="c158841c52e816832862d7c901540c7378c2736724961e566c7ae84ca116337a" exitCode=0 Mar 19 12:38:55.274927 master-0 kubenswrapper[31830]: I0319 12:38:55.271530 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b44cf4869-grng7" event={"ID":"aadf7978-e684-447a-897d-5e643ecbd822","Type":"ContainerDied","Data":"c158841c52e816832862d7c901540c7378c2736724961e566c7ae84ca116337a"} Mar 19 12:38:55.449764 master-0 kubenswrapper[31830]: I0319 12:38:55.449704 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b44cf4869-grng7" Mar 19 12:38:55.569004 master-0 kubenswrapper[31830]: I0319 12:38:55.568835 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7grl\" (UniqueName: \"kubernetes.io/projected/aadf7978-e684-447a-897d-5e643ecbd822-kube-api-access-v7grl\") pod \"aadf7978-e684-447a-897d-5e643ecbd822\" (UID: \"aadf7978-e684-447a-897d-5e643ecbd822\") " Mar 19 12:38:55.569004 master-0 kubenswrapper[31830]: I0319 12:38:55.568898 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-edpm-b\") pod \"aadf7978-e684-447a-897d-5e643ecbd822\" (UID: \"aadf7978-e684-447a-897d-5e643ecbd822\") " Mar 19 12:38:55.569004 master-0 kubenswrapper[31830]: I0319 12:38:55.568917 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-ovsdbserver-sb\") pod \"aadf7978-e684-447a-897d-5e643ecbd822\" (UID: \"aadf7978-e684-447a-897d-5e643ecbd822\") " Mar 19 12:38:55.569303 master-0 kubenswrapper[31830]: I0319 12:38:55.569027 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-edpm-a\") pod \"aadf7978-e684-447a-897d-5e643ecbd822\" (UID: \"aadf7978-e684-447a-897d-5e643ecbd822\") " Mar 19 12:38:55.569303 master-0 kubenswrapper[31830]: I0319 12:38:55.569064 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-config\") pod \"aadf7978-e684-447a-897d-5e643ecbd822\" (UID: \"aadf7978-e684-447a-897d-5e643ecbd822\") " Mar 19 12:38:55.569303 master-0 kubenswrapper[31830]: I0319 12:38:55.569108 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-ovsdbserver-nb\") pod \"aadf7978-e684-447a-897d-5e643ecbd822\" (UID: \"aadf7978-e684-447a-897d-5e643ecbd822\") " Mar 19 12:38:55.569303 master-0 kubenswrapper[31830]: I0319 12:38:55.569195 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-dns-svc\") pod \"aadf7978-e684-447a-897d-5e643ecbd822\" (UID: \"aadf7978-e684-447a-897d-5e643ecbd822\") " Mar 19 12:38:55.569303 master-0 kubenswrapper[31830]: I0319 12:38:55.569229 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-dns-swift-storage-0\") pod \"aadf7978-e684-447a-897d-5e643ecbd822\" (UID: \"aadf7978-e684-447a-897d-5e643ecbd822\") " Mar 19 12:38:55.588643 master-0 kubenswrapper[31830]: I0319 12:38:55.587131 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aadf7978-e684-447a-897d-5e643ecbd822-kube-api-access-v7grl" (OuterVolumeSpecName: "kube-api-access-v7grl") pod "aadf7978-e684-447a-897d-5e643ecbd822" (UID: "aadf7978-e684-447a-897d-5e643ecbd822"). InnerVolumeSpecName "kube-api-access-v7grl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:38:55.631912 master-0 kubenswrapper[31830]: I0319 12:38:55.630580 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "aadf7978-e684-447a-897d-5e643ecbd822" (UID: "aadf7978-e684-447a-897d-5e643ecbd822"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:38:55.642899 master-0 kubenswrapper[31830]: I0319 12:38:55.637574 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-edpm-b" (OuterVolumeSpecName: "edpm-b") pod "aadf7978-e684-447a-897d-5e643ecbd822" (UID: "aadf7978-e684-447a-897d-5e643ecbd822"). InnerVolumeSpecName "edpm-b". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:38:55.656131 master-0 kubenswrapper[31830]: I0319 12:38:55.656063 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-edpm-a" (OuterVolumeSpecName: "edpm-a") pod "aadf7978-e684-447a-897d-5e643ecbd822" (UID: "aadf7978-e684-447a-897d-5e643ecbd822"). InnerVolumeSpecName "edpm-a". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:38:55.664385 master-0 kubenswrapper[31830]: I0319 12:38:55.663528 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "aadf7978-e684-447a-897d-5e643ecbd822" (UID: "aadf7978-e684-447a-897d-5e643ecbd822"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:38:55.664385 master-0 kubenswrapper[31830]: I0319 12:38:55.664181 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aadf7978-e684-447a-897d-5e643ecbd822" (UID: "aadf7978-e684-447a-897d-5e643ecbd822"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:38:55.672020 master-0 kubenswrapper[31830]: I0319 12:38:55.671891 31830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:55.672020 master-0 kubenswrapper[31830]: I0319 12:38:55.671934 31830 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:55.672020 master-0 kubenswrapper[31830]: I0319 12:38:55.671948 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7grl\" (UniqueName: \"kubernetes.io/projected/aadf7978-e684-447a-897d-5e643ecbd822-kube-api-access-v7grl\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:55.672020 master-0 kubenswrapper[31830]: I0319 12:38:55.671960 31830 reconciler_common.go:293] "Volume detached for volume \"edpm-b\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-edpm-b\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:55.672020 master-0 kubenswrapper[31830]: I0319 12:38:55.671971 31830 reconciler_common.go:293] "Volume detached for volume \"edpm-a\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-edpm-a\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:55.672020 master-0 kubenswrapper[31830]: I0319 12:38:55.671982 31830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:55.679358 master-0 kubenswrapper[31830]: I0319 12:38:55.679298 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aadf7978-e684-447a-897d-5e643ecbd822" (UID: "aadf7978-e684-447a-897d-5e643ecbd822"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:38:55.691930 master-0 kubenswrapper[31830]: I0319 12:38:55.691789 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-config" (OuterVolumeSpecName: "config") pod "aadf7978-e684-447a-897d-5e643ecbd822" (UID: "aadf7978-e684-447a-897d-5e643ecbd822"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:38:55.776610 master-0 kubenswrapper[31830]: I0319 12:38:55.774667 31830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:55.776610 master-0 kubenswrapper[31830]: I0319 12:38:55.774707 31830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aadf7978-e684-447a-897d-5e643ecbd822-config\") on node \"master-0\" DevicePath \"\"" Mar 19 12:38:56.294002 master-0 kubenswrapper[31830]: I0319 12:38:56.293945 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b44cf4869-grng7" event={"ID":"aadf7978-e684-447a-897d-5e643ecbd822","Type":"ContainerDied","Data":"89c854a619b207ab5d0eff27d2e74a8ccb75d92e5c4aa316f86b5b94686db6a6"} Mar 19 12:38:56.294002 master-0 kubenswrapper[31830]: I0319 12:38:56.294002 31830 scope.go:117] "RemoveContainer" containerID="c158841c52e816832862d7c901540c7378c2736724961e566c7ae84ca116337a" Mar 19 12:38:56.294572 master-0 kubenswrapper[31830]: I0319 12:38:56.294117 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b44cf4869-grng7" Mar 19 12:38:56.323178 master-0 kubenswrapper[31830]: I0319 12:38:56.323125 31830 scope.go:117] "RemoveContainer" containerID="bfd8a7cb0a093b6f097c3d12098722dacc000d63b3976aea9245c67796f3d520" Mar 19 12:38:56.371819 master-0 kubenswrapper[31830]: I0319 12:38:56.368949 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b44cf4869-grng7"] Mar 19 12:38:56.383203 master-0 kubenswrapper[31830]: I0319 12:38:56.382951 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b44cf4869-grng7"] Mar 19 12:38:57.695008 master-0 kubenswrapper[31830]: I0319 12:38:57.694948 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aadf7978-e684-447a-897d-5e643ecbd822" path="/var/lib/kubelet/pods/aadf7978-e684-447a-897d-5e643ecbd822/volumes" Mar 19 12:38:59.347933 master-0 kubenswrapper[31830]: I0319 12:38:59.347890 31830 generic.go:334] "Generic (PLEG): container finished" podID="7f687b27-5451-41a8-a7cd-c90186c676ef" containerID="cadf188387f699cefd40e7da794d710a45135e00ec249edfca6af7a209f47a7e" exitCode=0 Mar 19 12:38:59.348483 master-0 kubenswrapper[31830]: I0319 12:38:59.347956 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-g9dgh" event={"ID":"7f687b27-5451-41a8-a7cd-c90186c676ef","Type":"ContainerDied","Data":"cadf188387f699cefd40e7da794d710a45135e00ec249edfca6af7a209f47a7e"} Mar 19 12:39:00.817524 master-0 kubenswrapper[31830]: I0319 12:39:00.817472 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-g9dgh" Mar 19 12:39:01.000996 master-0 kubenswrapper[31830]: I0319 12:39:01.000932 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f687b27-5451-41a8-a7cd-c90186c676ef-scripts\") pod \"7f687b27-5451-41a8-a7cd-c90186c676ef\" (UID: \"7f687b27-5451-41a8-a7cd-c90186c676ef\") " Mar 19 12:39:01.001241 master-0 kubenswrapper[31830]: I0319 12:39:01.001103 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dccg\" (UniqueName: \"kubernetes.io/projected/7f687b27-5451-41a8-a7cd-c90186c676ef-kube-api-access-6dccg\") pod \"7f687b27-5451-41a8-a7cd-c90186c676ef\" (UID: \"7f687b27-5451-41a8-a7cd-c90186c676ef\") " Mar 19 12:39:01.001297 master-0 kubenswrapper[31830]: I0319 12:39:01.001250 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f687b27-5451-41a8-a7cd-c90186c676ef-combined-ca-bundle\") pod \"7f687b27-5451-41a8-a7cd-c90186c676ef\" (UID: \"7f687b27-5451-41a8-a7cd-c90186c676ef\") " Mar 19 12:39:01.001297 master-0 kubenswrapper[31830]: I0319 12:39:01.001286 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f687b27-5451-41a8-a7cd-c90186c676ef-config-data\") pod \"7f687b27-5451-41a8-a7cd-c90186c676ef\" (UID: \"7f687b27-5451-41a8-a7cd-c90186c676ef\") " Mar 19 12:39:01.005377 master-0 kubenswrapper[31830]: I0319 12:39:01.005312 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f687b27-5451-41a8-a7cd-c90186c676ef-kube-api-access-6dccg" (OuterVolumeSpecName: "kube-api-access-6dccg") pod "7f687b27-5451-41a8-a7cd-c90186c676ef" (UID: "7f687b27-5451-41a8-a7cd-c90186c676ef"). InnerVolumeSpecName "kube-api-access-6dccg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:39:01.015047 master-0 kubenswrapper[31830]: I0319 12:39:01.014974 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f687b27-5451-41a8-a7cd-c90186c676ef-scripts" (OuterVolumeSpecName: "scripts") pod "7f687b27-5451-41a8-a7cd-c90186c676ef" (UID: "7f687b27-5451-41a8-a7cd-c90186c676ef"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:39:01.031123 master-0 kubenswrapper[31830]: I0319 12:39:01.031041 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f687b27-5451-41a8-a7cd-c90186c676ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f687b27-5451-41a8-a7cd-c90186c676ef" (UID: "7f687b27-5451-41a8-a7cd-c90186c676ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:39:01.035153 master-0 kubenswrapper[31830]: I0319 12:39:01.035113 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f687b27-5451-41a8-a7cd-c90186c676ef-config-data" (OuterVolumeSpecName: "config-data") pod "7f687b27-5451-41a8-a7cd-c90186c676ef" (UID: "7f687b27-5451-41a8-a7cd-c90186c676ef"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:39:01.104459 master-0 kubenswrapper[31830]: I0319 12:39:01.104404 31830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f687b27-5451-41a8-a7cd-c90186c676ef-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 12:39:01.104459 master-0 kubenswrapper[31830]: I0319 12:39:01.104444 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dccg\" (UniqueName: \"kubernetes.io/projected/7f687b27-5451-41a8-a7cd-c90186c676ef-kube-api-access-6dccg\") on node \"master-0\" DevicePath \"\"" Mar 19 12:39:01.104459 master-0 kubenswrapper[31830]: I0319 12:39:01.104456 31830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f687b27-5451-41a8-a7cd-c90186c676ef-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:39:01.104459 master-0 kubenswrapper[31830]: I0319 12:39:01.104465 31830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f687b27-5451-41a8-a7cd-c90186c676ef-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 12:39:01.373888 master-0 kubenswrapper[31830]: I0319 12:39:01.373712 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-g9dgh" event={"ID":"7f687b27-5451-41a8-a7cd-c90186c676ef","Type":"ContainerDied","Data":"281179663fa6ae9eaf0416253ff1706298831b653a402d66a621d9e721269424"} Mar 19 12:39:01.373888 master-0 kubenswrapper[31830]: I0319 12:39:01.373765 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="281179663fa6ae9eaf0416253ff1706298831b653a402d66a621d9e721269424" Mar 19 12:39:01.373888 master-0 kubenswrapper[31830]: I0319 12:39:01.373850 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-g9dgh" Mar 19 12:39:01.591240 master-0 kubenswrapper[31830]: I0319 12:39:01.591166 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 19 12:39:01.591240 master-0 kubenswrapper[31830]: I0319 12:39:01.591239 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 19 12:39:01.655473 master-0 kubenswrapper[31830]: I0319 12:39:01.655091 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 19 12:39:01.720465 master-0 kubenswrapper[31830]: I0319 12:39:01.720399 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 12:39:01.720732 master-0 kubenswrapper[31830]: I0319 12:39:01.720657 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="ce924861-cb91-4340-9cd7-d3c74dc4b11c" containerName="nova-scheduler-scheduler" containerID="cri-o://92c66ec58f38fdc5a75100c4de18eb5e0dd3bf723e92a7aa54355b872a94bd7f" gracePeriod=30 Mar 19 12:39:01.747238 master-0 kubenswrapper[31830]: I0319 12:39:01.747178 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 12:39:01.747894 master-0 kubenswrapper[31830]: I0319 12:39:01.747835 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2d718087-0caf-46be-9c73-6464f876f335" containerName="nova-metadata-log" containerID="cri-o://6ac507d73488316acd06380b83b71eb842429e97623b18548ac5a27c25a19e46" gracePeriod=30 Mar 19 12:39:01.748278 master-0 kubenswrapper[31830]: I0319 12:39:01.748253 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2d718087-0caf-46be-9c73-6464f876f335" containerName="nova-metadata-metadata" containerID="cri-o://85194f8a11b46ba38d3729da1cc5b267a98d24628efc12287336e5a37ecd6a0d" gracePeriod=30 Mar 19 12:39:02.406742 master-0 kubenswrapper[31830]: I0319 12:39:02.406498 31830 generic.go:334] "Generic (PLEG): container finished" podID="2d718087-0caf-46be-9c73-6464f876f335" containerID="6ac507d73488316acd06380b83b71eb842429e97623b18548ac5a27c25a19e46" exitCode=143 Mar 19 12:39:02.409008 master-0 kubenswrapper[31830]: I0319 12:39:02.406772 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2d718087-0caf-46be-9c73-6464f876f335","Type":"ContainerDied","Data":"6ac507d73488316acd06380b83b71eb842429e97623b18548ac5a27c25a19e46"} Mar 19 12:39:02.411284 master-0 kubenswrapper[31830]: I0319 12:39:02.411231 31830 generic.go:334] "Generic (PLEG): container finished" podID="ce924861-cb91-4340-9cd7-d3c74dc4b11c" containerID="92c66ec58f38fdc5a75100c4de18eb5e0dd3bf723e92a7aa54355b872a94bd7f" exitCode=0 Mar 19 12:39:02.411582 master-0 kubenswrapper[31830]: I0319 12:39:02.411544 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ce924861-cb91-4340-9cd7-d3c74dc4b11c","Type":"ContainerDied","Data":"92c66ec58f38fdc5a75100c4de18eb5e0dd3bf723e92a7aa54355b872a94bd7f"} Mar 19 12:39:02.411660 master-0 kubenswrapper[31830]: I0319 12:39:02.411556 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c19c5aa9-c890-4aa1-b53c-4a13d0a61d67" containerName="nova-api-log" containerID="cri-o://61b56427092a436fa2f7dc3f0c5731ad62e46663bc60cba2f039417ea70a38c7" 
gracePeriod=30 Mar 19 12:39:02.411944 master-0 kubenswrapper[31830]: I0319 12:39:02.411905 31830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c19c5aa9-c890-4aa1-b53c-4a13d0a61d67" containerName="nova-api-api" containerID="cri-o://106d028a0d253bff82c512a026210e2962143e22257ca098ef688363adf185bf" gracePeriod=30 Mar 19 12:39:02.426865 master-0 kubenswrapper[31830]: I0319 12:39:02.425973 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c19c5aa9-c890-4aa1-b53c-4a13d0a61d67" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.128.1.4:8774/\": EOF" Mar 19 12:39:02.426865 master-0 kubenswrapper[31830]: I0319 12:39:02.426093 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c19c5aa9-c890-4aa1-b53c-4a13d0a61d67" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.128.1.4:8774/\": EOF" Mar 19 12:39:02.749814 master-0 kubenswrapper[31830]: I0319 12:39:02.749725 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 19 12:39:02.868819 master-0 kubenswrapper[31830]: I0319 12:39:02.863688 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce924861-cb91-4340-9cd7-d3c74dc4b11c-combined-ca-bundle\") pod \"ce924861-cb91-4340-9cd7-d3c74dc4b11c\" (UID: \"ce924861-cb91-4340-9cd7-d3c74dc4b11c\") " Mar 19 12:39:02.868819 master-0 kubenswrapper[31830]: I0319 12:39:02.863916 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce924861-cb91-4340-9cd7-d3c74dc4b11c-config-data\") pod \"ce924861-cb91-4340-9cd7-d3c74dc4b11c\" (UID: \"ce924861-cb91-4340-9cd7-d3c74dc4b11c\") " Mar 19 12:39:02.868819 master-0 kubenswrapper[31830]: I0319 12:39:02.864002 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cl4p\" (UniqueName: \"kubernetes.io/projected/ce924861-cb91-4340-9cd7-d3c74dc4b11c-kube-api-access-2cl4p\") pod \"ce924861-cb91-4340-9cd7-d3c74dc4b11c\" (UID: \"ce924861-cb91-4340-9cd7-d3c74dc4b11c\") " Mar 19 12:39:02.869868 master-0 kubenswrapper[31830]: I0319 12:39:02.869203 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce924861-cb91-4340-9cd7-d3c74dc4b11c-kube-api-access-2cl4p" (OuterVolumeSpecName: "kube-api-access-2cl4p") pod "ce924861-cb91-4340-9cd7-d3c74dc4b11c" (UID: "ce924861-cb91-4340-9cd7-d3c74dc4b11c"). InnerVolumeSpecName "kube-api-access-2cl4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:39:02.897391 master-0 kubenswrapper[31830]: I0319 12:39:02.896969 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce924861-cb91-4340-9cd7-d3c74dc4b11c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce924861-cb91-4340-9cd7-d3c74dc4b11c" (UID: "ce924861-cb91-4340-9cd7-d3c74dc4b11c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:39:02.899475 master-0 kubenswrapper[31830]: I0319 12:39:02.899417 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce924861-cb91-4340-9cd7-d3c74dc4b11c-config-data" (OuterVolumeSpecName: "config-data") pod "ce924861-cb91-4340-9cd7-d3c74dc4b11c" (UID: "ce924861-cb91-4340-9cd7-d3c74dc4b11c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:39:02.966738 master-0 kubenswrapper[31830]: I0319 12:39:02.966669 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cl4p\" (UniqueName: \"kubernetes.io/projected/ce924861-cb91-4340-9cd7-d3c74dc4b11c-kube-api-access-2cl4p\") on node \"master-0\" DevicePath \"\"" Mar 19 12:39:02.966738 master-0 kubenswrapper[31830]: I0319 12:39:02.966724 31830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce924861-cb91-4340-9cd7-d3c74dc4b11c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:39:02.966738 master-0 kubenswrapper[31830]: I0319 12:39:02.966738 31830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce924861-cb91-4340-9cd7-d3c74dc4b11c-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 12:39:03.427315 master-0 kubenswrapper[31830]: I0319 12:39:03.427265 31830 generic.go:334] "Generic (PLEG): container finished" podID="c19c5aa9-c890-4aa1-b53c-4a13d0a61d67" containerID="61b56427092a436fa2f7dc3f0c5731ad62e46663bc60cba2f039417ea70a38c7" exitCode=143 Mar 19 12:39:03.427889 master-0 kubenswrapper[31830]: I0319 12:39:03.427325 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67","Type":"ContainerDied","Data":"61b56427092a436fa2f7dc3f0c5731ad62e46663bc60cba2f039417ea70a38c7"} Mar 19 12:39:03.430050 master-0 kubenswrapper[31830]: I0319 12:39:03.430005 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ce924861-cb91-4340-9cd7-d3c74dc4b11c","Type":"ContainerDied","Data":"99e548499c29b62e1aa01610dfa7cc377868c4e6fc9d4a8f37c61199f379c03b"} Mar 19 12:39:03.430131 master-0 kubenswrapper[31830]: I0319 12:39:03.430063 31830 scope.go:117] "RemoveContainer" containerID="92c66ec58f38fdc5a75100c4de18eb5e0dd3bf723e92a7aa54355b872a94bd7f" Mar 19 12:39:03.430131 master-0 kubenswrapper[31830]: I0319 12:39:03.430076 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 19 12:39:03.490103 master-0 kubenswrapper[31830]: I0319 12:39:03.490052 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 12:39:03.514413 master-0 kubenswrapper[31830]: I0319 12:39:03.512909 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 12:39:03.548424 master-0 kubenswrapper[31830]: I0319 12:39:03.548149 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 12:39:03.554848 master-0 kubenswrapper[31830]: E0319 12:39:03.548976 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f687b27-5451-41a8-a7cd-c90186c676ef" containerName="nova-manage" Mar 19 12:39:03.554848 master-0 kubenswrapper[31830]: I0319 12:39:03.548998 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f687b27-5451-41a8-a7cd-c90186c676ef" containerName="nova-manage" Mar 19 12:39:03.554848 master-0 kubenswrapper[31830]: E0319 12:39:03.549039 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aadf7978-e684-447a-897d-5e643ecbd822" containerName="init" Mar 19 12:39:03.554848 master-0 kubenswrapper[31830]: I0319 12:39:03.549046 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="aadf7978-e684-447a-897d-5e643ecbd822" containerName="init" Mar 19 12:39:03.554848 master-0 kubenswrapper[31830]: E0319 12:39:03.549059 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce924861-cb91-4340-9cd7-d3c74dc4b11c" containerName="nova-scheduler-scheduler" Mar 19 12:39:03.554848 master-0 kubenswrapper[31830]: I0319 12:39:03.549065 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce924861-cb91-4340-9cd7-d3c74dc4b11c" containerName="nova-scheduler-scheduler" Mar 19 12:39:03.554848 master-0 kubenswrapper[31830]: E0319 12:39:03.549087 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aadf7978-e684-447a-897d-5e643ecbd822" containerName="dnsmasq-dns" Mar 19 12:39:03.554848 master-0 kubenswrapper[31830]: I0319 12:39:03.549131 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="aadf7978-e684-447a-897d-5e643ecbd822" containerName="dnsmasq-dns" Mar 19 12:39:03.554848 master-0 kubenswrapper[31830]: I0319 12:39:03.549378 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="aadf7978-e684-447a-897d-5e643ecbd822" containerName="dnsmasq-dns" Mar 19 12:39:03.554848 master-0 kubenswrapper[31830]: I0319 12:39:03.549409 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce924861-cb91-4340-9cd7-d3c74dc4b11c" containerName="nova-scheduler-scheduler" Mar 19 12:39:03.554848 master-0 kubenswrapper[31830]: I0319 12:39:03.549423 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f687b27-5451-41a8-a7cd-c90186c676ef" containerName="nova-manage" Mar 19 12:39:03.554848 master-0 kubenswrapper[31830]: I0319 12:39:03.550892 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 19 12:39:03.554848 master-0 kubenswrapper[31830]: I0319 12:39:03.553696 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 19 12:39:03.576515 master-0 kubenswrapper[31830]: I0319 12:39:03.576466 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 12:39:03.685785 master-0 kubenswrapper[31830]: I0319 12:39:03.685658 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b64fc39-b4e1-4fa2-bdd0-c991f4c35ba4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5b64fc39-b4e1-4fa2-bdd0-c991f4c35ba4\") " pod="openstack/nova-scheduler-0" Mar 19 12:39:03.685785 master-0 kubenswrapper[31830]: I0319 12:39:03.685750 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b64fc39-b4e1-4fa2-bdd0-c991f4c35ba4-config-data\") pod \"nova-scheduler-0\" (UID: \"5b64fc39-b4e1-4fa2-bdd0-c991f4c35ba4\") " pod="openstack/nova-scheduler-0" Mar 19 12:39:03.686040 master-0 kubenswrapper[31830]: I0319 12:39:03.685943 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnvf2\" (UniqueName: \"kubernetes.io/projected/5b64fc39-b4e1-4fa2-bdd0-c991f4c35ba4-kube-api-access-lnvf2\") pod \"nova-scheduler-0\" (UID: \"5b64fc39-b4e1-4fa2-bdd0-c991f4c35ba4\") " pod="openstack/nova-scheduler-0" Mar 19 12:39:03.695857 master-0 kubenswrapper[31830]: I0319 12:39:03.695791 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce924861-cb91-4340-9cd7-d3c74dc4b11c" path="/var/lib/kubelet/pods/ce924861-cb91-4340-9cd7-d3c74dc4b11c/volumes" Mar 19 12:39:03.787567 master-0 kubenswrapper[31830]: I0319 12:39:03.787499 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnvf2\" (UniqueName: \"kubernetes.io/projected/5b64fc39-b4e1-4fa2-bdd0-c991f4c35ba4-kube-api-access-lnvf2\") pod \"nova-scheduler-0\" (UID: \"5b64fc39-b4e1-4fa2-bdd0-c991f4c35ba4\") " pod="openstack/nova-scheduler-0" Mar 19 12:39:03.788673 master-0 kubenswrapper[31830]: I0319 12:39:03.788630 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b64fc39-b4e1-4fa2-bdd0-c991f4c35ba4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5b64fc39-b4e1-4fa2-bdd0-c991f4c35ba4\") " pod="openstack/nova-scheduler-0" Mar 19 12:39:03.788776 master-0 kubenswrapper[31830]: I0319 12:39:03.788755 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b64fc39-b4e1-4fa2-bdd0-c991f4c35ba4-config-data\") pod \"nova-scheduler-0\" (UID: \"5b64fc39-b4e1-4fa2-bdd0-c991f4c35ba4\") " pod="openstack/nova-scheduler-0" Mar 19 12:39:03.793214 master-0 kubenswrapper[31830]: I0319 12:39:03.792995 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b64fc39-b4e1-4fa2-bdd0-c991f4c35ba4-config-data\") pod \"nova-scheduler-0\" (UID: \"5b64fc39-b4e1-4fa2-bdd0-c991f4c35ba4\") " pod="openstack/nova-scheduler-0" Mar 19 12:39:03.794859 master-0 kubenswrapper[31830]: I0319 12:39:03.794189 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b64fc39-b4e1-4fa2-bdd0-c991f4c35ba4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5b64fc39-b4e1-4fa2-bdd0-c991f4c35ba4\") " pod="openstack/nova-scheduler-0" Mar 19 12:39:03.809576 master-0 kubenswrapper[31830]: I0319 12:39:03.809545 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnvf2\" (UniqueName: \"kubernetes.io/projected/5b64fc39-b4e1-4fa2-bdd0-c991f4c35ba4-kube-api-access-lnvf2\") pod \"nova-scheduler-0\" (UID: \"5b64fc39-b4e1-4fa2-bdd0-c991f4c35ba4\") " pod="openstack/nova-scheduler-0" Mar 19 12:39:03.908416 master-0 kubenswrapper[31830]: I0319 12:39:03.908339 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 19 12:39:04.394494 master-0 kubenswrapper[31830]: I0319 12:39:04.394444 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 12:39:04.458409 master-0 kubenswrapper[31830]: I0319 12:39:04.458345 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5b64fc39-b4e1-4fa2-bdd0-c991f4c35ba4","Type":"ContainerStarted","Data":"d935ca595ac864389ff70fe4aafbb5bf0b409b2f7e5282421a31a5e09e433cb5"} Mar 19 12:39:05.451188 master-0 kubenswrapper[31830]: I0319 12:39:05.451137 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 19 12:39:05.473461 master-0 kubenswrapper[31830]: I0319 12:39:05.473411 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5b64fc39-b4e1-4fa2-bdd0-c991f4c35ba4","Type":"ContainerStarted","Data":"05c7b71526d19179f9790ee6335a7b73382347739f32ec47a6a3ca206b95193e"} Mar 19 12:39:05.476390 master-0 kubenswrapper[31830]: I0319 12:39:05.476366 31830 generic.go:334] "Generic (PLEG): container finished" podID="2d718087-0caf-46be-9c73-6464f876f335" containerID="85194f8a11b46ba38d3729da1cc5b267a98d24628efc12287336e5a37ecd6a0d" exitCode=0 Mar 19 12:39:05.476501 master-0 kubenswrapper[31830]: I0319 12:39:05.476400 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2d718087-0caf-46be-9c73-6464f876f335","Type":"ContainerDied","Data":"85194f8a11b46ba38d3729da1cc5b267a98d24628efc12287336e5a37ecd6a0d"} Mar 19 12:39:05.476661 master-0 kubenswrapper[31830]: I0319 12:39:05.476435 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 19 12:39:05.476826 master-0 kubenswrapper[31830]: I0319 12:39:05.476627 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2d718087-0caf-46be-9c73-6464f876f335","Type":"ContainerDied","Data":"e79f89d84f77cfa83684a331b188e902046ffc1e144ea15f322b9130d177054c"} Mar 19 12:39:05.476907 master-0 kubenswrapper[31830]: I0319 12:39:05.476644 31830 scope.go:117] "RemoveContainer" containerID="85194f8a11b46ba38d3729da1cc5b267a98d24628efc12287336e5a37ecd6a0d" Mar 19 12:39:05.526488 master-0 kubenswrapper[31830]: I0319 12:39:05.526123 31830 scope.go:117] "RemoveContainer" containerID="6ac507d73488316acd06380b83b71eb842429e97623b18548ac5a27c25a19e46" Mar 19 12:39:05.566398 master-0 kubenswrapper[31830]: I0319 12:39:05.566370 31830 scope.go:117] "RemoveContainer" containerID="85194f8a11b46ba38d3729da1cc5b267a98d24628efc12287336e5a37ecd6a0d" Mar 19 12:39:05.597528 master-0 kubenswrapper[31830]: E0319 12:39:05.597472 31830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85194f8a11b46ba38d3729da1cc5b267a98d24628efc12287336e5a37ecd6a0d\": container with ID starting with 85194f8a11b46ba38d3729da1cc5b267a98d24628efc12287336e5a37ecd6a0d not found: ID does not exist" containerID="85194f8a11b46ba38d3729da1cc5b267a98d24628efc12287336e5a37ecd6a0d" Mar 19 12:39:05.597759 master-0 kubenswrapper[31830]: I0319 12:39:05.597532 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85194f8a11b46ba38d3729da1cc5b267a98d24628efc12287336e5a37ecd6a0d"} err="failed to get container status \"85194f8a11b46ba38d3729da1cc5b267a98d24628efc12287336e5a37ecd6a0d\": rpc error: code = NotFound desc = could not find container \"85194f8a11b46ba38d3729da1cc5b267a98d24628efc12287336e5a37ecd6a0d\": container with ID starting with 85194f8a11b46ba38d3729da1cc5b267a98d24628efc12287336e5a37ecd6a0d not found: ID does not exist" Mar 19 12:39:05.597759 master-0 kubenswrapper[31830]: I0319 12:39:05.597560 31830 scope.go:117] "RemoveContainer" containerID="6ac507d73488316acd06380b83b71eb842429e97623b18548ac5a27c25a19e46" Mar 19 12:39:05.603042 master-0 kubenswrapper[31830]: I0319 12:39:05.602945 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.602922257 podStartE2EDuration="2.602922257s" podCreationTimestamp="2026-03-19 12:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:39:05.576045926 +0000 UTC m=+1484.125006630" watchObservedRunningTime="2026-03-19 12:39:05.602922257 +0000 UTC m=+1484.151882961" Mar 19 12:39:05.608276 master-0 kubenswrapper[31830]: E0319 12:39:05.605979 31830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ac507d73488316acd06380b83b71eb842429e97623b18548ac5a27c25a19e46\": container with ID starting with 6ac507d73488316acd06380b83b71eb842429e97623b18548ac5a27c25a19e46 not found: ID does not exist" containerID="6ac507d73488316acd06380b83b71eb842429e97623b18548ac5a27c25a19e46" Mar 19 12:39:05.608276 master-0 kubenswrapper[31830]: I0319 12:39:05.606048 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ac507d73488316acd06380b83b71eb842429e97623b18548ac5a27c25a19e46"} err="failed to get container 
status \"6ac507d73488316acd06380b83b71eb842429e97623b18548ac5a27c25a19e46\": rpc error: code = NotFound desc = could not find container \"6ac507d73488316acd06380b83b71eb842429e97623b18548ac5a27c25a19e46\": container with ID starting with 6ac507d73488316acd06380b83b71eb842429e97623b18548ac5a27c25a19e46 not found: ID does not exist" Mar 19 12:39:05.650084 master-0 kubenswrapper[31830]: I0319 12:39:05.648543 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d718087-0caf-46be-9c73-6464f876f335-nova-metadata-tls-certs\") pod \"2d718087-0caf-46be-9c73-6464f876f335\" (UID: \"2d718087-0caf-46be-9c73-6464f876f335\") " Mar 19 12:39:05.650084 master-0 kubenswrapper[31830]: I0319 12:39:05.648606 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdctc\" (UniqueName: \"kubernetes.io/projected/2d718087-0caf-46be-9c73-6464f876f335-kube-api-access-vdctc\") pod \"2d718087-0caf-46be-9c73-6464f876f335\" (UID: \"2d718087-0caf-46be-9c73-6464f876f335\") " Mar 19 12:39:05.650084 master-0 kubenswrapper[31830]: I0319 12:39:05.648886 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d718087-0caf-46be-9c73-6464f876f335-config-data\") pod \"2d718087-0caf-46be-9c73-6464f876f335\" (UID: \"2d718087-0caf-46be-9c73-6464f876f335\") " Mar 19 12:39:05.650084 master-0 kubenswrapper[31830]: I0319 12:39:05.649246 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d718087-0caf-46be-9c73-6464f876f335-logs\") pod \"2d718087-0caf-46be-9c73-6464f876f335\" (UID: \"2d718087-0caf-46be-9c73-6464f876f335\") " Mar 19 12:39:05.650084 master-0 kubenswrapper[31830]: I0319 12:39:05.649443 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d718087-0caf-46be-9c73-6464f876f335-combined-ca-bundle\") pod \"2d718087-0caf-46be-9c73-6464f876f335\" (UID: \"2d718087-0caf-46be-9c73-6464f876f335\") " Mar 19 12:39:05.651044 master-0 kubenswrapper[31830]: I0319 12:39:05.651011 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d718087-0caf-46be-9c73-6464f876f335-logs" (OuterVolumeSpecName: "logs") pod "2d718087-0caf-46be-9c73-6464f876f335" (UID: "2d718087-0caf-46be-9c73-6464f876f335"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 12:39:05.657937 master-0 kubenswrapper[31830]: I0319 12:39:05.655320 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d718087-0caf-46be-9c73-6464f876f335-kube-api-access-vdctc" (OuterVolumeSpecName: "kube-api-access-vdctc") pod "2d718087-0caf-46be-9c73-6464f876f335" (UID: "2d718087-0caf-46be-9c73-6464f876f335"). InnerVolumeSpecName "kube-api-access-vdctc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:39:05.672489 master-0 kubenswrapper[31830]: I0319 12:39:05.672044 31830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d718087-0caf-46be-9c73-6464f876f335-logs\") on node \"master-0\" DevicePath \"\"" Mar 19 12:39:05.672489 master-0 kubenswrapper[31830]: I0319 12:39:05.672081 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdctc\" (UniqueName: \"kubernetes.io/projected/2d718087-0caf-46be-9c73-6464f876f335-kube-api-access-vdctc\") on node \"master-0\" DevicePath \"\"" Mar 19 12:39:05.700970 master-0 kubenswrapper[31830]: I0319 12:39:05.700911 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d718087-0caf-46be-9c73-6464f876f335-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d718087-0caf-46be-9c73-6464f876f335" (UID: "2d718087-0caf-46be-9c73-6464f876f335"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:39:05.737412 master-0 kubenswrapper[31830]: I0319 12:39:05.737373 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d718087-0caf-46be-9c73-6464f876f335-config-data" (OuterVolumeSpecName: "config-data") pod "2d718087-0caf-46be-9c73-6464f876f335" (UID: "2d718087-0caf-46be-9c73-6464f876f335"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:39:05.779857 master-0 kubenswrapper[31830]: I0319 12:39:05.774107 31830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d718087-0caf-46be-9c73-6464f876f335-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:39:05.779857 master-0 kubenswrapper[31830]: I0319 12:39:05.774157 31830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d718087-0caf-46be-9c73-6464f876f335-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 12:39:05.785070 master-0 kubenswrapper[31830]: I0319 12:39:05.783064 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d718087-0caf-46be-9c73-6464f876f335-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "2d718087-0caf-46be-9c73-6464f876f335" (UID: "2d718087-0caf-46be-9c73-6464f876f335"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:39:05.877135 master-0 kubenswrapper[31830]: I0319 12:39:05.877071 31830 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d718087-0caf-46be-9c73-6464f876f335-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 19 12:39:06.145322 master-0 kubenswrapper[31830]: I0319 12:39:06.144681 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 12:39:06.164253 master-0 kubenswrapper[31830]: I0319 12:39:06.164097 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 12:39:06.334831 master-0 kubenswrapper[31830]: I0319 12:39:06.333967 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 19 12:39:06.334831 master-0 kubenswrapper[31830]: E0319 12:39:06.334607 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d718087-0caf-46be-9c73-6464f876f335" containerName="nova-metadata-log" Mar 19 12:39:06.334831 master-0 kubenswrapper[31830]: I0319 12:39:06.334621 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d718087-0caf-46be-9c73-6464f876f335" containerName="nova-metadata-log" Mar 19 12:39:06.334831 master-0 kubenswrapper[31830]: E0319 12:39:06.334638 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d718087-0caf-46be-9c73-6464f876f335" containerName="nova-metadata-metadata" Mar 19 12:39:06.334831 master-0 kubenswrapper[31830]: I0319 12:39:06.334646 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d718087-0caf-46be-9c73-6464f876f335" containerName="nova-metadata-metadata" Mar 19 12:39:06.335363 master-0 kubenswrapper[31830]: I0319 12:39:06.334935 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d718087-0caf-46be-9c73-6464f876f335" containerName="nova-metadata-log" Mar 19 12:39:06.335363 master-0 kubenswrapper[31830]: I0319 12:39:06.335007 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d718087-0caf-46be-9c73-6464f876f335" containerName="nova-metadata-metadata" Mar 19 12:39:06.342683 master-0 kubenswrapper[31830]: I0319 12:39:06.340007 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 19 12:39:06.343336 master-0 kubenswrapper[31830]: I0319 12:39:06.342964 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 19 12:39:06.343336 master-0 kubenswrapper[31830]: I0319 12:39:06.343108 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 19 12:39:06.351400 master-0 kubenswrapper[31830]: I0319 12:39:06.351233 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 12:39:06.491537 master-0 kubenswrapper[31830]: I0319 12:39:06.491469 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/57dc08d4-80a2-48f0-b215-3ec2f688b480-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"57dc08d4-80a2-48f0-b215-3ec2f688b480\") " pod="openstack/nova-metadata-0" Mar 19 12:39:06.491537 master-0 kubenswrapper[31830]: I0319 12:39:06.491521 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnjtk\" (UniqueName: \"kubernetes.io/projected/57dc08d4-80a2-48f0-b215-3ec2f688b480-kube-api-access-cnjtk\") pod \"nova-metadata-0\" (UID: \"57dc08d4-80a2-48f0-b215-3ec2f688b480\") " pod="openstack/nova-metadata-0" Mar 19 12:39:06.492176 master-0 kubenswrapper[31830]: I0319 12:39:06.491680 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57dc08d4-80a2-48f0-b215-3ec2f688b480-logs\") pod \"nova-metadata-0\" (UID: \"57dc08d4-80a2-48f0-b215-3ec2f688b480\") " pod="openstack/nova-metadata-0" Mar 19 12:39:06.492176 master-0 kubenswrapper[31830]: I0319 12:39:06.491721 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57dc08d4-80a2-48f0-b215-3ec2f688b480-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"57dc08d4-80a2-48f0-b215-3ec2f688b480\") " pod="openstack/nova-metadata-0" Mar 19 12:39:06.492176 master-0 kubenswrapper[31830]: I0319 12:39:06.491838 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57dc08d4-80a2-48f0-b215-3ec2f688b480-config-data\") pod \"nova-metadata-0\" (UID: \"57dc08d4-80a2-48f0-b215-3ec2f688b480\") " pod="openstack/nova-metadata-0" Mar 19 12:39:06.593964 master-0 kubenswrapper[31830]: I0319 12:39:06.593916 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57dc08d4-80a2-48f0-b215-3ec2f688b480-config-data\") pod \"nova-metadata-0\" (UID: \"57dc08d4-80a2-48f0-b215-3ec2f688b480\") " pod="openstack/nova-metadata-0" Mar 19 12:39:06.594498 master-0 kubenswrapper[31830]: I0319 12:39:06.594467 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/57dc08d4-80a2-48f0-b215-3ec2f688b480-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"57dc08d4-80a2-48f0-b215-3ec2f688b480\") " pod="openstack/nova-metadata-0" Mar 19 12:39:06.594686 master-0 kubenswrapper[31830]: I0319 12:39:06.594654 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnjtk\" (UniqueName: 
\"kubernetes.io/projected/57dc08d4-80a2-48f0-b215-3ec2f688b480-kube-api-access-cnjtk\") pod \"nova-metadata-0\" (UID: \"57dc08d4-80a2-48f0-b215-3ec2f688b480\") " pod="openstack/nova-metadata-0" Mar 19 12:39:06.594874 master-0 kubenswrapper[31830]: I0319 12:39:06.594850 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57dc08d4-80a2-48f0-b215-3ec2f688b480-logs\") pod \"nova-metadata-0\" (UID: \"57dc08d4-80a2-48f0-b215-3ec2f688b480\") " pod="openstack/nova-metadata-0" Mar 19 12:39:06.594958 master-0 kubenswrapper[31830]: I0319 12:39:06.594879 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57dc08d4-80a2-48f0-b215-3ec2f688b480-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"57dc08d4-80a2-48f0-b215-3ec2f688b480\") " pod="openstack/nova-metadata-0" Mar 19 12:39:06.597871 master-0 kubenswrapper[31830]: I0319 12:39:06.595292 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57dc08d4-80a2-48f0-b215-3ec2f688b480-logs\") pod \"nova-metadata-0\" (UID: \"57dc08d4-80a2-48f0-b215-3ec2f688b480\") " pod="openstack/nova-metadata-0" Mar 19 12:39:06.598517 master-0 kubenswrapper[31830]: I0319 12:39:06.598473 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/57dc08d4-80a2-48f0-b215-3ec2f688b480-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"57dc08d4-80a2-48f0-b215-3ec2f688b480\") " pod="openstack/nova-metadata-0" Mar 19 12:39:06.599438 master-0 kubenswrapper[31830]: I0319 12:39:06.599398 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57dc08d4-80a2-48f0-b215-3ec2f688b480-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"57dc08d4-80a2-48f0-b215-3ec2f688b480\") " pod="openstack/nova-metadata-0" Mar 19 12:39:06.599649 master-0 kubenswrapper[31830]: I0319 12:39:06.599607 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57dc08d4-80a2-48f0-b215-3ec2f688b480-config-data\") pod \"nova-metadata-0\" (UID: \"57dc08d4-80a2-48f0-b215-3ec2f688b480\") " pod="openstack/nova-metadata-0" Mar 19 12:39:06.614020 master-0 kubenswrapper[31830]: I0319 12:39:06.613900 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnjtk\" (UniqueName: \"kubernetes.io/projected/57dc08d4-80a2-48f0-b215-3ec2f688b480-kube-api-access-cnjtk\") pod \"nova-metadata-0\" (UID: \"57dc08d4-80a2-48f0-b215-3ec2f688b480\") " pod="openstack/nova-metadata-0" Mar 19 12:39:06.700074 master-0 kubenswrapper[31830]: I0319 12:39:06.700013 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 19 12:39:07.244221 master-0 kubenswrapper[31830]: I0319 12:39:07.242892 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 12:39:07.244392 master-0 kubenswrapper[31830]: W0319 12:39:07.244289 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57dc08d4_80a2_48f0_b215_3ec2f688b480.slice/crio-db6adcd5b177b9c39bce2dc37db549dc2f51700a4d91df439ef219649a8ad950 WatchSource:0}: Error finding container db6adcd5b177b9c39bce2dc37db549dc2f51700a4d91df439ef219649a8ad950: Status 404 returned error can't find the container with id db6adcd5b177b9c39bce2dc37db549dc2f51700a4d91df439ef219649a8ad950 Mar 19 12:39:07.504641 master-0 kubenswrapper[31830]: I0319 12:39:07.504593 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"57dc08d4-80a2-48f0-b215-3ec2f688b480","Type":"ContainerStarted","Data":"57210c7bd7401c186a408be1eaf094c1f72a7e398e49bcc5695f9c38d312282c"} Mar 19 12:39:07.504641 master-0 kubenswrapper[31830]: I0319 12:39:07.504645 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"57dc08d4-80a2-48f0-b215-3ec2f688b480","Type":"ContainerStarted","Data":"db6adcd5b177b9c39bce2dc37db549dc2f51700a4d91df439ef219649a8ad950"} Mar 19 12:39:07.693980 master-0 kubenswrapper[31830]: I0319 12:39:07.693921 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d718087-0caf-46be-9c73-6464f876f335" path="/var/lib/kubelet/pods/2d718087-0caf-46be-9c73-6464f876f335/volumes" Mar 19 12:39:08.557850 master-0 kubenswrapper[31830]: I0319 12:39:08.557580 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"57dc08d4-80a2-48f0-b215-3ec2f688b480","Type":"ContainerStarted","Data":"492a48c0b11ae2bc3de1ce7cc82a72316f6beeb8036b467dbccb2e92055bba51"} Mar 19 12:39:08.602446 master-0 kubenswrapper[31830]: I0319 12:39:08.601926 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.599983547 podStartE2EDuration="2.599983547s" podCreationTimestamp="2026-03-19 12:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:39:08.582206762 +0000 UTC m=+1487.131167486" watchObservedRunningTime="2026-03-19 12:39:08.599983547 +0000 UTC m=+1487.148944251" Mar 19 12:39:08.909582 master-0 kubenswrapper[31830]: I0319 12:39:08.909301 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 19 12:39:09.416192 master-0 kubenswrapper[31830]: I0319 12:39:09.416140 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 19 12:39:09.570695 master-0 kubenswrapper[31830]: I0319 12:39:09.570643 31830 generic.go:334] "Generic (PLEG): container finished" podID="c19c5aa9-c890-4aa1-b53c-4a13d0a61d67" containerID="106d028a0d253bff82c512a026210e2962143e22257ca098ef688363adf185bf" exitCode=0 Mar 19 12:39:09.571340 master-0 kubenswrapper[31830]: I0319 12:39:09.570704 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 19 12:39:09.571440 master-0 kubenswrapper[31830]: I0319 12:39:09.571412 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67","Type":"ContainerDied","Data":"106d028a0d253bff82c512a026210e2962143e22257ca098ef688363adf185bf"} Mar 19 12:39:09.571550 master-0 kubenswrapper[31830]: I0319 12:39:09.571530 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67","Type":"ContainerDied","Data":"a2c648fa68b3ee76351e51f86cf56547ca38a7402b5edf6e1027a9782bd0baa2"} Mar 19 12:39:09.571655 master-0 kubenswrapper[31830]: I0319 12:39:09.571634 31830 scope.go:117] "RemoveContainer" containerID="106d028a0d253bff82c512a026210e2962143e22257ca098ef688363adf185bf" Mar 19 12:39:09.573166 master-0 kubenswrapper[31830]: I0319 12:39:09.573143 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-config-data\") pod \"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67\" (UID: \"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67\") " Mar 19 12:39:09.574093 master-0 kubenswrapper[31830]: I0319 12:39:09.574017 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-internal-tls-certs\") pod \"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67\" (UID: \"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67\") " Mar 19 12:39:09.574601 master-0 kubenswrapper[31830]: I0319 12:39:09.574585 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-combined-ca-bundle\") pod \"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67\" (UID: \"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67\") " Mar 19 12:39:09.575041 master-0 kubenswrapper[31830]: I0319 12:39:09.575005 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-public-tls-certs\") pod \"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67\" (UID: \"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67\") " Mar 19 12:39:09.575223 master-0 kubenswrapper[31830]: I0319 12:39:09.575207 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9kf6\" (UniqueName: \"kubernetes.io/projected/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-kube-api-access-c9kf6\") pod \"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67\" (UID: \"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67\") " Mar 19 12:39:09.575413 master-0 kubenswrapper[31830]: I0319 12:39:09.575398 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-logs\") pod \"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67\" (UID: \"c19c5aa9-c890-4aa1-b53c-4a13d0a61d67\") " Mar 19 12:39:09.577310 master-0 kubenswrapper[31830]: I0319 12:39:09.577289 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-logs" (OuterVolumeSpecName: "logs") pod "c19c5aa9-c890-4aa1-b53c-4a13d0a61d67" (UID: "c19c5aa9-c890-4aa1-b53c-4a13d0a61d67"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 12:39:09.581737 master-0 kubenswrapper[31830]: I0319 12:39:09.581707 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-kube-api-access-c9kf6" (OuterVolumeSpecName: "kube-api-access-c9kf6") pod "c19c5aa9-c890-4aa1-b53c-4a13d0a61d67" (UID: "c19c5aa9-c890-4aa1-b53c-4a13d0a61d67"). InnerVolumeSpecName "kube-api-access-c9kf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:39:09.604420 master-0 kubenswrapper[31830]: I0319 12:39:09.604290 31830 scope.go:117] "RemoveContainer" containerID="61b56427092a436fa2f7dc3f0c5731ad62e46663bc60cba2f039417ea70a38c7" Mar 19 12:39:09.669914 master-0 kubenswrapper[31830]: I0319 12:39:09.669783 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-config-data" (OuterVolumeSpecName: "config-data") pod "c19c5aa9-c890-4aa1-b53c-4a13d0a61d67" (UID: "c19c5aa9-c890-4aa1-b53c-4a13d0a61d67"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:39:09.671472 master-0 kubenswrapper[31830]: I0319 12:39:09.671150 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c19c5aa9-c890-4aa1-b53c-4a13d0a61d67" (UID: "c19c5aa9-c890-4aa1-b53c-4a13d0a61d67"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:39:09.684726 master-0 kubenswrapper[31830]: I0319 12:39:09.684666 31830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 12:39:09.684726 master-0 kubenswrapper[31830]: I0319 12:39:09.684705 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9kf6\" (UniqueName: \"kubernetes.io/projected/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-kube-api-access-c9kf6\") on node \"master-0\" DevicePath \"\"" Mar 19 12:39:09.685006 master-0 kubenswrapper[31830]: I0319 12:39:09.684898 31830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-logs\") on node \"master-0\" DevicePath \"\"" Mar 19 12:39:09.685006 master-0 kubenswrapper[31830]: I0319 12:39:09.684934 31830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 12:39:09.689690 master-0 kubenswrapper[31830]: I0319 12:39:09.689630 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c19c5aa9-c890-4aa1-b53c-4a13d0a61d67" (UID: "c19c5aa9-c890-4aa1-b53c-4a13d0a61d67"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:39:09.692395 master-0 kubenswrapper[31830]: I0319 12:39:09.692364 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c19c5aa9-c890-4aa1-b53c-4a13d0a61d67" (UID: "c19c5aa9-c890-4aa1-b53c-4a13d0a61d67"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:39:09.705126 master-0 kubenswrapper[31830]: I0319 12:39:09.705045 31830 scope.go:117] "RemoveContainer" containerID="106d028a0d253bff82c512a026210e2962143e22257ca098ef688363adf185bf" Mar 19 12:39:09.705607 master-0 kubenswrapper[31830]: E0319 12:39:09.705557 31830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"106d028a0d253bff82c512a026210e2962143e22257ca098ef688363adf185bf\": container with ID starting with 106d028a0d253bff82c512a026210e2962143e22257ca098ef688363adf185bf not found: ID does not exist" containerID="106d028a0d253bff82c512a026210e2962143e22257ca098ef688363adf185bf" Mar 19 12:39:09.705696 master-0 kubenswrapper[31830]: I0319 12:39:09.705620 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"106d028a0d253bff82c512a026210e2962143e22257ca098ef688363adf185bf"} err="failed to get container status \"106d028a0d253bff82c512a026210e2962143e22257ca098ef688363adf185bf\": rpc error: code = NotFound desc = could not find container \"106d028a0d253bff82c512a026210e2962143e22257ca098ef688363adf185bf\": container with ID starting with 106d028a0d253bff82c512a026210e2962143e22257ca098ef688363adf185bf not found: ID does not exist" Mar 19 12:39:09.705696 master-0 kubenswrapper[31830]: I0319 12:39:09.705649 31830 scope.go:117] "RemoveContainer" containerID="61b56427092a436fa2f7dc3f0c5731ad62e46663bc60cba2f039417ea70a38c7" Mar 19 12:39:09.706279 master-0 kubenswrapper[31830]: E0319 12:39:09.706251 31830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61b56427092a436fa2f7dc3f0c5731ad62e46663bc60cba2f039417ea70a38c7\": container with ID starting with 61b56427092a436fa2f7dc3f0c5731ad62e46663bc60cba2f039417ea70a38c7 not found: ID does not exist" containerID="61b56427092a436fa2f7dc3f0c5731ad62e46663bc60cba2f039417ea70a38c7" Mar 19 12:39:09.706361 master-0 kubenswrapper[31830]: I0319 12:39:09.706282 31830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61b56427092a436fa2f7dc3f0c5731ad62e46663bc60cba2f039417ea70a38c7"} err="failed to get container status \"61b56427092a436fa2f7dc3f0c5731ad62e46663bc60cba2f039417ea70a38c7\": rpc error: code = NotFound desc = could not find container \"61b56427092a436fa2f7dc3f0c5731ad62e46663bc60cba2f039417ea70a38c7\": container with ID starting with 61b56427092a436fa2f7dc3f0c5731ad62e46663bc60cba2f039417ea70a38c7 not found: ID does not exist" Mar 19 12:39:09.788862 master-0 kubenswrapper[31830]: I0319 12:39:09.788818 31830 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 19 12:39:09.788862 master-0 kubenswrapper[31830]: I0319 12:39:09.788854 31830 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 19 12:39:09.913959 master-0 kubenswrapper[31830]: I0319 12:39:09.913779 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 19 12:39:09.927652 master-0 kubenswrapper[31830]: I0319 12:39:09.927568 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 19 12:39:09.950442 master-0 kubenswrapper[31830]: I0319 12:39:09.950381 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 19 12:39:09.951308 master-0 kubenswrapper[31830]: E0319 12:39:09.951206 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c19c5aa9-c890-4aa1-b53c-4a13d0a61d67" containerName="nova-api-api" Mar 19 12:39:09.951308 master-0 kubenswrapper[31830]: I0319 12:39:09.951236 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="c19c5aa9-c890-4aa1-b53c-4a13d0a61d67" containerName="nova-api-api" Mar 19 12:39:09.951308 master-0 kubenswrapper[31830]: E0319 12:39:09.951257 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c19c5aa9-c890-4aa1-b53c-4a13d0a61d67" containerName="nova-api-log" Mar 19 12:39:09.951308 master-0 kubenswrapper[31830]: I0319 12:39:09.951266 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="c19c5aa9-c890-4aa1-b53c-4a13d0a61d67" containerName="nova-api-log" Mar 19 12:39:09.951819 master-0 kubenswrapper[31830]: I0319 12:39:09.951591 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="c19c5aa9-c890-4aa1-b53c-4a13d0a61d67" containerName="nova-api-log" Mar 19 12:39:09.951819 master-0 kubenswrapper[31830]: I0319 12:39:09.951653 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="c19c5aa9-c890-4aa1-b53c-4a13d0a61d67" containerName="nova-api-api" Mar 19 12:39:09.953633 master-0 kubenswrapper[31830]: I0319 12:39:09.953165 31830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 19 12:39:09.955261 master-0 kubenswrapper[31830]: I0319 12:39:09.955219 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 19 12:39:09.955449 master-0 kubenswrapper[31830]: I0319 12:39:09.955414 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 19 12:39:09.955504 master-0 kubenswrapper[31830]: I0319 12:39:09.955481 31830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 19 12:39:09.962870 master-0 kubenswrapper[31830]: I0319 12:39:09.962786 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 19 12:39:10.094906 master-0 kubenswrapper[31830]: I0319 12:39:10.094825 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsgx4\" (UniqueName: \"kubernetes.io/projected/d3e66899-914d-44fb-9a77-5a0dd045e6ce-kube-api-access-vsgx4\") pod \"nova-api-0\" (UID: \"d3e66899-914d-44fb-9a77-5a0dd045e6ce\") " pod="openstack/nova-api-0" Mar 19 12:39:10.095165 master-0 kubenswrapper[31830]: I0319 12:39:10.094958 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3e66899-914d-44fb-9a77-5a0dd045e6ce-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d3e66899-914d-44fb-9a77-5a0dd045e6ce\") " pod="openstack/nova-api-0" Mar 19 12:39:10.095165 master-0 kubenswrapper[31830]: I0319 12:39:10.095141 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3e66899-914d-44fb-9a77-5a0dd045e6ce-public-tls-certs\") pod \"nova-api-0\" (UID: \"d3e66899-914d-44fb-9a77-5a0dd045e6ce\") " pod="openstack/nova-api-0" Mar 19 12:39:10.095252 master-0 kubenswrapper[31830]: I0319 12:39:10.095212 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3e66899-914d-44fb-9a77-5a0dd045e6ce-config-data\") pod \"nova-api-0\" (UID: \"d3e66899-914d-44fb-9a77-5a0dd045e6ce\") " pod="openstack/nova-api-0" Mar 19 12:39:10.095291 master-0 kubenswrapper[31830]: I0319 12:39:10.095261 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3e66899-914d-44fb-9a77-5a0dd045e6ce-logs\") pod \"nova-api-0\" (UID: \"d3e66899-914d-44fb-9a77-5a0dd045e6ce\") " pod="openstack/nova-api-0" Mar 19 12:39:10.095414 master-0 kubenswrapper[31830]: I0319 12:39:10.095364 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e66899-914d-44fb-9a77-5a0dd045e6ce-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d3e66899-914d-44fb-9a77-5a0dd045e6ce\") " pod="openstack/nova-api-0" Mar 19 12:39:10.197615 master-0 kubenswrapper[31830]: I0319 12:39:10.197463 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3e66899-914d-44fb-9a77-5a0dd045e6ce-public-tls-certs\") pod \"nova-api-0\" (UID: \"d3e66899-914d-44fb-9a77-5a0dd045e6ce\") " pod="openstack/nova-api-0" Mar 19 12:39:10.198914 master-0 kubenswrapper[31830]: I0319 12:39:10.198887 31830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3e66899-914d-44fb-9a77-5a0dd045e6ce-config-data\") pod \"nova-api-0\" (UID: \"d3e66899-914d-44fb-9a77-5a0dd045e6ce\") " pod="openstack/nova-api-0" Mar 19 12:39:10.199084 master-0 kubenswrapper[31830]: I0319 12:39:10.199064 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3e66899-914d-44fb-9a77-5a0dd045e6ce-logs\") pod \"nova-api-0\" (UID: \"d3e66899-914d-44fb-9a77-5a0dd045e6ce\") " pod="openstack/nova-api-0" Mar 19 12:39:10.199452 master-0 kubenswrapper[31830]: I0319 12:39:10.199430 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e66899-914d-44fb-9a77-5a0dd045e6ce-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d3e66899-914d-44fb-9a77-5a0dd045e6ce\") " pod="openstack/nova-api-0" Mar 19 12:39:10.199664 master-0 kubenswrapper[31830]: I0319 12:39:10.199641 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsgx4\" (UniqueName: \"kubernetes.io/projected/d3e66899-914d-44fb-9a77-5a0dd045e6ce-kube-api-access-vsgx4\") pod \"nova-api-0\" (UID: \"d3e66899-914d-44fb-9a77-5a0dd045e6ce\") " pod="openstack/nova-api-0" Mar 19 12:39:10.199836 master-0 kubenswrapper[31830]: I0319 12:39:10.199462 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3e66899-914d-44fb-9a77-5a0dd045e6ce-logs\") pod \"nova-api-0\" (UID: \"d3e66899-914d-44fb-9a77-5a0dd045e6ce\") " pod="openstack/nova-api-0" Mar 19 12:39:10.199935 master-0 kubenswrapper[31830]: I0319 12:39:10.199916 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3e66899-914d-44fb-9a77-5a0dd045e6ce-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d3e66899-914d-44fb-9a77-5a0dd045e6ce\") " pod="openstack/nova-api-0" Mar 19 12:39:10.203862 master-0 kubenswrapper[31830]: I0319 12:39:10.203079 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e66899-914d-44fb-9a77-5a0dd045e6ce-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d3e66899-914d-44fb-9a77-5a0dd045e6ce\") " pod="openstack/nova-api-0" Mar 19 12:39:10.203862 master-0 kubenswrapper[31830]: I0319 12:39:10.203531 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3e66899-914d-44fb-9a77-5a0dd045e6ce-public-tls-certs\") pod \"nova-api-0\" (UID: \"d3e66899-914d-44fb-9a77-5a0dd045e6ce\") " pod="openstack/nova-api-0" Mar 19 12:39:10.204930 master-0 kubenswrapper[31830]: I0319 12:39:10.204851 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3e66899-914d-44fb-9a77-5a0dd045e6ce-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d3e66899-914d-44fb-9a77-5a0dd045e6ce\") " pod="openstack/nova-api-0" Mar 19 12:39:10.212673 master-0 kubenswrapper[31830]: I0319 12:39:10.212621 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3e66899-914d-44fb-9a77-5a0dd045e6ce-config-data\") pod \"nova-api-0\" (UID: \"d3e66899-914d-44fb-9a77-5a0dd045e6ce\") " pod="openstack/nova-api-0" Mar 19 12:39:10.232942 master-0 kubenswrapper[31830]: 
Mar 19 12:39:10.313007 master-0 kubenswrapper[31830]: I0319 12:39:10.312941 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 19 12:39:10.858354 master-0 kubenswrapper[31830]: I0319 12:39:10.858236 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 19 12:39:11.610701 master-0 kubenswrapper[31830]: I0319 12:39:11.610640 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d3e66899-914d-44fb-9a77-5a0dd045e6ce","Type":"ContainerStarted","Data":"4613e3214ab69cfda97bab5676a46bccffbf61d1b02242eca0d0a37f392c2e81"}
Mar 19 12:39:11.610701 master-0 kubenswrapper[31830]: I0319 12:39:11.610700 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d3e66899-914d-44fb-9a77-5a0dd045e6ce","Type":"ContainerStarted","Data":"770e0695b36e274443c3642223413884f9fcc61649e3e6bc5b72662357115223"}
Mar 19 12:39:11.610701 master-0 kubenswrapper[31830]: I0319 12:39:11.610714 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d3e66899-914d-44fb-9a77-5a0dd045e6ce","Type":"ContainerStarted","Data":"2d6a382296aaa68bce87eba4601ccdc7b39cf6eee3141157cd789b65fc7a3d1f"}
Mar 19 12:39:11.636093 master-0 kubenswrapper[31830]: I0319 12:39:11.635993 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.635971936 podStartE2EDuration="2.635971936s" podCreationTimestamp="2026-03-19 12:39:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 12:39:11.628901396 +0000 UTC m=+1490.177862110" watchObservedRunningTime="2026-03-19 12:39:11.635971936 +0000 UTC m=+1490.184932640"
Mar 19 12:39:11.694324 master-0 kubenswrapper[31830]: I0319 12:39:11.694283 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c19c5aa9-c890-4aa1-b53c-4a13d0a61d67" path="/var/lib/kubelet/pods/c19c5aa9-c890-4aa1-b53c-4a13d0a61d67/volumes"
Mar 19 12:39:13.909367 master-0 kubenswrapper[31830]: I0319 12:39:13.909318 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Mar 19 12:39:13.937112 master-0 kubenswrapper[31830]: I0319 12:39:13.937077 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Mar 19 12:39:14.669780 master-0 kubenswrapper[31830]: I0319 12:39:14.669728 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Mar 19 12:39:16.701447 master-0 kubenswrapper[31830]: I0319 12:39:16.701221 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Mar 19 12:39:16.701447 master-0 kubenswrapper[31830]: I0319 12:39:16.701274 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Mar 19 12:39:17.717076 master-0 kubenswrapper[31830]: I0319 12:39:17.716999 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="57dc08d4-80a2-48f0-b215-3ec2f688b480" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.7:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
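[Editor's note] The pod_startup_latency_tracker entry above records two durations: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window. Here firstStartedPulling/lastFinishedPulling are the zero time (no pull was needed), so the two agree at 2.635971936s. A quick check of the arithmetic, with timestamps taken from the entry:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339, "2026-03-19T12:39:09Z")
	observed, _ := time.Parse(time.RFC3339Nano, "2026-03-19T12:39:11.635971936Z")
	// No image pull happened (the pull timestamps are the zero time), so
	// podStartSLOduration == podStartE2EDuration.
	fmt.Println(observed.Sub(created)) // 2.635971936s
}
```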
podUID="57dc08d4-80a2-48f0-b215-3ec2f688b480" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.7:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 12:39:17.717643 master-0 kubenswrapper[31830]: I0319 12:39:17.717048 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="57dc08d4-80a2-48f0-b215-3ec2f688b480" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.7:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 12:39:20.314653 master-0 kubenswrapper[31830]: I0319 12:39:20.314579 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 19 12:39:20.314653 master-0 kubenswrapper[31830]: I0319 12:39:20.314658 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 19 12:39:21.327164 master-0 kubenswrapper[31830]: I0319 12:39:21.327054 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d3e66899-914d-44fb-9a77-5a0dd045e6ce" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.128.1.8:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 12:39:21.327709 master-0 kubenswrapper[31830]: I0319 12:39:21.327075 31830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d3e66899-914d-44fb-9a77-5a0dd045e6ce" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.128.1.8:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 12:39:24.701585 master-0 kubenswrapper[31830]: I0319 12:39:24.701501 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 19 12:39:24.701585 master-0 kubenswrapper[31830]: I0319 12:39:24.701573 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 19 12:39:26.706553 master-0 kubenswrapper[31830]: I0319 12:39:26.706482 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 19 12:39:26.707239 master-0 kubenswrapper[31830]: I0319 12:39:26.706617 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 19 12:39:26.711723 master-0 kubenswrapper[31830]: I0319 12:39:26.711652 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 19 12:39:26.714832 master-0 kubenswrapper[31830]: I0319 12:39:26.714765 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 19 12:39:28.314136 master-0 kubenswrapper[31830]: I0319 12:39:28.314073 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 19 12:39:28.314749 master-0 kubenswrapper[31830]: I0319 12:39:28.314148 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 19 12:39:30.320016 master-0 kubenswrapper[31830]: I0319 12:39:30.319971 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 19 12:39:30.320550 master-0 kubenswrapper[31830]: I0319 12:39:30.320311 31830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 19 12:39:30.326890 master-0 
Mar 19 12:39:30.874334 master-0 kubenswrapper[31830]: I0319 12:39:30.874282 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Mar 19 12:40:50.069113 master-0 kubenswrapper[31830]: I0319 12:40:50.069057 31830 scope.go:117] "RemoveContainer" containerID="fd83cfd0a5030aa8964d4c66ff8815628c26f20ae2479ce72f5d922d8ec37a7e"
Mar 19 12:40:50.105327 master-0 kubenswrapper[31830]: I0319 12:40:50.105295 31830 scope.go:117] "RemoveContainer" containerID="92fefa25e12d7b8a81ae96a8caa460bd21bac9b47e6a2e53320932f87e6b687f"
Mar 19 12:40:50.128531 master-0 kubenswrapper[31830]: I0319 12:40:50.128500 31830 scope.go:117] "RemoveContainer" containerID="2f87b8c68ab857eef8cf34308ba968c1014a0a051c2ab11f727223874338b6a9"
Mar 19 12:40:50.150430 master-0 kubenswrapper[31830]: I0319 12:40:50.150381 31830 scope.go:117] "RemoveContainer" containerID="ce24334c12809539e014acf572552cc188d98912ab79f2c4e36eb3def8067921"
Mar 19 12:41:50.242052 master-0 kubenswrapper[31830]: I0319 12:41:50.241983 31830 scope.go:117] "RemoveContainer" containerID="c6b6c2dfd53f8ce71a8ac8df5e32bb29b6a0598341925ba5374d674dbcfd0c09"
Mar 19 12:41:50.275555 master-0 kubenswrapper[31830]: I0319 12:41:50.275506 31830 scope.go:117] "RemoveContainer" containerID="329ba7480b6e708e250c679c2559841524a126d289614d3d443afda9ed16ada0"
Mar 19 12:41:50.297635 master-0 kubenswrapper[31830]: I0319 12:41:50.297418 31830 scope.go:117] "RemoveContainer" containerID="94720d41e4fa7e96d5c44a8a22c4f5e6ec5e00bb25811056d8a600795c74540b"
Mar 19 12:41:50.319861 master-0 kubenswrapper[31830]: I0319 12:41:50.319736 31830 scope.go:117] "RemoveContainer" containerID="8d1ac8ee1f8360dc06e0adf1aeb52ec9f0d7891f07e9f1014d748c748b70b638"
Mar 19 12:41:50.342888 master-0 kubenswrapper[31830]: I0319 12:41:50.342737 31830 scope.go:117] "RemoveContainer" containerID="fac1dfd8fe49e8139b79a255b8309437775b2298893d36f2236b561952a3d8e9"
Mar 19 12:42:50.427296 master-0 kubenswrapper[31830]: I0319 12:42:50.427233 31830 scope.go:117] "RemoveContainer" containerID="c1e486bc1b061db94e8c2a39ba8abda61e5e754c92bcec99626f94dd2915ed34"
Mar 19 12:42:50.455018 master-0 kubenswrapper[31830]: I0319 12:42:50.454958 31830 scope.go:117] "RemoveContainer" containerID="cb6af135f4ae69eedbb4aec9e3cbe89d878ef397b2a48c0d77f21c32471ee978"
Mar 19 12:44:33.059855 master-0 kubenswrapper[31830]: I0319 12:44:33.059763 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-2nfl5"]
Mar 19 12:44:33.073151 master-0 kubenswrapper[31830]: I0319 12:44:33.073062 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-2nfl5"]
Mar 19 12:44:33.694300 master-0 kubenswrapper[31830]: I0319 12:44:33.694252 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="674e9cd6-bf60-4cca-951f-de66c55e8ce5" path="/var/lib/kubelet/pods/674e9cd6-bf60-4cca-951f-de66c55e8ce5/volumes"
Mar 19 12:44:34.145067 master-0 kubenswrapper[31830]: I0319 12:44:34.144988 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-xj4l5"]
Mar 19 12:44:34.162781 master-0 kubenswrapper[31830]: I0319 12:44:34.161404 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-adbe-account-create-update-q4gms"]
Mar 19 12:44:34.176213 master-0 kubenswrapper[31830]: I0319 12:44:34.176140 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-a5bb-account-create-update-cvftl"]
Mar 19 12:44:34.186697 master-0 kubenswrapper[31830]: I0319 12:44:34.186616 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-a5bb-account-create-update-cvftl"]
Mar 19 12:44:34.197898 master-0 kubenswrapper[31830]: I0319 12:44:34.197834 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-xj4l5"]
Mar 19 12:44:34.208370 master-0 kubenswrapper[31830]: I0319 12:44:34.208306 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-adbe-account-create-update-q4gms"]
Mar 19 12:44:34.218775 master-0 kubenswrapper[31830]: I0319 12:44:34.218703 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-72bf-account-create-update-tzxdw"]
Mar 19 12:44:34.229273 master-0 kubenswrapper[31830]: I0319 12:44:34.229189 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-72bf-account-create-update-tzxdw"]
Mar 19 12:44:35.108043 master-0 kubenswrapper[31830]: I0319 12:44:35.107950 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-58xtb"]
Mar 19 12:44:35.118579 master-0 kubenswrapper[31830]: I0319 12:44:35.118501 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-58xtb"]
Mar 19 12:44:35.693427 master-0 kubenswrapper[31830]: I0319 12:44:35.693350 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="196bfd6e-b584-4ca9-a94d-f9928ae87a7f" path="/var/lib/kubelet/pods/196bfd6e-b584-4ca9-a94d-f9928ae87a7f/volumes"
Mar 19 12:44:35.694134 master-0 kubenswrapper[31830]: I0319 12:44:35.694112 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85fb70b5-81f7-417c-b0cf-f3c917d1bc90" path="/var/lib/kubelet/pods/85fb70b5-81f7-417c-b0cf-f3c917d1bc90/volumes"
Mar 19 12:44:35.694753 master-0 kubenswrapper[31830]: I0319 12:44:35.694723 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88c4c83b-bbcc-44f3-aa58-880fd24e1e3f" path="/var/lib/kubelet/pods/88c4c83b-bbcc-44f3-aa58-880fd24e1e3f/volumes"
Mar 19 12:44:35.695418 master-0 kubenswrapper[31830]: I0319 12:44:35.695388 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec706aca-7a17-4a8c-a287-80b6b964eed4" path="/var/lib/kubelet/pods/ec706aca-7a17-4a8c-a287-80b6b964eed4/volumes"
Mar 19 12:44:35.696642 master-0 kubenswrapper[31830]: I0319 12:44:35.696610 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa7e2b32-a302-4e00-8941-21b35df641fc" path="/var/lib/kubelet/pods/fa7e2b32-a302-4e00-8941-21b35df641fc/volumes"
Mar 19 12:44:45.045693 master-0 kubenswrapper[31830]: I0319 12:44:45.045534 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-vztcb"]
Mar 19 12:44:45.064258 master-0 kubenswrapper[31830]: I0319 12:44:45.064196 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-vztcb"]
Mar 19 12:44:45.692975 master-0 kubenswrapper[31830]: I0319 12:44:45.692897 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0a0609b-7128-44ad-b501-9216196d8987" path="/var/lib/kubelet/pods/b0a0609b-7128-44ad-b501-9216196d8987/volumes"
Mar 19 12:44:50.556565 master-0 kubenswrapper[31830]: I0319 12:44:50.556450 31830 scope.go:117] "RemoveContainer" containerID="0c4530a62fdf3cd68c58f24eb1d2e025b47fe120675f8cbf55c79a5a2e24e77a"
Mar 19 12:44:50.581749 master-0 kubenswrapper[31830]: I0319 12:44:50.581693 31830 scope.go:117] "RemoveContainer" containerID="a103e6a30198ac998cec4273345472dd70f07f0327e5a658845518c70f558342"
Mar 19 12:44:50.602344 master-0 kubenswrapper[31830]: I0319 12:44:50.602295 31830 scope.go:117] "RemoveContainer" containerID="2838e46f93b6698fd5daffcb4661afff24710a7bddea2a23ded88e4f7a25b00d"
Mar 19 12:44:50.639534 master-0 kubenswrapper[31830]: I0319 12:44:50.639438 31830 scope.go:117] "RemoveContainer" containerID="00867437923c4518c18d7788ac7fe2d5afaa1a2f7e97ed1c1ea3dda87757f76b"
Mar 19 12:44:50.674281 master-0 kubenswrapper[31830]: I0319 12:44:50.674161 31830 scope.go:117] "RemoveContainer" containerID="2364beb2d098a3420df16dd7c7a9d31908ea694493bacb80512b05fe0ba45bca"
Mar 19 12:44:50.697953 master-0 kubenswrapper[31830]: I0319 12:44:50.697773 31830 scope.go:117] "RemoveContainer" containerID="4af6ce1afd031981e1c519b4dc4c4c8877fdf83ab0d2a2c31403eb0d4ddae00d"
Mar 19 12:44:50.718431 master-0 kubenswrapper[31830]: I0319 12:44:50.718395 31830 scope.go:117] "RemoveContainer" containerID="3ac9d32465df04aa108558289ba5246d36f83fb17fd89e3eb122e66cab88d517"
Mar 19 12:45:07.050294 master-0 kubenswrapper[31830]: I0319 12:45:07.050200 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ec8b-account-create-update-bs98g"]
Mar 19 12:45:07.061554 master-0 kubenswrapper[31830]: I0319 12:45:07.061483 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-ec8b-account-create-update-bs98g"]
Mar 19 12:45:07.705963 master-0 kubenswrapper[31830]: I0319 12:45:07.705897 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="373619bf-a142-44fd-b4b4-25d7cc74dda4" path="/var/lib/kubelet/pods/373619bf-a142-44fd-b4b4-25d7cc74dda4/volumes"
Mar 19 12:45:08.050719 master-0 kubenswrapper[31830]: I0319 12:45:08.050637 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-mhxb6"]
Mar 19 12:45:08.066475 master-0 kubenswrapper[31830]: I0319 12:45:08.066397 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-mhxb6"]
Mar 19 12:45:09.035510 master-0 kubenswrapper[31830]: I0319 12:45:09.035424 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6934-account-create-update-q844m"]
Mar 19 12:45:09.052715 master-0 kubenswrapper[31830]: I0319 12:45:09.052658 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6934-account-create-update-q844m"]
Mar 19 12:45:09.066616 master-0 kubenswrapper[31830]: I0319 12:45:09.066561 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-m2k5f"]
Mar 19 12:45:09.076897 master-0 kubenswrapper[31830]: I0319 12:45:09.076832 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-m2k5f"]
Mar 19 12:45:09.722730 master-0 kubenswrapper[31830]: I0319 12:45:09.722666 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48c9a901-d8c0-453d-8525-bf69f7710e6b" path="/var/lib/kubelet/pods/48c9a901-d8c0-453d-8525-bf69f7710e6b/volumes"
Mar 19 12:45:09.724042 master-0 kubenswrapper[31830]: I0319 12:45:09.724007 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9559d792-d79a-48bf-9ad0-b157b0e2684f" path="/var/lib/kubelet/pods/9559d792-d79a-48bf-9ad0-b157b0e2684f/volumes"
Mar 19 12:45:09.725155 master-0 kubenswrapper[31830]: I0319 12:45:09.725083 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5409fe3-cc0f-4ba4-a1f3-93f2ae986204" path="/var/lib/kubelet/pods/b5409fe3-cc0f-4ba4-a1f3-93f2ae986204/volumes"
Mar 19 12:45:15.035887 master-0 kubenswrapper[31830]: I0319 12:45:15.035813 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-dfsd7"]
Mar 19 12:45:15.045576 master-0 kubenswrapper[31830]: I0319 12:45:15.045517 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-dfsd7"]
Mar 19 12:45:15.699709 master-0 kubenswrapper[31830]: I0319 12:45:15.699639 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82a35ae5-08db-4571-977b-95d26158480e" path="/var/lib/kubelet/pods/82a35ae5-08db-4571-977b-95d26158480e/volumes"
Mar 19 12:45:23.037240 master-0 kubenswrapper[31830]: I0319 12:45:23.037166 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-t4w4t"]
Mar 19 12:45:23.049392 master-0 kubenswrapper[31830]: I0319 12:45:23.049332 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-t4w4t"]
Mar 19 12:45:23.702010 master-0 kubenswrapper[31830]: I0319 12:45:23.701945 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78caf503-3472-47a9-9107-4d260f898fb2" path="/var/lib/kubelet/pods/78caf503-3472-47a9-9107-4d260f898fb2/volumes"
Mar 19 12:45:50.884787 master-0 kubenswrapper[31830]: I0319 12:45:50.884697 31830 scope.go:117] "RemoveContainer" containerID="a4ec721b7a729caf0e9c5cc83e7ede0e06a43afa55e6181e4b7e78f645a4f25e"
Mar 19 12:45:50.917209 master-0 kubenswrapper[31830]: I0319 12:45:50.917152 31830 scope.go:117] "RemoveContainer" containerID="496d2f74441c0012111b3d65a363d46cea5ee91d2808eb80d63004f1fafc2520"
Mar 19 12:45:50.949361 master-0 kubenswrapper[31830]: I0319 12:45:50.949299 31830 scope.go:117] "RemoveContainer" containerID="390461eb1a45005396151b9e9ee89c0e9643d193cd2f51a5a9bfac9fc52f4a20"
Mar 19 12:45:50.972248 master-0 kubenswrapper[31830]: I0319 12:45:50.972187 31830 scope.go:117] "RemoveContainer" containerID="7436c639eb9f8491ca4d4f335c8422f72e856a969b84b5eef85431950e8c53ad"
Mar 19 12:45:50.995541 master-0 kubenswrapper[31830]: I0319 12:45:50.995493 31830 scope.go:117] "RemoveContainer" containerID="1124965895bf1b35cf84997dac5a09c36254fdfa7302efcdbbb34dba7c6419a2"
Mar 19 12:45:51.018506 master-0 kubenswrapper[31830]: I0319 12:45:51.018439 31830 scope.go:117] "RemoveContainer" containerID="c8541bac8f6f1c48bacf942e26bbee206e33e3a2dd966b97b11dc0a4e13012a3"
Mar 19 12:46:02.065058 master-0 kubenswrapper[31830]: I0319 12:46:02.064992 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-lpz7t"]
Mar 19 12:46:02.080124 master-0 kubenswrapper[31830]: I0319 12:46:02.080067 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-lpz7t"]
Mar 19 12:46:03.705621 master-0 kubenswrapper[31830]: I0319 12:46:03.705552 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48e729f7-b182-49a0-8d92-174b44693dad" path="/var/lib/kubelet/pods/48e729f7-b182-49a0-8d92-174b44693dad/volumes"
Mar 19 12:46:08.033394 master-0 kubenswrapper[31830]: I0319 12:46:08.033325 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-v2lnb"]
Mar 19 12:46:08.045052 master-0 kubenswrapper[31830]: I0319 12:46:08.044977 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-zl2rr"]
Mar 19 12:46:08.055638 master-0 kubenswrapper[31830]: I0319 12:46:08.055550 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-cce1e-db-sync-n5228"]
Mar 19 12:46:08.067012 master-0 kubenswrapper[31830]: I0319 12:46:08.066964 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-v2lnb"]
Mar 19 12:46:08.079471 master-0 kubenswrapper[31830]: I0319 12:46:08.079402 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-zl2rr"]
Mar 19 12:46:08.096836 master-0 kubenswrapper[31830]: I0319 12:46:08.095152 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-cce1e-db-sync-n5228"]
Mar 19 12:46:09.692420 master-0 kubenswrapper[31830]: I0319 12:46:09.692332 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14936556-fa0b-48fb-91e5-0ca806871a6c" path="/var/lib/kubelet/pods/14936556-fa0b-48fb-91e5-0ca806871a6c/volumes"
Mar 19 12:46:09.693560 master-0 kubenswrapper[31830]: I0319 12:46:09.693533 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="538593b3-ec2b-4d6e-9f10-3e7add4f7b41" path="/var/lib/kubelet/pods/538593b3-ec2b-4d6e-9f10-3e7add4f7b41/volumes"
Mar 19 12:46:09.694416 master-0 kubenswrapper[31830]: I0319 12:46:09.694374 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afeb235b-1d56-46d5-9d18-dbbb5f50e141" path="/var/lib/kubelet/pods/afeb235b-1d56-46d5-9d18-dbbb5f50e141/volumes"
Mar 19 12:46:35.071207 master-0 kubenswrapper[31830]: I0319 12:46:35.071128 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/edpm-b-provisionserver-checksum-discovery-m9sgp"]
Mar 19 12:46:35.086938 master-0 kubenswrapper[31830]: I0319 12:46:35.086862 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/edpm-a-provisionserver-checksum-discovery-g8vpc"]
Mar 19 12:46:35.100537 master-0 kubenswrapper[31830]: I0319 12:46:35.100421 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/edpm-b-provisionserver-checksum-discovery-m9sgp"]
Mar 19 12:46:35.117892 master-0 kubenswrapper[31830]: I0319 12:46:35.117748 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/edpm-a-provisionserver-checksum-discovery-g8vpc"]
Mar 19 12:46:35.690896 master-0 kubenswrapper[31830]: I0319 12:46:35.690850 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35f4844e-6e9b-4f93-a711-1e673e39add8" path="/var/lib/kubelet/pods/35f4844e-6e9b-4f93-a711-1e673e39add8/volumes"
Mar 19 12:46:35.691543 master-0 kubenswrapper[31830]: I0319 12:46:35.691519 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f33cd3dd-af62-465e-8e5d-6a2ad7e86748" path="/var/lib/kubelet/pods/f33cd3dd-af62-465e-8e5d-6a2ad7e86748/volumes"
Mar 19 12:46:51.172919 master-0 kubenswrapper[31830]: I0319 12:46:51.172866 31830 scope.go:117] "RemoveContainer" containerID="dd0b18994ca266cf34b06540b08eedc5c31e1c3a1dcc6208aa709913cac07117"
Mar 19 12:46:51.196151 master-0 kubenswrapper[31830]: I0319 12:46:51.196070 31830 scope.go:117] "RemoveContainer" containerID="47e74fdc2de15d7bbd751f4fe10b141c16a6b7ba87d12191394e6ff93e2fac5f"
Mar 19 12:46:51.220524 master-0 kubenswrapper[31830]: I0319 12:46:51.220469 31830 scope.go:117] "RemoveContainer" containerID="a89e35e083fda0f7b8496b39ebf3184418372ea80a01637cafd88d8463a543c1"
Mar 19 12:46:51.244157 master-0 kubenswrapper[31830]: I0319 12:46:51.244119 31830 scope.go:117] "RemoveContainer" containerID="5d5134864377ff6ef0c678afb80694340e7ba20296330cbdce0ce81558d2ee8d"
Mar 19 12:46:51.282163 master-0 kubenswrapper[31830]: I0319 12:46:51.282108 31830 scope.go:117] "RemoveContainer" containerID="94708eec96ba6c731b90eff4b60c1dd5d1133f800120f84a99a86a59922034b4"
Mar 19 12:46:51.308741 master-0 kubenswrapper[31830]: I0319 12:46:51.308592 31830 scope.go:117] "RemoveContainer" containerID="680fb3101048559f79bb52dbc1af33ada1a89966aa320d17df762228772fa09b"
Mar 19 12:46:51.335701 master-0 kubenswrapper[31830]: I0319 12:46:51.335109 31830 scope.go:117] "RemoveContainer" containerID="55aeeab99a6e9fda0c5166cfb5d594105808d1546be7583628ed115f2fbfb80e"
Mar 19 12:46:51.363296 master-0 kubenswrapper[31830]: I0319 12:46:51.362552 31830 scope.go:117] "RemoveContainer" containerID="b24b3a3e71f958f5220aefdf55eef0c7125e6e352b028eb2a67ee1354e09a18c"
Mar 19 12:47:06.056715 master-0 kubenswrapper[31830]: I0319 12:47:06.056631 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-q72gc"]
Mar 19 12:47:06.073876 master-0 kubenswrapper[31830]: I0319 12:47:06.073024 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-q72gc"]
Mar 19 12:47:07.695747 master-0 kubenswrapper[31830]: I0319 12:47:07.695678 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a04e9d8-ceee-449a-9e77-ebbe7b230aa7" path="/var/lib/kubelet/pods/2a04e9d8-ceee-449a-9e77-ebbe7b230aa7/volumes"
Mar 19 12:47:08.036584 master-0 kubenswrapper[31830]: I0319 12:47:08.036527 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-2lrh7"]
Mar 19 12:47:08.053055 master-0 kubenswrapper[31830]: I0319 12:47:08.052991 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-fd15-account-create-update-bn6tm"]
Mar 19 12:47:08.069992 master-0 kubenswrapper[31830]: I0319 12:47:08.069941 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-4ebf-account-create-update-kdfvk"]
Mar 19 12:47:08.083264 master-0 kubenswrapper[31830]: I0319 12:47:08.083203 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-fd15-account-create-update-bn6tm"]
Mar 19 12:47:08.096076 master-0 kubenswrapper[31830]: I0319 12:47:08.096022 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-2lrh7"]
Mar 19 12:47:08.105749 master-0 kubenswrapper[31830]: I0319 12:47:08.105689 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-4ebf-account-create-update-kdfvk"]
Mar 19 12:47:09.694678 master-0 kubenswrapper[31830]: I0319 12:47:09.694619 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16baa845-ba2a-42b4-b0a8-3745b32f4f3e" path="/var/lib/kubelet/pods/16baa845-ba2a-42b4-b0a8-3745b32f4f3e/volumes"
Mar 19 12:47:09.695420 master-0 kubenswrapper[31830]: I0319 12:47:09.695391 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="617862b3-8acc-478a-a829-74116f0d4a3d" path="/var/lib/kubelet/pods/617862b3-8acc-478a-a829-74116f0d4a3d/volumes"
Mar 19 12:47:09.696117 master-0 kubenswrapper[31830]: I0319 12:47:09.696079 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abc7c0ae-fdcf-449d-86d3-51d23d43be6c" path="/var/lib/kubelet/pods/abc7c0ae-fdcf-449d-86d3-51d23d43be6c/volumes"
Mar 19 12:47:11.067629 master-0 kubenswrapper[31830]: I0319 12:47:11.067567 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-eb56-account-create-update-6w8tc"]
Mar 19 12:47:11.082611 master-0 kubenswrapper[31830]: I0319 12:47:11.082545 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-eb56-account-create-update-6w8tc"]
Mar 19 12:47:11.696906 master-0 kubenswrapper[31830]: I0319 12:47:11.696604 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ab21923-ce35-4200-b1d5-d0d20931131c" path="/var/lib/kubelet/pods/6ab21923-ce35-4200-b1d5-d0d20931131c/volumes"
Mar 19 12:47:19.032503 master-0 kubenswrapper[31830]: I0319 12:47:19.032449 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-n2tdc"]
Mar 19 12:47:19.042936 master-0 kubenswrapper[31830]: I0319 12:47:19.042890 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-n2tdc"]
Mar 19 12:47:19.697303 master-0 kubenswrapper[31830]: I0319 12:47:19.697019 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baac9526-2845-4c1c-8a75-3ed2dcc2f3fb" path="/var/lib/kubelet/pods/baac9526-2845-4c1c-8a75-3ed2dcc2f3fb/volumes"
Mar 19 12:47:51.512050 master-0 kubenswrapper[31830]: I0319 12:47:51.511974 31830 scope.go:117] "RemoveContainer" containerID="e49918a5328c9e8fc609ee5bfdeb6285aab3f4a17f79dd20701d65d315b463d2"
Mar 19 12:47:51.537492 master-0 kubenswrapper[31830]: I0319 12:47:51.537357 31830 scope.go:117] "RemoveContainer" containerID="41680a64ff551eb443dedace1d9017d81651ef0c8b09f3989fcecb0b1193b8cb"
Mar 19 12:47:51.562579 master-0 kubenswrapper[31830]: I0319 12:47:51.562537 31830 scope.go:117] "RemoveContainer" containerID="ccd4901cb7f40b867767eb4b65967f7c160f009a63bd042f805a8cc274ee1215"
Mar 19 12:47:51.584834 master-0 kubenswrapper[31830]: I0319 12:47:51.584582 31830 scope.go:117] "RemoveContainer" containerID="c6864d98e56e75c2b05fbdbcc589365e61373a6916698cf5fad181408914debb"
Mar 19 12:47:51.615726 master-0 kubenswrapper[31830]: I0319 12:47:51.615681 31830 scope.go:117] "RemoveContainer" containerID="5b6a4c0fbb52f639b00b943872c63c86cc221d3ba41fe0d38720ea5f5a4d4ab6"
Mar 19 12:47:51.654744 master-0 kubenswrapper[31830]: I0319 12:47:51.654677 31830 scope.go:117] "RemoveContainer" containerID="83c2b6d970e047f0cbe3cad688f6a2442f0248cfab20efd8c147fea71e37daff"
Mar 19 12:47:56.057015 master-0 kubenswrapper[31830]: I0319 12:47:56.056941 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ntd2j"]
Mar 19 12:47:56.069673 master-0 kubenswrapper[31830]: I0319 12:47:56.069605 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ntd2j"]
Mar 19 12:47:57.692080 master-0 kubenswrapper[31830]: I0319 12:47:57.691635 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="644035f0-0f52-4762-a1d3-1d4ce8745615" path="/var/lib/kubelet/pods/644035f0-0f52-4762-a1d3-1d4ce8745615/volumes"
Mar 19 12:48:16.052464 master-0 kubenswrapper[31830]: I0319 12:48:16.052320 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-mk6fp"]
Mar 19 12:48:16.065900 master-0 kubenswrapper[31830]: I0319 12:48:16.065834 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-mk6fp"]
Mar 19 12:48:17.694091 master-0 kubenswrapper[31830]: I0319 12:48:17.694027 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f24450d6-f939-4621-8d88-e0ecc012ebb6" path="/var/lib/kubelet/pods/f24450d6-f939-4621-8d88-e0ecc012ebb6/volumes"
Mar 19 12:48:21.043116 master-0 kubenswrapper[31830]: I0319 12:48:21.043051 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7qtfk"]
Mar 19 12:48:21.059736 master-0 kubenswrapper[31830]: I0319 12:48:21.059674 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7qtfk"]
Mar 19 12:48:21.691045 master-0 kubenswrapper[31830]: I0319 12:48:21.690996 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8188767-a3a9-4859-aa0f-bc448a038114" path="/var/lib/kubelet/pods/d8188767-a3a9-4859-aa0f-bc448a038114/volumes"
Mar 19 12:48:51.807868 master-0 kubenswrapper[31830]: I0319 12:48:51.807776 31830 scope.go:117] "RemoveContainer" containerID="a2c6f72f1e5bd45fdcd80e2b8f2624d82a6ae03875df1294a92a348d60246a6a"
Mar 19 12:48:51.835761 master-0 kubenswrapper[31830]: I0319 12:48:51.835713 31830 scope.go:117] "RemoveContainer" containerID="8ad1ad251def71f89d1bda5f55c372d1b9191806f1cb572e601abe10afabdfe9"
Mar 19 12:48:51.861428 master-0 kubenswrapper[31830]: I0319 12:48:51.861337 31830 scope.go:117] "RemoveContainer" containerID="83b29b8f4d632e9bb87c36f92639bab0483a79f1a1f95063473e19c4696969fd"
Mar 19 12:49:01.059682 master-0 kubenswrapper[31830]: I0319 12:49:01.059620 31830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-g9dgh"]
Mar 19 12:49:01.075886 master-0 kubenswrapper[31830]: I0319 12:49:01.075622 31830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-g9dgh"]
Mar 19 12:49:01.693094 master-0 kubenswrapper[31830]: I0319 12:49:01.693023 31830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f687b27-5451-41a8-a7cd-c90186c676ef" path="/var/lib/kubelet/pods/7f687b27-5451-41a8-a7cd-c90186c676ef/volumes"
Mar 19 12:49:51.960487 master-0 kubenswrapper[31830]: I0319 12:49:51.960419 31830 scope.go:117] "RemoveContainer" containerID="cadf188387f699cefd40e7da794d710a45135e00ec249edfca6af7a209f47a7e"
Mar 19 13:01:00.187441 master-0 kubenswrapper[31830]: I0319 13:01:00.187365 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29565421-z8tcz"]
Mar 19 13:01:00.191039 master-0 kubenswrapper[31830]: I0319 13:01:00.190986 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29565421-z8tcz"
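[Editor's note] The Job behind keystone-cron-29565421-z8tcz carries the CronJob controller's naming convention: the numeric suffix is the scheduled time in minutes since the Unix epoch, and decoding it lands exactly on the 13:01:00 timestamp at which the kubelet sees the pod:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// CronJob-created Jobs are suffixed with the scheduled time in minutes
	// since the Unix epoch; decoding keystone-cron's suffix recovers the
	// 13:01 schedule seen in the log.
	const suffix = 29565421
	fmt.Println(time.Unix(suffix*60, 0).UTC()) // 2026-03-19 13:01:00 +0000 UTC
}
```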
Need to start a new one" pod="openstack/keystone-cron-29565421-z8tcz" Mar 19 13:01:00.224351 master-0 kubenswrapper[31830]: I0319 13:01:00.224311 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29565421-z8tcz"] Mar 19 13:01:00.308296 master-0 kubenswrapper[31830]: I0319 13:01:00.308218 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmwhb\" (UniqueName: \"kubernetes.io/projected/3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37-kube-api-access-nmwhb\") pod \"keystone-cron-29565421-z8tcz\" (UID: \"3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37\") " pod="openstack/keystone-cron-29565421-z8tcz" Mar 19 13:01:00.308637 master-0 kubenswrapper[31830]: I0319 13:01:00.308592 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37-config-data\") pod \"keystone-cron-29565421-z8tcz\" (UID: \"3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37\") " pod="openstack/keystone-cron-29565421-z8tcz" Mar 19 13:01:00.308838 master-0 kubenswrapper[31830]: I0319 13:01:00.308814 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37-combined-ca-bundle\") pod \"keystone-cron-29565421-z8tcz\" (UID: \"3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37\") " pod="openstack/keystone-cron-29565421-z8tcz" Mar 19 13:01:00.308911 master-0 kubenswrapper[31830]: I0319 13:01:00.308879 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37-fernet-keys\") pod \"keystone-cron-29565421-z8tcz\" (UID: \"3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37\") " pod="openstack/keystone-cron-29565421-z8tcz" Mar 19 13:01:00.410867 master-0 kubenswrapper[31830]: I0319 13:01:00.410772 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37-config-data\") pod \"keystone-cron-29565421-z8tcz\" (UID: \"3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37\") " pod="openstack/keystone-cron-29565421-z8tcz" Mar 19 13:01:00.411084 master-0 kubenswrapper[31830]: I0319 13:01:00.410919 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37-combined-ca-bundle\") pod \"keystone-cron-29565421-z8tcz\" (UID: \"3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37\") " pod="openstack/keystone-cron-29565421-z8tcz" Mar 19 13:01:00.411084 master-0 kubenswrapper[31830]: I0319 13:01:00.410956 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37-fernet-keys\") pod \"keystone-cron-29565421-z8tcz\" (UID: \"3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37\") " pod="openstack/keystone-cron-29565421-z8tcz" Mar 19 13:01:00.411084 master-0 kubenswrapper[31830]: I0319 13:01:00.411067 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmwhb\" (UniqueName: \"kubernetes.io/projected/3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37-kube-api-access-nmwhb\") pod \"keystone-cron-29565421-z8tcz\" (UID: \"3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37\") " pod="openstack/keystone-cron-29565421-z8tcz" Mar 
19 13:01:00.416668 master-0 kubenswrapper[31830]: I0319 13:01:00.414517 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37-combined-ca-bundle\") pod \"keystone-cron-29565421-z8tcz\" (UID: \"3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37\") " pod="openstack/keystone-cron-29565421-z8tcz" Mar 19 13:01:00.427619 master-0 kubenswrapper[31830]: I0319 13:01:00.427565 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37-fernet-keys\") pod \"keystone-cron-29565421-z8tcz\" (UID: \"3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37\") " pod="openstack/keystone-cron-29565421-z8tcz" Mar 19 13:01:00.427830 master-0 kubenswrapper[31830]: I0319 13:01:00.427759 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmwhb\" (UniqueName: \"kubernetes.io/projected/3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37-kube-api-access-nmwhb\") pod \"keystone-cron-29565421-z8tcz\" (UID: \"3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37\") " pod="openstack/keystone-cron-29565421-z8tcz" Mar 19 13:01:00.428587 master-0 kubenswrapper[31830]: I0319 13:01:00.428552 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37-config-data\") pod \"keystone-cron-29565421-z8tcz\" (UID: \"3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37\") " pod="openstack/keystone-cron-29565421-z8tcz" Mar 19 13:01:00.527704 master-0 kubenswrapper[31830]: I0319 13:01:00.527633 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29565421-z8tcz" Mar 19 13:01:01.080826 master-0 kubenswrapper[31830]: I0319 13:01:01.080320 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29565421-z8tcz"] Mar 19 13:01:01.977887 master-0 kubenswrapper[31830]: I0319 13:01:01.977792 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29565421-z8tcz" event={"ID":"3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37","Type":"ContainerStarted","Data":"6d7c789830800fbd02acf0037be9d99c133db5aa245ff85b452032aea499c85e"} Mar 19 13:01:01.977887 master-0 kubenswrapper[31830]: I0319 13:01:01.977870 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29565421-z8tcz" event={"ID":"3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37","Type":"ContainerStarted","Data":"1054886c1286bb8d093a7c234e86046cf7ceb0dc67f61e644e977eef4f1914fd"} Mar 19 13:01:02.002929 master-0 kubenswrapper[31830]: I0319 13:01:02.002817 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29565421-z8tcz" podStartSLOduration=2.002783476 podStartE2EDuration="2.002783476s" podCreationTimestamp="2026-03-19 13:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 13:01:01.997027186 +0000 UTC m=+2800.545987900" watchObservedRunningTime="2026-03-19 13:01:02.002783476 +0000 UTC m=+2800.551744170" Mar 19 13:01:05.006367 master-0 kubenswrapper[31830]: I0319 13:01:05.006240 31830 generic.go:334] "Generic (PLEG): container finished" podID="3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37" containerID="6d7c789830800fbd02acf0037be9d99c133db5aa245ff85b452032aea499c85e" exitCode=0 Mar 19 13:01:05.006367 master-0 kubenswrapper[31830]: I0319 13:01:05.006297 31830 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29565421-z8tcz" event={"ID":"3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37","Type":"ContainerDied","Data":"6d7c789830800fbd02acf0037be9d99c133db5aa245ff85b452032aea499c85e"} Mar 19 13:01:06.478892 master-0 kubenswrapper[31830]: I0319 13:01:06.478845 31830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29565421-z8tcz" Mar 19 13:01:06.657978 master-0 kubenswrapper[31830]: I0319 13:01:06.657865 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmwhb\" (UniqueName: \"kubernetes.io/projected/3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37-kube-api-access-nmwhb\") pod \"3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37\" (UID: \"3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37\") " Mar 19 13:01:06.658161 master-0 kubenswrapper[31830]: I0319 13:01:06.658038 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37-fernet-keys\") pod \"3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37\" (UID: \"3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37\") " Mar 19 13:01:06.658161 master-0 kubenswrapper[31830]: I0319 13:01:06.658120 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37-combined-ca-bundle\") pod \"3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37\" (UID: \"3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37\") " Mar 19 13:01:06.658235 master-0 kubenswrapper[31830]: I0319 13:01:06.658213 31830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37-config-data\") pod \"3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37\" (UID: \"3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37\") " Mar 19 13:01:06.662029 master-0 kubenswrapper[31830]: I0319 13:01:06.661973 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37-kube-api-access-nmwhb" (OuterVolumeSpecName: "kube-api-access-nmwhb") pod "3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37" (UID: "3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37"). InnerVolumeSpecName "kube-api-access-nmwhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 13:01:06.664813 master-0 kubenswrapper[31830]: I0319 13:01:06.664775 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37" (UID: "3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 13:01:06.702023 master-0 kubenswrapper[31830]: I0319 13:01:06.701967 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37" (UID: "3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 13:01:06.726156 master-0 kubenswrapper[31830]: I0319 13:01:06.726097 31830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37-config-data" (OuterVolumeSpecName: "config-data") pod "3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37" (UID: "3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 13:01:06.761938 master-0 kubenswrapper[31830]: I0319 13:01:06.761868 31830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmwhb\" (UniqueName: \"kubernetes.io/projected/3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37-kube-api-access-nmwhb\") on node \"master-0\" DevicePath \"\"" Mar 19 13:01:06.761938 master-0 kubenswrapper[31830]: I0319 13:01:06.761918 31830 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37-fernet-keys\") on node \"master-0\" DevicePath \"\"" Mar 19 13:01:06.761938 master-0 kubenswrapper[31830]: I0319 13:01:06.761930 31830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 13:01:06.761938 master-0 kubenswrapper[31830]: I0319 13:01:06.761942 31830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 13:01:07.031235 master-0 kubenswrapper[31830]: I0319 13:01:07.031162 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29565421-z8tcz" event={"ID":"3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37","Type":"ContainerDied","Data":"1054886c1286bb8d093a7c234e86046cf7ceb0dc67f61e644e977eef4f1914fd"} Mar 19 13:01:07.031235 master-0 kubenswrapper[31830]: I0319 13:01:07.031204 31830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1054886c1286bb8d093a7c234e86046cf7ceb0dc67f61e644e977eef4f1914fd" Mar 19 13:01:07.031619 master-0 kubenswrapper[31830]: I0319 13:01:07.031244 31830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29565421-z8tcz" Mar 19 13:45:30.069766 master-0 kubenswrapper[31830]: I0319 13:45:30.069692 31830 trace.go:236] Trace[825841772]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-cell1-server-0" (19-Mar-2026 13:45:28.929) (total time: 1140ms): Mar 19 13:45:30.069766 master-0 kubenswrapper[31830]: Trace[825841772]: [1.140274458s] [1.140274458s] END Mar 19 13:55:11.090144 master-0 kubenswrapper[31830]: I0319 13:55:11.090012 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jgd9n/must-gather-vfgql"] Mar 19 13:55:11.090763 master-0 kubenswrapper[31830]: E0319 13:55:11.090737 31830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37" containerName="keystone-cron" Mar 19 13:55:11.090763 master-0 kubenswrapper[31830]: I0319 13:55:11.090762 31830 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37" containerName="keystone-cron" Mar 19 13:55:11.091210 master-0 kubenswrapper[31830]: I0319 13:55:11.091174 31830 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37" containerName="keystone-cron" Mar 19 13:55:11.110037 master-0 kubenswrapper[31830]: I0319 13:55:11.109941 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jgd9n/must-gather-ggbbc"] Mar 19 13:55:11.113830 master-0 kubenswrapper[31830]: I0319 13:55:11.111608 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jgd9n/must-gather-vfgql" Mar 19 13:55:11.113830 master-0 kubenswrapper[31830]: I0319 13:55:11.112406 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jgd9n/must-gather-ggbbc" Mar 19 13:55:11.116069 master-0 kubenswrapper[31830]: I0319 13:55:11.116009 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jgd9n"/"openshift-service-ca.crt" Mar 19 13:55:11.116276 master-0 kubenswrapper[31830]: I0319 13:55:11.116231 31830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jgd9n"/"kube-root-ca.crt" Mar 19 13:55:11.130819 master-0 kubenswrapper[31830]: I0319 13:55:11.127742 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jgd9n/must-gather-vfgql"] Mar 19 13:55:11.143962 master-0 kubenswrapper[31830]: I0319 13:55:11.143898 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jgd9n/must-gather-ggbbc"] Mar 19 13:55:11.289822 master-0 kubenswrapper[31830]: I0319 13:55:11.277627 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smdcd\" (UniqueName: \"kubernetes.io/projected/79187274-3077-4648-9dcd-7bb7fa356d8f-kube-api-access-smdcd\") pod \"must-gather-ggbbc\" (UID: \"79187274-3077-4648-9dcd-7bb7fa356d8f\") " pod="openshift-must-gather-jgd9n/must-gather-ggbbc" Mar 19 13:55:11.290696 master-0 kubenswrapper[31830]: I0319 13:55:11.290658 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/79187274-3077-4648-9dcd-7bb7fa356d8f-must-gather-output\") pod \"must-gather-ggbbc\" (UID: \"79187274-3077-4648-9dcd-7bb7fa356d8f\") " pod="openshift-must-gather-jgd9n/must-gather-ggbbc" Mar 19 13:55:11.290984 master-0 kubenswrapper[31830]: I0319 13:55:11.290968 31830 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgmmp\" (UniqueName: \"kubernetes.io/projected/aee2c4ca-fcb9-47e8-b6f4-c6cf82e925ea-kube-api-access-jgmmp\") pod \"must-gather-vfgql\" (UID: \"aee2c4ca-fcb9-47e8-b6f4-c6cf82e925ea\") " pod="openshift-must-gather-jgd9n/must-gather-vfgql" Mar 19 13:55:11.291138 master-0 kubenswrapper[31830]: I0319 13:55:11.291125 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/aee2c4ca-fcb9-47e8-b6f4-c6cf82e925ea-must-gather-output\") pod \"must-gather-vfgql\" (UID: \"aee2c4ca-fcb9-47e8-b6f4-c6cf82e925ea\") " pod="openshift-must-gather-jgd9n/must-gather-vfgql" Mar 19 13:55:11.393684 master-0 kubenswrapper[31830]: I0319 13:55:11.393557 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smdcd\" (UniqueName: \"kubernetes.io/projected/79187274-3077-4648-9dcd-7bb7fa356d8f-kube-api-access-smdcd\") pod \"must-gather-ggbbc\" (UID: \"79187274-3077-4648-9dcd-7bb7fa356d8f\") " pod="openshift-must-gather-jgd9n/must-gather-ggbbc" Mar 19 13:55:11.393684 master-0 kubenswrapper[31830]: I0319 13:55:11.393624 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/79187274-3077-4648-9dcd-7bb7fa356d8f-must-gather-output\") pod \"must-gather-ggbbc\" (UID: \"79187274-3077-4648-9dcd-7bb7fa356d8f\") " pod="openshift-must-gather-jgd9n/must-gather-ggbbc" Mar 19 13:55:11.393966 master-0 kubenswrapper[31830]: I0319 13:55:11.393722 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgmmp\" (UniqueName: \"kubernetes.io/projected/aee2c4ca-fcb9-47e8-b6f4-c6cf82e925ea-kube-api-access-jgmmp\") pod \"must-gather-vfgql\" (UID: \"aee2c4ca-fcb9-47e8-b6f4-c6cf82e925ea\") " pod="openshift-must-gather-jgd9n/must-gather-vfgql" Mar 19 13:55:11.393966 master-0 kubenswrapper[31830]: I0319 13:55:11.393772 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/aee2c4ca-fcb9-47e8-b6f4-c6cf82e925ea-must-gather-output\") pod \"must-gather-vfgql\" (UID: \"aee2c4ca-fcb9-47e8-b6f4-c6cf82e925ea\") " pod="openshift-must-gather-jgd9n/must-gather-vfgql" Mar 19 13:55:11.394369 master-0 kubenswrapper[31830]: I0319 13:55:11.394343 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/aee2c4ca-fcb9-47e8-b6f4-c6cf82e925ea-must-gather-output\") pod \"must-gather-vfgql\" (UID: \"aee2c4ca-fcb9-47e8-b6f4-c6cf82e925ea\") " pod="openshift-must-gather-jgd9n/must-gather-vfgql" Mar 19 13:55:11.394698 master-0 kubenswrapper[31830]: I0319 13:55:11.394674 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/79187274-3077-4648-9dcd-7bb7fa356d8f-must-gather-output\") pod \"must-gather-ggbbc\" (UID: \"79187274-3077-4648-9dcd-7bb7fa356d8f\") " pod="openshift-must-gather-jgd9n/must-gather-ggbbc" Mar 19 13:55:11.423523 master-0 kubenswrapper[31830]: I0319 13:55:11.423468 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smdcd\" (UniqueName: \"kubernetes.io/projected/79187274-3077-4648-9dcd-7bb7fa356d8f-kube-api-access-smdcd\") pod \"must-gather-ggbbc\" (UID: 
\"79187274-3077-4648-9dcd-7bb7fa356d8f\") " pod="openshift-must-gather-jgd9n/must-gather-ggbbc" Mar 19 13:55:11.430165 master-0 kubenswrapper[31830]: I0319 13:55:11.430134 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgmmp\" (UniqueName: \"kubernetes.io/projected/aee2c4ca-fcb9-47e8-b6f4-c6cf82e925ea-kube-api-access-jgmmp\") pod \"must-gather-vfgql\" (UID: \"aee2c4ca-fcb9-47e8-b6f4-c6cf82e925ea\") " pod="openshift-must-gather-jgd9n/must-gather-vfgql" Mar 19 13:55:11.492239 master-0 kubenswrapper[31830]: I0319 13:55:11.492203 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jgd9n/must-gather-vfgql" Mar 19 13:55:11.537417 master-0 kubenswrapper[31830]: I0319 13:55:11.537367 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jgd9n/must-gather-ggbbc" Mar 19 13:55:12.425831 master-0 kubenswrapper[31830]: I0319 13:55:12.422793 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jgd9n/must-gather-vfgql"] Mar 19 13:55:12.435439 master-0 kubenswrapper[31830]: I0319 13:55:12.433344 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jgd9n/must-gather-ggbbc"] Mar 19 13:55:12.454400 master-0 kubenswrapper[31830]: I0319 13:55:12.454027 31830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 19 13:55:12.738217 master-0 kubenswrapper[31830]: I0319 13:55:12.738145 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jgd9n/must-gather-vfgql" event={"ID":"aee2c4ca-fcb9-47e8-b6f4-c6cf82e925ea","Type":"ContainerStarted","Data":"27967f5f458dcbeaed729e11973b41887ad1a0409afc6bec091cf5cf4ffa0512"} Mar 19 13:55:12.741168 master-0 kubenswrapper[31830]: I0319 13:55:12.741116 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jgd9n/must-gather-ggbbc" event={"ID":"79187274-3077-4648-9dcd-7bb7fa356d8f","Type":"ContainerStarted","Data":"681c5a3666a1b2d609e6eaa173124911a2faa2da08085be383a9811e4494bcba"} Mar 19 13:55:14.774624 master-0 kubenswrapper[31830]: I0319 13:55:14.774491 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jgd9n/must-gather-ggbbc" event={"ID":"79187274-3077-4648-9dcd-7bb7fa356d8f","Type":"ContainerStarted","Data":"e88100aae36f1fbec4b8ce97ad04f689cd7d96942e4df0040aa1304274d08d29"} Mar 19 13:55:18.670933 master-0 kubenswrapper[31830]: I0319 13:55:18.670735 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-7d58488df-czxxt_3661faaa-2c9d-4fcd-a41f-71aa71a2e464/cluster-version-operator/0.log" Mar 19 13:55:20.955020 master-0 kubenswrapper[31830]: I0319 13:55:20.953065 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-r5pn2_b8442707-1048-49e7-883b-9dfc0c48eb15/controller/0.log" Mar 19 13:55:20.964461 master-0 kubenswrapper[31830]: I0319 13:55:20.964425 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-r5pn2_b8442707-1048-49e7-883b-9dfc0c48eb15/kube-rbac-proxy/0.log" Mar 19 13:55:21.070425 master-0 kubenswrapper[31830]: I0319 13:55:21.070375 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hx5pt_450fdf42-489c-4403-9c52-03c51471160c/controller/0.log" Mar 19 13:55:22.134543 master-0 kubenswrapper[31830]: I0319 13:55:22.133969 31830 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-c2bxj_4572c3d4-9030-4bd3-9f56-346d9f954254/nmstate-console-plugin/0.log" Mar 19 13:55:22.160460 master-0 kubenswrapper[31830]: I0319 13:55:22.157269 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-kf4wb_6e09c4f2-c7cc-46b5-b00c-385fde5f190f/nmstate-handler/0.log" Mar 19 13:55:22.229153 master-0 kubenswrapper[31830]: I0319 13:55:22.228166 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-m9dsd_3e910bd8-61ee-4627-a7ff-fc2ae9aec770/nmstate-metrics/0.log" Mar 19 13:55:22.240668 master-0 kubenswrapper[31830]: I0319 13:55:22.240625 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-m9dsd_3e910bd8-61ee-4627-a7ff-fc2ae9aec770/kube-rbac-proxy/0.log" Mar 19 13:55:22.263694 master-0 kubenswrapper[31830]: I0319 13:55:22.263589 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-jx76l_dd31e5af-9ecd-4aee-b004-dff990a8c353/nmstate-operator/0.log" Mar 19 13:55:22.288656 master-0 kubenswrapper[31830]: I0319 13:55:22.288614 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-4f74g_db5a3424-916f-441f-87c8-31bf62b4a07b/nmstate-webhook/0.log" Mar 19 13:55:22.511792 master-0 kubenswrapper[31830]: I0319 13:55:22.511734 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hx5pt_450fdf42-489c-4403-9c52-03c51471160c/frr/0.log" Mar 19 13:55:22.523200 master-0 kubenswrapper[31830]: I0319 13:55:22.523144 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hx5pt_450fdf42-489c-4403-9c52-03c51471160c/reloader/0.log" Mar 19 13:55:22.540086 master-0 kubenswrapper[31830]: I0319 13:55:22.540043 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hx5pt_450fdf42-489c-4403-9c52-03c51471160c/frr-metrics/0.log" Mar 19 13:55:22.553956 master-0 kubenswrapper[31830]: I0319 13:55:22.551991 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hx5pt_450fdf42-489c-4403-9c52-03c51471160c/kube-rbac-proxy/0.log" Mar 19 13:55:22.567187 master-0 kubenswrapper[31830]: I0319 13:55:22.565667 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hx5pt_450fdf42-489c-4403-9c52-03c51471160c/kube-rbac-proxy-frr/0.log" Mar 19 13:55:22.590674 master-0 kubenswrapper[31830]: I0319 13:55:22.588278 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hx5pt_450fdf42-489c-4403-9c52-03c51471160c/cp-frr-files/0.log" Mar 19 13:55:22.600252 master-0 kubenswrapper[31830]: I0319 13:55:22.600221 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hx5pt_450fdf42-489c-4403-9c52-03c51471160c/cp-reloader/0.log" Mar 19 13:55:22.614825 master-0 kubenswrapper[31830]: I0319 13:55:22.613215 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hx5pt_450fdf42-489c-4403-9c52-03c51471160c/cp-metrics/0.log" Mar 19 13:55:22.643760 master-0 kubenswrapper[31830]: I0319 13:55:22.640567 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-qltq6_7f0f4018-1edf-45aa-ae8d-9798bed919a2/frr-k8s-webhook-server/0.log" Mar 19 13:55:22.681946 master-0 kubenswrapper[31830]: I0319 
Mar 19 13:55:22.681946 master-0 kubenswrapper[31830]: I0319 13:55:22.681372 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-8ddbf4b7-fw4vt_a25ef66c-55db-41fb-83bc-be7e7981145b/manager/0.log"
Mar 19 13:55:22.698840 master-0 kubenswrapper[31830]: I0319 13:55:22.698528 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-8665ccc68-62qpd_87509d6c-30c1-48aa-a256-54fa004adcb6/webhook-server/0.log"
Mar 19 13:55:23.185829 master-0 kubenswrapper[31830]: I0319 13:55:23.185080 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-fxx75_7dc06b24-66ec-4b57-88d2-90bb6d42bb60/speaker/0.log"
Mar 19 13:55:23.197843 master-0 kubenswrapper[31830]: I0319 13:55:23.192328 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-fxx75_7dc06b24-66ec-4b57-88d2-90bb6d42bb60/kube-rbac-proxy/0.log"
Mar 19 13:55:24.944718 master-0 kubenswrapper[31830]: I0319 13:55:24.944651 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jgd9n/must-gather-vfgql" event={"ID":"aee2c4ca-fcb9-47e8-b6f4-c6cf82e925ea","Type":"ContainerStarted","Data":"4b58b7de2a5136b92c1939b46d17c17fb3883f03e924a36e47f46b1ffb287656"}
Mar 19 13:55:24.946544 master-0 kubenswrapper[31830]: I0319 13:55:24.946513 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jgd9n/must-gather-vfgql" event={"ID":"aee2c4ca-fcb9-47e8-b6f4-c6cf82e925ea","Type":"ContainerStarted","Data":"7f88e99e9170b0f8848a97dffbd87c3e6fa387cb8092509ac4e1fd2c92f269cf"}
Mar 19 13:55:24.950536 master-0 kubenswrapper[31830]: I0319 13:55:24.950472 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jgd9n/must-gather-ggbbc" event={"ID":"79187274-3077-4648-9dcd-7bb7fa356d8f","Type":"ContainerStarted","Data":"d6400caaea03f1fc0bfc790d18f1481a4f4d8796b76581f1c43effea8fa52ba5"}
Mar 19 13:55:24.968549 master-0 kubenswrapper[31830]: I0319 13:55:24.968447 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jgd9n/must-gather-vfgql" podStartSLOduration=3.293371613 podStartE2EDuration="14.968406949s" podCreationTimestamp="2026-03-19 13:55:10 +0000 UTC" firstStartedPulling="2026-03-19 13:55:12.453950315 +0000 UTC m=+6051.002911019" lastFinishedPulling="2026-03-19 13:55:24.128985651 +0000 UTC m=+6062.677946355" observedRunningTime="2026-03-19 13:55:24.961924996 +0000 UTC m=+6063.510885700" watchObservedRunningTime="2026-03-19 13:55:24.968406949 +0000 UTC m=+6063.517367653"
Mar 19 13:55:24.994204 master-0 kubenswrapper[31830]: I0319 13:55:24.994068 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jgd9n/must-gather-ggbbc" podStartSLOduration=13.456269407 podStartE2EDuration="14.994045803s" podCreationTimestamp="2026-03-19 13:55:10 +0000 UTC" firstStartedPulling="2026-03-19 13:55:12.453925364 +0000 UTC m=+6051.002886088" lastFinishedPulling="2026-03-19 13:55:13.99170176 +0000 UTC m=+6052.540662484" observedRunningTime="2026-03-19 13:55:24.988605011 +0000 UTC m=+6063.537565705" watchObservedRunningTime="2026-03-19 13:55:24.994045803 +0000 UTC m=+6063.543006507"
Mar 19 13:55:26.069694 master-0 kubenswrapper[31830]: I0319 13:55:26.069637 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-868d558fdf-npzgm_3bc9f2d2-5538-4448-842f-37acfc790ae0/oauth-openshift/0.log"
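The two "Observed pod startup duration" records just above log four raw timestamps alongside the two derived durations, and the arithmetic can be checked from the values shown: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from it. For must-gather-vfgql: 14.968406949s - (13:55:24.128985651 - 13:55:12.453950315) = 3.293371613s, matching the logged podStartSLOduration. A minimal sketch of this check in Python, inferred from the logged values rather than taken from kubelet source:

    from datetime import datetime

    def parse(ts):
        # Timestamps as printed in these records, e.g.
        # "2026-03-19 13:55:24.968406949 +0000 UTC". Python's %f handles
        # microseconds only, so the nanosecond digits are truncated;
        # results agree with the logged durations to about 1e-6 s.
        body = ts.split(" +")[0]
        if "." in body:
            head, frac = body.split(".")
            body = head + "." + frac[:6]
        else:
            body += ".000000"
        return datetime.strptime(body, "%Y-%m-%d %H:%M:%S.%f")

    created    = parse("2026-03-19 13:55:10 +0000 UTC")           # podCreationTimestamp
    running    = parse("2026-03-19 13:55:24.968406949 +0000 UTC") # watchObservedRunningTime
    pull_start = parse("2026-03-19 13:55:12.453950315 +0000 UTC") # firstStartedPulling
    pull_end   = parse("2026-03-19 13:55:24.128985651 +0000 UTC") # lastFinishedPulling

    e2e = (running - created).total_seconds()            # ~14.968407 -> podStartE2EDuration
    slo = e2e - (pull_end - pull_start).total_seconds()  # ~3.293372  -> podStartSLOduration
    print(f"E2E={e2e:.6f}s SLO={slo:.6f}s")

The same relation holds for must-gather-ggbbc (14.994045803 - 1.537776396 = 13.456269407), and the perf-node-gather-daemonset-vgtsh record at 13:55:30.030264 below shows the no-pull case, where both pull timestamps are the zero time 0001-01-01 and the SLO and E2E durations coincide.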
Mar 19 13:55:26.268908 master-0 kubenswrapper[31830]: I0319 13:55:26.265079 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcdctl/0.log"
Mar 19 13:55:26.930078 master-0 kubenswrapper[31830]: I0319 13:55:26.929360 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd/0.log"
Mar 19 13:55:26.954963 master-0 kubenswrapper[31830]: I0319 13:55:26.953786 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-metrics/0.log"
Mar 19 13:55:26.974066 master-0 kubenswrapper[31830]: I0319 13:55:26.974024 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-readyz/0.log"
Mar 19 13:55:26.984823 master-0 kubenswrapper[31830]: I0319 13:55:26.984763 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-rev/0.log"
Mar 19 13:55:26.999610 master-0 kubenswrapper[31830]: I0319 13:55:26.998142 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/setup/0.log"
Mar 19 13:55:27.023776 master-0 kubenswrapper[31830]: I0319 13:55:27.023248 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-ensure-env-vars/0.log"
Mar 19 13:55:27.037604 master-0 kubenswrapper[31830]: I0319 13:55:27.037358 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-resources-copy/0.log"
Mar 19 13:55:27.082863 master-0 kubenswrapper[31830]: I0319 13:55:27.082811 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_11f83dfb-da04-483f-b281-ebdb39f3ab27/installer/0.log"
Mar 19 13:55:27.115379 master-0 kubenswrapper[31830]: I0319 13:55:27.114473 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_8b48817c-05cd-430b-9b1f-9cc037f1ca77/installer/0.log"
Mar 19 13:55:27.427863 master-0 kubenswrapper[31830]: I0319 13:55:27.427740 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-pkgvq_d3017b5e-178e-49de-89d2-817a18398203/authentication-operator/1.log"
Mar 19 13:55:27.440073 master-0 kubenswrapper[31830]: I0319 13:55:27.440023 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-pkgvq_d3017b5e-178e-49de-89d2-817a18398203/authentication-operator/2.log"
Mar 19 13:55:27.973827 master-0 kubenswrapper[31830]: I0319 13:55:27.973750 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jgd9n/perf-node-gather-daemonset-vgtsh"]
Mar 19 13:55:27.976942 master-0 kubenswrapper[31830]: I0319 13:55:27.976893 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jgd9n/perf-node-gather-daemonset-vgtsh"
Mar 19 13:55:27.989512 master-0 kubenswrapper[31830]: I0319 13:55:27.989447 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jgd9n/perf-node-gather-daemonset-vgtsh"]
Mar 19 13:55:28.034729 master-0 kubenswrapper[31830]: I0319 13:55:28.034668 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6bb82b0d-9813-4d4c-baf1-9f133966d955-sys\") pod \"perf-node-gather-daemonset-vgtsh\" (UID: \"6bb82b0d-9813-4d4c-baf1-9f133966d955\") " pod="openshift-must-gather-jgd9n/perf-node-gather-daemonset-vgtsh"
Mar 19 13:55:28.034979 master-0 kubenswrapper[31830]: I0319 13:55:28.034932 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6bb82b0d-9813-4d4c-baf1-9f133966d955-lib-modules\") pod \"perf-node-gather-daemonset-vgtsh\" (UID: \"6bb82b0d-9813-4d4c-baf1-9f133966d955\") " pod="openshift-must-gather-jgd9n/perf-node-gather-daemonset-vgtsh"
Mar 19 13:55:28.035074 master-0 kubenswrapper[31830]: I0319 13:55:28.034998 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/6bb82b0d-9813-4d4c-baf1-9f133966d955-podres\") pod \"perf-node-gather-daemonset-vgtsh\" (UID: \"6bb82b0d-9813-4d4c-baf1-9f133966d955\") " pod="openshift-must-gather-jgd9n/perf-node-gather-daemonset-vgtsh"
Mar 19 13:55:28.035327 master-0 kubenswrapper[31830]: I0319 13:55:28.035296 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/6bb82b0d-9813-4d4c-baf1-9f133966d955-proc\") pod \"perf-node-gather-daemonset-vgtsh\" (UID: \"6bb82b0d-9813-4d4c-baf1-9f133966d955\") " pod="openshift-must-gather-jgd9n/perf-node-gather-daemonset-vgtsh"
Mar 19 13:55:28.035612 master-0 kubenswrapper[31830]: I0319 13:55:28.035557 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rgs6\" (UniqueName: \"kubernetes.io/projected/6bb82b0d-9813-4d4c-baf1-9f133966d955-kube-api-access-8rgs6\") pod \"perf-node-gather-daemonset-vgtsh\" (UID: \"6bb82b0d-9813-4d4c-baf1-9f133966d955\") " pod="openshift-must-gather-jgd9n/perf-node-gather-daemonset-vgtsh"
Mar 19 13:55:28.147816 master-0 kubenswrapper[31830]: I0319 13:55:28.140011 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rgs6\" (UniqueName: \"kubernetes.io/projected/6bb82b0d-9813-4d4c-baf1-9f133966d955-kube-api-access-8rgs6\") pod \"perf-node-gather-daemonset-vgtsh\" (UID: \"6bb82b0d-9813-4d4c-baf1-9f133966d955\") " pod="openshift-must-gather-jgd9n/perf-node-gather-daemonset-vgtsh"
Mar 19 13:55:28.147816 master-0 kubenswrapper[31830]: I0319 13:55:28.140147 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6bb82b0d-9813-4d4c-baf1-9f133966d955-sys\") pod \"perf-node-gather-daemonset-vgtsh\" (UID: \"6bb82b0d-9813-4d4c-baf1-9f133966d955\") " pod="openshift-must-gather-jgd9n/perf-node-gather-daemonset-vgtsh"
\"kubernetes.io/host-path/6bb82b0d-9813-4d4c-baf1-9f133966d955-lib-modules\") pod \"perf-node-gather-daemonset-vgtsh\" (UID: \"6bb82b0d-9813-4d4c-baf1-9f133966d955\") " pod="openshift-must-gather-jgd9n/perf-node-gather-daemonset-vgtsh" Mar 19 13:55:28.147816 master-0 kubenswrapper[31830]: I0319 13:55:28.140245 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/6bb82b0d-9813-4d4c-baf1-9f133966d955-podres\") pod \"perf-node-gather-daemonset-vgtsh\" (UID: \"6bb82b0d-9813-4d4c-baf1-9f133966d955\") " pod="openshift-must-gather-jgd9n/perf-node-gather-daemonset-vgtsh" Mar 19 13:55:28.147816 master-0 kubenswrapper[31830]: I0319 13:55:28.140335 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/6bb82b0d-9813-4d4c-baf1-9f133966d955-proc\") pod \"perf-node-gather-daemonset-vgtsh\" (UID: \"6bb82b0d-9813-4d4c-baf1-9f133966d955\") " pod="openshift-must-gather-jgd9n/perf-node-gather-daemonset-vgtsh" Mar 19 13:55:28.147816 master-0 kubenswrapper[31830]: I0319 13:55:28.140542 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/6bb82b0d-9813-4d4c-baf1-9f133966d955-proc\") pod \"perf-node-gather-daemonset-vgtsh\" (UID: \"6bb82b0d-9813-4d4c-baf1-9f133966d955\") " pod="openshift-must-gather-jgd9n/perf-node-gather-daemonset-vgtsh" Mar 19 13:55:28.147816 master-0 kubenswrapper[31830]: I0319 13:55:28.140946 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6bb82b0d-9813-4d4c-baf1-9f133966d955-sys\") pod \"perf-node-gather-daemonset-vgtsh\" (UID: \"6bb82b0d-9813-4d4c-baf1-9f133966d955\") " pod="openshift-must-gather-jgd9n/perf-node-gather-daemonset-vgtsh" Mar 19 13:55:28.147816 master-0 kubenswrapper[31830]: I0319 13:55:28.141037 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6bb82b0d-9813-4d4c-baf1-9f133966d955-lib-modules\") pod \"perf-node-gather-daemonset-vgtsh\" (UID: \"6bb82b0d-9813-4d4c-baf1-9f133966d955\") " pod="openshift-must-gather-jgd9n/perf-node-gather-daemonset-vgtsh" Mar 19 13:55:28.147816 master-0 kubenswrapper[31830]: I0319 13:55:28.141118 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/6bb82b0d-9813-4d4c-baf1-9f133966d955-podres\") pod \"perf-node-gather-daemonset-vgtsh\" (UID: \"6bb82b0d-9813-4d4c-baf1-9f133966d955\") " pod="openshift-must-gather-jgd9n/perf-node-gather-daemonset-vgtsh" Mar 19 13:55:28.155622 master-0 kubenswrapper[31830]: I0319 13:55:28.155588 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/assisted-installer_assisted-installer-controller-b6qm2_a9819a56-abb1-485c-b424-5c62e30d5afc/assisted-installer-controller/0.log" Mar 19 13:55:28.157458 master-0 kubenswrapper[31830]: I0319 13:55:28.157418 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rgs6\" (UniqueName: \"kubernetes.io/projected/6bb82b0d-9813-4d4c-baf1-9f133966d955-kube-api-access-8rgs6\") pod \"perf-node-gather-daemonset-vgtsh\" (UID: \"6bb82b0d-9813-4d4c-baf1-9f133966d955\") " pod="openshift-must-gather-jgd9n/perf-node-gather-daemonset-vgtsh" Mar 19 13:55:28.372182 master-0 kubenswrapper[31830]: I0319 13:55:28.372027 31830 util.go:30] "No sandbox for pod can be found. 
Mar 19 13:55:28.372182 master-0 kubenswrapper[31830]: I0319 13:55:28.372027 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jgd9n/perf-node-gather-daemonset-vgtsh"
Mar 19 13:55:28.603350 master-0 kubenswrapper[31830]: I0319 13:55:28.603092 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-7dcf5569b5-lkpgl_91112ce6-4f9d-44c1-a4e7-fea126554bcf/router/7.log"
Mar 19 13:55:28.620887 master-0 kubenswrapper[31830]: I0319 13:55:28.620745 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-7dcf5569b5-lkpgl_91112ce6-4f9d-44c1-a4e7-fea126554bcf/router/6.log"
Mar 19 13:55:28.940231 master-0 kubenswrapper[31830]: I0319 13:55:28.939107 31830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jgd9n/perf-node-gather-daemonset-vgtsh"]
Mar 19 13:55:28.997059 master-0 kubenswrapper[31830]: I0319 13:55:28.996976 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jgd9n/perf-node-gather-daemonset-vgtsh" event={"ID":"6bb82b0d-9813-4d4c-baf1-9f133966d955","Type":"ContainerStarted","Data":"74fd98a6ea2e9dad80b9d604b61725f05cda81b8674ab897d231fdd98f776e34"}
Mar 19 13:55:29.617707 master-0 kubenswrapper[31830]: I0319 13:55:29.617665 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-fdc5db968-8zh6r_979ba8cc-5a7b-4188-bf9e-c22d810888e9/oauth-apiserver/0.log"
Mar 19 13:55:29.629664 master-0 kubenswrapper[31830]: I0319 13:55:29.629629 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-fdc5db968-8zh6r_979ba8cc-5a7b-4188-bf9e-c22d810888e9/fix-audit-permissions/0.log"
Mar 19 13:55:30.008427 master-0 kubenswrapper[31830]: I0319 13:55:30.008367 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jgd9n/perf-node-gather-daemonset-vgtsh" event={"ID":"6bb82b0d-9813-4d4c-baf1-9f133966d955","Type":"ContainerStarted","Data":"b147995894e872c88f67ead8faaa5a96ba450d5d6df356332793c5db7e716266"}
Mar 19 13:55:30.008667 master-0 kubenswrapper[31830]: I0319 13:55:30.008579 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-must-gather-jgd9n/perf-node-gather-daemonset-vgtsh"
Mar 19 13:55:30.030264 master-0 kubenswrapper[31830]: I0319 13:55:30.030180 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jgd9n/perf-node-gather-daemonset-vgtsh" podStartSLOduration=3.030160874 podStartE2EDuration="3.030160874s" podCreationTimestamp="2026-03-19 13:55:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 13:55:30.024339592 +0000 UTC m=+6068.573300296" watchObservedRunningTime="2026-03-19 13:55:30.030160874 +0000 UTC m=+6068.579121578"
Mar 19 13:55:30.634787 master-0 kubenswrapper[31830]: I0319 13:55:30.634730 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-fblgs_cf6b6560-1731-4fb1-b3c2-8257002842d6/kube-rbac-proxy/0.log"
Mar 19 13:55:30.707543 master-0 kubenswrapper[31830]: I0319 13:55:30.707479 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-fblgs_cf6b6560-1731-4fb1-b3c2-8257002842d6/cluster-autoscaler-operator/0.log"
path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-ftml6_19de6601-10d4-4112-a21f-0398d2b160d1/cluster-baremetal-operator/2.log" Mar 19 13:55:30.728616 master-0 kubenswrapper[31830]: I0319 13:55:30.728572 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-ftml6_19de6601-10d4-4112-a21f-0398d2b160d1/cluster-baremetal-operator/3.log" Mar 19 13:55:30.745748 master-0 kubenswrapper[31830]: I0319 13:55:30.745701 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-ftml6_19de6601-10d4-4112-a21f-0398d2b160d1/baremetal-kube-rbac-proxy/0.log" Mar 19 13:55:30.766952 master-0 kubenswrapper[31830]: I0319 13:55:30.766658 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-tql86_44469a78-9300-4260-89e9-ea939de1357b/control-plane-machine-set-operator/0.log" Mar 19 13:55:30.767629 master-0 kubenswrapper[31830]: I0319 13:55:30.767598 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-tql86_44469a78-9300-4260-89e9-ea939de1357b/control-plane-machine-set-operator/1.log" Mar 19 13:55:30.790628 master-0 kubenswrapper[31830]: I0319 13:55:30.790569 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_ironic-proxy-mnfjh_0d92c44a-db10-4400-8eef-4d9930650684/ironic-proxy/0.log" Mar 19 13:55:30.809403 master-0 kubenswrapper[31830]: I0319 13:55:30.809259 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-75w5c_7b2ecb08-a0f9-4127-967c-7087dea4c0f6/kube-rbac-proxy/0.log" Mar 19 13:55:30.827176 master-0 kubenswrapper[31830]: I0319 13:55:30.827127 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-75w5c_7b2ecb08-a0f9-4127-967c-7087dea4c0f6/machine-api-operator/0.log" Mar 19 13:55:32.636848 master-0 kubenswrapper[31830]: I0319 13:55:32.634695 31830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jgd9n/master-0-debug-4zwv9"] Mar 19 13:55:32.636848 master-0 kubenswrapper[31830]: I0319 13:55:32.636357 31830 util.go:30] "No sandbox for pod can be found. 
Mar 19 13:55:32.636848 master-0 kubenswrapper[31830]: I0319 13:55:32.636357 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jgd9n/master-0-debug-4zwv9"
Mar 19 13:55:32.655812 master-0 kubenswrapper[31830]: I0319 13:55:32.654912 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b94p\" (UniqueName: \"kubernetes.io/projected/2bfdafd9-9994-4445-90b4-16dad46f4a52-kube-api-access-7b94p\") pod \"master-0-debug-4zwv9\" (UID: \"2bfdafd9-9994-4445-90b4-16dad46f4a52\") " pod="openshift-must-gather-jgd9n/master-0-debug-4zwv9"
Mar 19 13:55:32.655812 master-0 kubenswrapper[31830]: I0319 13:55:32.654970 31830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2bfdafd9-9994-4445-90b4-16dad46f4a52-host\") pod \"master-0-debug-4zwv9\" (UID: \"2bfdafd9-9994-4445-90b4-16dad46f4a52\") " pod="openshift-must-gather-jgd9n/master-0-debug-4zwv9"
Mar 19 13:55:32.757818 master-0 kubenswrapper[31830]: I0319 13:55:32.757743 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b94p\" (UniqueName: \"kubernetes.io/projected/2bfdafd9-9994-4445-90b4-16dad46f4a52-kube-api-access-7b94p\") pod \"master-0-debug-4zwv9\" (UID: \"2bfdafd9-9994-4445-90b4-16dad46f4a52\") " pod="openshift-must-gather-jgd9n/master-0-debug-4zwv9"
Mar 19 13:55:32.757818 master-0 kubenswrapper[31830]: I0319 13:55:32.757825 31830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2bfdafd9-9994-4445-90b4-16dad46f4a52-host\") pod \"master-0-debug-4zwv9\" (UID: \"2bfdafd9-9994-4445-90b4-16dad46f4a52\") " pod="openshift-must-gather-jgd9n/master-0-debug-4zwv9"
Mar 19 13:55:32.758550 master-0 kubenswrapper[31830]: I0319 13:55:32.758517 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2bfdafd9-9994-4445-90b4-16dad46f4a52-host\") pod \"master-0-debug-4zwv9\" (UID: \"2bfdafd9-9994-4445-90b4-16dad46f4a52\") " pod="openshift-must-gather-jgd9n/master-0-debug-4zwv9"
Mar 19 13:55:32.775686 master-0 kubenswrapper[31830]: I0319 13:55:32.775635 31830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b94p\" (UniqueName: \"kubernetes.io/projected/2bfdafd9-9994-4445-90b4-16dad46f4a52-kube-api-access-7b94p\") pod \"master-0-debug-4zwv9\" (UID: \"2bfdafd9-9994-4445-90b4-16dad46f4a52\") " pod="openshift-must-gather-jgd9n/master-0-debug-4zwv9"
Mar 19 13:55:32.952011 master-0 kubenswrapper[31830]: I0319 13:55:32.951875 31830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jgd9n/master-0-debug-4zwv9"
Mar 19 13:55:32.991472 master-0 kubenswrapper[31830]: W0319 13:55:32.991422 31830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2bfdafd9_9994_4445_90b4_16dad46f4a52.slice/crio-bbd300269ede990c7bd426303a4c04aa3991b31fc0259495451eab8ee35849a1 WatchSource:0}: Error finding container bbd300269ede990c7bd426303a4c04aa3991b31fc0259495451eab8ee35849a1: Status 404 returned error can't find the container with id bbd300269ede990c7bd426303a4c04aa3991b31fc0259495451eab8ee35849a1
Mar 19 13:55:33.038977 master-0 kubenswrapper[31830]: I0319 13:55:33.038926 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jgd9n/master-0-debug-4zwv9" event={"ID":"2bfdafd9-9994-4445-90b4-16dad46f4a52","Type":"ContainerStarted","Data":"bbd300269ede990c7bd426303a4c04aa3991b31fc0259495451eab8ee35849a1"}
Mar 19 13:55:34.458517 master-0 kubenswrapper[31830]: I0319 13:55:34.458392 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_metal3-65f8c5cc94-trthc_f262d280-de9c-40ab-a879-abfec51007e6/metal3-httpd/0.log"
Mar 19 13:55:34.766834 master-0 kubenswrapper[31830]: I0319 13:55:34.766708 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-cce1e-api-0_d02c596d-10f7-46cc-baef-11d61e942bb3/cinder-cce1e-api-log/0.log"
Mar 19 13:55:34.956871 master-0 kubenswrapper[31830]: I0319 13:55:34.956826 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-cce1e-api-0_d02c596d-10f7-46cc-baef-11d61e942bb3/cinder-api/0.log"
Mar 19 13:55:35.136696 master-0 kubenswrapper[31830]: I0319 13:55:35.136603 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-cce1e-backup-0_416ec9bb-4708-40d4-84c4-b5aec90024b6/cinder-backup/0.log"
Mar 19 13:55:35.243869 master-0 kubenswrapper[31830]: I0319 13:55:35.243758 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-cce1e-backup-0_416ec9bb-4708-40d4-84c4-b5aec90024b6/probe/0.log"
Mar 19 13:55:35.401026 master-0 kubenswrapper[31830]: I0319 13:55:35.400931 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-cce1e-scheduler-0_44170cd5-1ea2-462a-bfff-dc6f881e6138/cinder-scheduler/0.log"
Mar 19 13:55:35.463255 master-0 kubenswrapper[31830]: I0319 13:55:35.463200 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_metal3-65f8c5cc94-trthc_f262d280-de9c-40ab-a879-abfec51007e6/metal3-ironic/0.log"
Mar 19 13:55:35.484322 master-0 kubenswrapper[31830]: I0319 13:55:35.484271 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_metal3-65f8c5cc94-trthc_f262d280-de9c-40ab-a879-abfec51007e6/metal3-ramdisk-logs/0.log"
Mar 19 13:55:35.487082 master-0 kubenswrapper[31830]: I0319 13:55:35.487057 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-cce1e-scheduler-0_44170cd5-1ea2-462a-bfff-dc6f881e6138/probe/0.log"
Mar 19 13:55:35.496622 master-0 kubenswrapper[31830]: I0319 13:55:35.496576 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_metal3-65f8c5cc94-trthc_f262d280-de9c-40ab-a879-abfec51007e6/machine-os-images/0.log"
path="/var/log/pods/openstack_cinder-cce1e-volume-lvm-iscsi-0_3949bf7f-94ca-404b-ab0a-37fbed571a00/cinder-volume/0.log" Mar 19 13:55:35.791205 master-0 kubenswrapper[31830]: I0319 13:55:35.791151 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-cce1e-volume-lvm-iscsi-0_3949bf7f-94ca-404b-ab0a-37fbed571a00/probe/0.log" Mar 19 13:55:35.836769 master-0 kubenswrapper[31830]: I0319 13:55:35.836715 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7655479f8c-g8h6c_2ad8a503-0511-4da7-b07a-52da9ab0f637/dnsmasq-dns/0.log" Mar 19 13:55:35.843157 master-0 kubenswrapper[31830]: I0319 13:55:35.843087 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7655479f8c-g8h6c_2ad8a503-0511-4da7-b07a-52da9ab0f637/init/0.log" Mar 19 13:55:36.008387 master-0 kubenswrapper[31830]: I0319 13:55:36.008328 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_edpm-a-provisionserver-openstackprovisionserver-7444d659762wbcb_b7a848b2-11a9-47c9-881c-6ed12d3e3d1b/osp-httpd/0.log" Mar 19 13:55:36.014740 master-0 kubenswrapper[31830]: I0319 13:55:36.014701 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_edpm-a-provisionserver-openstackprovisionserver-7444d659762wbcb_b7a848b2-11a9-47c9-881c-6ed12d3e3d1b/init/0.log" Mar 19 13:55:36.116829 master-0 kubenswrapper[31830]: I0319 13:55:36.114352 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_metal3-baremetal-operator-78474bdc48-sl88n_c4d8205e-157b-4a66-9ee7-318bae255129/metal3-baremetal-operator/0.log" Mar 19 13:55:36.160764 master-0 kubenswrapper[31830]: I0319 13:55:36.158669 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_metal3-image-customization-5b889bff9b-dxbkp_ed8f0c5d-4f16-444c-b706-e78cf4036b87/machine-image-customization-controller/0.log" Mar 19 13:55:36.184878 master-0 kubenswrapper[31830]: I0319 13:55:36.184814 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_edpm-b-provisionserver-openstackprovisionserver-85bcff5d5fq8d8h_3bb563fb-d536-4cb0-9614-d331baa95e1b/osp-httpd/0.log" Mar 19 13:55:36.186957 master-0 kubenswrapper[31830]: I0319 13:55:36.186911 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_metal3-image-customization-5b889bff9b-dxbkp_ed8f0c5d-4f16-444c-b706-e78cf4036b87/machine-os-images/2.log" Mar 19 13:55:36.192820 master-0 kubenswrapper[31830]: I0319 13:55:36.191726 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_edpm-b-provisionserver-openstackprovisionserver-85bcff5d5fq8d8h_3bb563fb-d536-4cb0-9614-d331baa95e1b/init/0.log" Mar 19 13:55:36.289349 master-0 kubenswrapper[31830]: I0319 13:55:36.289165 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-f4e38-default-external-api-0_a6a28bf0-c9db-427e-9f5e-dd58ee654662/glance-log/0.log" Mar 19 13:55:36.318820 master-0 kubenswrapper[31830]: I0319 13:55:36.318181 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-f4e38-default-external-api-0_a6a28bf0-c9db-427e-9f5e-dd58ee654662/glance-httpd/0.log" Mar 19 13:55:36.413745 master-0 kubenswrapper[31830]: I0319 13:55:36.410941 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-f4e38-default-internal-api-0_d05de021-992c-4c11-bea3-1fea7fade5e5/glance-log/0.log" Mar 19 13:55:36.444827 master-0 kubenswrapper[31830]: I0319 13:55:36.443855 31830 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_glance-f4e38-default-internal-api-0_d05de021-992c-4c11-bea3-1fea7fade5e5/glance-httpd/0.log" Mar 19 13:55:36.487828 master-0 kubenswrapper[31830]: I0319 13:55:36.487761 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-569d794d4c-pmgr5_39eba887-ef1e-47a9-b6cf-6d445d0ae88b/keystone-api/0.log" Mar 19 13:55:36.502583 master-0 kubenswrapper[31830]: I0319 13:55:36.500054 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29565421-z8tcz_3d7ca7e3-5c5d-4f7e-81aa-a04d2d19ec37/keystone-cron/0.log" Mar 19 13:55:37.807217 master-0 kubenswrapper[31830]: I0319 13:55:37.807040 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-84gh4_ee3529ac-6135-438b-9334-40c63c1fbd3d/cluster-cloud-controller-manager/0.log" Mar 19 13:55:37.819556 master-0 kubenswrapper[31830]: I0319 13:55:37.816153 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-84gh4_ee3529ac-6135-438b-9334-40c63c1fbd3d/cluster-cloud-controller-manager/1.log" Mar 19 13:55:37.837820 master-0 kubenswrapper[31830]: I0319 13:55:37.833911 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-84gh4_ee3529ac-6135-438b-9334-40c63c1fbd3d/config-sync-controllers/0.log" Mar 19 13:55:37.841223 master-0 kubenswrapper[31830]: I0319 13:55:37.841194 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-84gh4_ee3529ac-6135-438b-9334-40c63c1fbd3d/config-sync-controllers/1.log" Mar 19 13:55:37.862837 master-0 kubenswrapper[31830]: I0319 13:55:37.860606 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-84gh4_ee3529ac-6135-438b-9334-40c63c1fbd3d/kube-rbac-proxy/0.log" Mar 19 13:55:38.404254 master-0 kubenswrapper[31830]: I0319 13:55:38.404204 31830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-jgd9n/perf-node-gather-daemonset-vgtsh" Mar 19 13:55:40.366865 master-0 kubenswrapper[31830]: I0319 13:55:40.366253 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-744f9dbf77-nr2k4_ad327a59-7879-4215-bb95-3f2be64cb97f/kube-rbac-proxy/0.log" Mar 19 13:55:40.441045 master-0 kubenswrapper[31830]: I0319 13:55:40.438935 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-744f9dbf77-nr2k4_ad327a59-7879-4215-bb95-3f2be64cb97f/cloud-credential-operator/0.log" Mar 19 13:55:40.649711 master-0 kubenswrapper[31830]: I0319 13:55:40.649459 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_f3cbc6ce-25bb-4672-bcf9-813c973d8bcf/memcached/0.log" Mar 19 13:55:40.794427 master-0 kubenswrapper[31830]: I0319 13:55:40.794382 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-949dd44b5-vklms_3514735a-13b6-4fed-a4e7-377a12bbc374/neutron-api/0.log" Mar 19 13:55:40.817278 master-0 kubenswrapper[31830]: I0319 13:55:40.817226 31830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-949dd44b5-vklms_3514735a-13b6-4fed-a4e7-377a12bbc374/neutron-httpd/0.log" Mar 19 13:55:40.951601 master-0 kubenswrapper[31830]: I0319 13:55:40.951497 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_d3e66899-914d-44fb-9a77-5a0dd045e6ce/nova-api-log/0.log" Mar 19 13:55:41.600021 master-0 kubenswrapper[31830]: I0319 13:55:41.598467 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_d3e66899-914d-44fb-9a77-5a0dd045e6ce/nova-api-api/0.log" Mar 19 13:55:41.717669 master-0 kubenswrapper[31830]: I0319 13:55:41.717619 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_50910557-81a5-4255-84eb-bd2ef2691a00/nova-cell0-conductor-conductor/0.log" Mar 19 13:55:41.838159 master-0 kubenswrapper[31830]: I0319 13:55:41.838096 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_19617489-e8f7-405b-b047-7344b57f32b4/nova-cell1-conductor-conductor/0.log" Mar 19 13:55:41.951150 master-0 kubenswrapper[31830]: I0319 13:55:41.951018 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_52b10d7b-aab9-490d-a80b-633a24199fa9/nova-cell1-novncproxy-novncproxy/0.log" Mar 19 13:55:42.044725 master-0 kubenswrapper[31830]: I0319 13:55:42.044686 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_57dc08d4-80a2-48f0-b215-3ec2f688b480/nova-metadata-log/0.log" Mar 19 13:55:42.101498 master-0 kubenswrapper[31830]: E0319 13:55:42.101438 31830 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.32.10:51264->192.168.32.10:43841: write tcp 192.168.32.10:51264->192.168.32.10:43841: write: broken pipe Mar 19 13:55:42.796599 master-0 kubenswrapper[31830]: I0319 13:55:42.796544 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_57dc08d4-80a2-48f0-b215-3ec2f688b480/nova-metadata-metadata/0.log" Mar 19 13:55:42.903860 master-0 kubenswrapper[31830]: I0319 13:55:42.903809 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_5b64fc39-b4e1-4fa2-bdd0-c991f4c35ba4/nova-scheduler-scheduler/0.log" Mar 19 13:55:42.953017 master-0 kubenswrapper[31830]: I0319 13:55:42.952904 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48/galera/0.log" Mar 19 13:55:42.991853 master-0 kubenswrapper[31830]: I0319 13:55:42.991784 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c8eb2be2-4a7e-414b-ac9d-e3fb25b21c48/mysql-bootstrap/0.log" Mar 19 13:55:43.039188 master-0 kubenswrapper[31830]: I0319 13:55:43.039081 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ae148a74-f9ec-4ee8-be58-c14c466f4b9f/galera/0.log" Mar 19 13:55:43.053558 master-0 kubenswrapper[31830]: I0319 13:55:43.053486 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ae148a74-f9ec-4ee8-be58-c14c466f4b9f/mysql-bootstrap/0.log" Mar 19 13:55:43.070512 master-0 kubenswrapper[31830]: I0319 13:55:43.070470 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_c928f8ae-cc84-4887-9b3b-dc1900338aab/openstackclient/0.log" Mar 19 13:55:43.095375 master-0 kubenswrapper[31830]: I0319 13:55:43.095211 31830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-kmq6z_0d516497-0523-41c4-a5cc-75fe94977ac3/ovn-controller/0.log" Mar 19 13:55:43.108338 master-0 kubenswrapper[31830]: I0319 13:55:43.108150 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-5sd9s_02abb8d5-6e39-493e-bc9c-7bcd2f99b423/openstack-network-exporter/0.log" Mar 19 13:55:43.128790 master-0 kubenswrapper[31830]: I0319 13:55:43.128641 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-xpwvp_3cc6301e-c3c2-4a62-af7b-122fbdcd5552/ovsdb-server/0.log" Mar 19 13:55:43.241840 master-0 kubenswrapper[31830]: I0319 13:55:43.241013 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-xpwvp_3cc6301e-c3c2-4a62-af7b-122fbdcd5552/ovs-vswitchd/0.log" Mar 19 13:55:43.249976 master-0 kubenswrapper[31830]: I0319 13:55:43.249947 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-xpwvp_3cc6301e-c3c2-4a62-af7b-122fbdcd5552/ovsdb-server-init/0.log" Mar 19 13:55:43.276063 master-0 kubenswrapper[31830]: I0319 13:55:43.276028 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f/ovn-northd/0.log" Mar 19 13:55:43.286899 master-0 kubenswrapper[31830]: I0319 13:55:43.286867 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e94c1fdb-20f2-4b64-b0c2-2ae1ef69f04f/openstack-network-exporter/0.log" Mar 19 13:55:43.305320 master-0 kubenswrapper[31830]: I0319 13:55:43.305221 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_63ea9eeb-9288-44f6-82fb-70ccfb935857/ovsdbserver-nb/0.log" Mar 19 13:55:43.321406 master-0 kubenswrapper[31830]: I0319 13:55:43.321368 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_63ea9eeb-9288-44f6-82fb-70ccfb935857/openstack-network-exporter/0.log" Mar 19 13:55:43.344757 master-0 kubenswrapper[31830]: I0319 13:55:43.343250 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_17d9c5b7-67e7-4189-9917-722938b3a343/ovsdbserver-sb/0.log" Mar 19 13:55:43.367012 master-0 kubenswrapper[31830]: I0319 13:55:43.366897 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_17d9c5b7-67e7-4189-9917-722938b3a343/openstack-network-exporter/0.log" Mar 19 13:55:43.441776 master-0 kubenswrapper[31830]: I0319 13:55:43.441694 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7d99c66444-6vrxg_29793e73-ea31-4460-9aa6-85235971e586/placement-log/0.log" Mar 19 13:55:43.472816 master-0 kubenswrapper[31830]: I0319 13:55:43.472760 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-nhvl4_aef8e03f-0363-4e13-b7ca-4fa871d77c62/openshift-config-operator/1.log" Mar 19 13:55:43.485639 master-0 kubenswrapper[31830]: I0319 13:55:43.485580 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-nhvl4_aef8e03f-0363-4e13-b7ca-4fa871d77c62/openshift-config-operator/2.log" Mar 19 13:55:43.506525 master-0 kubenswrapper[31830]: I0319 13:55:43.506471 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-nhvl4_aef8e03f-0363-4e13-b7ca-4fa871d77c62/openshift-api/0.log" Mar 19 13:55:43.515226 master-0 
Mar 19 13:55:43.515226 master-0 kubenswrapper[31830]: I0319 13:55:43.515171 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7d99c66444-6vrxg_29793e73-ea31-4460-9aa6-85235971e586/placement-api/0.log"
Mar 19 13:55:43.565880 master-0 kubenswrapper[31830]: I0319 13:55:43.563030 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e496a21c-f671-402f-a15c-911b063428c5/rabbitmq/0.log"
Mar 19 13:55:43.574115 master-0 kubenswrapper[31830]: I0319 13:55:43.574036 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e496a21c-f671-402f-a15c-911b063428c5/setup-container/0.log"
Mar 19 13:55:43.686524 master-0 kubenswrapper[31830]: I0319 13:55:43.685851 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_aee036d1-9a03-42ac-9beb-ef7ecc09c98d/rabbitmq/0.log"
Mar 19 13:55:43.695870 master-0 kubenswrapper[31830]: I0319 13:55:43.695785 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_aee036d1-9a03-42ac-9beb-ef7ecc09c98d/setup-container/0.log"
Mar 19 13:55:43.961228 master-0 kubenswrapper[31830]: I0319 13:55:43.959764 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-f6fffcbf4-vwj74_a9c3ee17-ae52-4dac-829c-7217ec01755d/proxy-httpd/0.log"
Mar 19 13:55:43.976050 master-0 kubenswrapper[31830]: I0319 13:55:43.975993 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-f6fffcbf4-vwj74_a9c3ee17-ae52-4dac-829c-7217ec01755d/proxy-server/0.log"
Mar 19 13:55:43.991231 master-0 kubenswrapper[31830]: I0319 13:55:43.988276 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-cm2zc_512e045f-7b25-4992-a593-227de5818bb3/swift-ring-rebalance/0.log"
Mar 19 13:55:44.019853 master-0 kubenswrapper[31830]: I0319 13:55:44.019365 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_736d878b-1328-4a36-873f-62849c4e2d07/account-server/0.log"
Mar 19 13:55:44.072958 master-0 kubenswrapper[31830]: I0319 13:55:44.072830 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_736d878b-1328-4a36-873f-62849c4e2d07/account-replicator/0.log"
Mar 19 13:55:44.080481 master-0 kubenswrapper[31830]: I0319 13:55:44.080433 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_736d878b-1328-4a36-873f-62849c4e2d07/account-auditor/0.log"
Mar 19 13:55:44.105417 master-0 kubenswrapper[31830]: I0319 13:55:44.105369 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_736d878b-1328-4a36-873f-62849c4e2d07/account-reaper/0.log"
Mar 19 13:55:44.118965 master-0 kubenswrapper[31830]: I0319 13:55:44.118525 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_736d878b-1328-4a36-873f-62849c4e2d07/container-server/0.log"
Mar 19 13:55:44.191664 master-0 kubenswrapper[31830]: I0319 13:55:44.191584 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_736d878b-1328-4a36-873f-62849c4e2d07/container-replicator/0.log"
Mar 19 13:55:44.201072 master-0 kubenswrapper[31830]: I0319 13:55:44.201030 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_736d878b-1328-4a36-873f-62849c4e2d07/container-auditor/0.log"
path="/var/log/pods/openstack_swift-storage-0_736d878b-1328-4a36-873f-62849c4e2d07/container-updater/0.log" Mar 19 13:55:44.232098 master-0 kubenswrapper[31830]: I0319 13:55:44.232043 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_736d878b-1328-4a36-873f-62849c4e2d07/object-server/0.log" Mar 19 13:55:44.261085 master-0 kubenswrapper[31830]: I0319 13:55:44.261035 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_736d878b-1328-4a36-873f-62849c4e2d07/object-replicator/0.log" Mar 19 13:55:44.294371 master-0 kubenswrapper[31830]: I0319 13:55:44.294325 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_736d878b-1328-4a36-873f-62849c4e2d07/object-auditor/0.log" Mar 19 13:55:44.304848 master-0 kubenswrapper[31830]: I0319 13:55:44.304779 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_736d878b-1328-4a36-873f-62849c4e2d07/object-updater/0.log" Mar 19 13:55:44.313579 master-0 kubenswrapper[31830]: I0319 13:55:44.313526 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_736d878b-1328-4a36-873f-62849c4e2d07/object-expirer/0.log" Mar 19 13:55:44.326268 master-0 kubenswrapper[31830]: I0319 13:55:44.326226 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_736d878b-1328-4a36-873f-62849c4e2d07/rsync/0.log" Mar 19 13:55:44.337675 master-0 kubenswrapper[31830]: I0319 13:55:44.337619 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_736d878b-1328-4a36-873f-62849c4e2d07/swift-recon-cron/0.log" Mar 19 13:55:44.932638 master-0 kubenswrapper[31830]: I0319 13:55:44.932590 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-76b6568d85-5dzwk_2d70b7a8-5cd6-4fdf-a9a5-c15cc137b2d5/console-operator/0.log" Mar 19 13:55:44.991834 master-0 kubenswrapper[31830]: I0319 13:55:44.991771 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-76b6568d85-5dzwk_2d70b7a8-5cd6-4fdf-a9a5-c15cc137b2d5/console-operator/1.log" Mar 19 13:55:45.992625 master-0 kubenswrapper[31830]: I0319 13:55:45.992567 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-778974b6d8-gqqzj_fc8fbfa9-d55d-470b-aabc-96b9f0c15790/console/0.log" Mar 19 13:55:46.098314 master-0 kubenswrapper[31830]: I0319 13:55:46.097958 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-66b8ffb895-264cc_32ddfe6f-9155-424c-979c-5b4cf426680c/download-server/0.log" Mar 19 13:55:47.431286 master-0 kubenswrapper[31830]: I0319 13:55:47.431234 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-7d87854d6-6wzws_0f97d998-530c-4d9d-a030-ca1d9d2d4490/cluster-storage-operator/0.log" Mar 19 13:55:47.436457 master-0 kubenswrapper[31830]: I0319 13:55:47.435872 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-7d87854d6-6wzws_0f97d998-530c-4d9d-a030-ca1d9d2d4490/cluster-storage-operator/1.log" Mar 19 13:55:47.455681 master-0 kubenswrapper[31830]: I0319 13:55:47.455644 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-6m654_944eac68-e72b-4aed-b5dc-d7d9703178a3/snapshot-controller/4.log" Mar 19 
Mar 19 13:55:47.456730 master-0 kubenswrapper[31830]: I0319 13:55:47.456698 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-6m654_944eac68-e72b-4aed-b5dc-d7d9703178a3/snapshot-controller/5.log"
Mar 19 13:55:47.491881 master-0 kubenswrapper[31830]: I0319 13:55:47.488776 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5f5d689c6b-2chdm_a7747954-a222-4809-8656-818203b55ee8/csi-snapshot-controller-operator/0.log"
Mar 19 13:55:48.468821 master-0 kubenswrapper[31830]: I0319 13:55:48.468529 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-9c5679d8f-z6kvm_ab54833d-e57b-479d-b171-68155f6566f1/dns-operator/0.log"
Mar 19 13:55:48.520837 master-0 kubenswrapper[31830]: I0319 13:55:48.520281 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-9c5679d8f-z6kvm_ab54833d-e57b-479d-b171-68155f6566f1/kube-rbac-proxy/0.log"
Mar 19 13:55:49.382921 master-0 kubenswrapper[31830]: I0319 13:55:49.382867 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-zjdkm_f236a5ab-b400-46fc-94ee-1fff476d6458/dns/0.log"
Mar 19 13:55:49.397302 master-0 kubenswrapper[31830]: I0319 13:55:49.397252 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-zjdkm_f236a5ab-b400-46fc-94ee-1fff476d6458/kube-rbac-proxy/0.log"
Mar 19 13:55:49.414642 master-0 kubenswrapper[31830]: I0319 13:55:49.414561 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-jqzxt_4800b72f-7e54-4069-b771-87fb459eeb78/dns-node-resolver/0.log"
Mar 19 13:55:50.308237 master-0 kubenswrapper[31830]: I0319 13:55:50.308174 31830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jgd9n/master-0-debug-4zwv9" event={"ID":"2bfdafd9-9994-4445-90b4-16dad46f4a52","Type":"ContainerStarted","Data":"a38a67e452d8ae7921862f9e689fabdb07c65b6ee5262090c92efca3f4e0abad"}
Mar 19 13:55:50.325117 master-0 kubenswrapper[31830]: I0319 13:55:50.325042 31830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jgd9n/master-0-debug-4zwv9" podStartSLOduration=1.4678085379999999 podStartE2EDuration="18.325024152s" podCreationTimestamp="2026-03-19 13:55:32 +0000 UTC" firstStartedPulling="2026-03-19 13:55:32.99894609 +0000 UTC m=+6071.547906794" lastFinishedPulling="2026-03-19 13:55:49.856161704 +0000 UTC m=+6088.405122408" observedRunningTime="2026-03-19 13:55:50.320510471 +0000 UTC m=+6088.869471175" watchObservedRunningTime="2026-03-19 13:55:50.325024152 +0000 UTC m=+6088.873984856"
Mar 19 13:55:50.379150 master-0 kubenswrapper[31830]: I0319 13:55:50.379103 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-sc4kz_9702fc8c-4fe0-413b-b2d4-db23021d42b8/etcd-operator/0.log"
Mar 19 13:55:50.384873 master-0 kubenswrapper[31830]: I0319 13:55:50.384596 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-sc4kz_9702fc8c-4fe0-413b-b2d4-db23021d42b8/etcd-operator/1.log"
Mar 19 13:55:51.331620 master-0 kubenswrapper[31830]: I0319 13:55:51.331506 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcdctl/0.log"
Mar 19 13:55:51.944439 master-0 kubenswrapper[31830]: I0319 13:55:51.944390 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd/0.log"
Mar 19 13:55:51.974783 master-0 kubenswrapper[31830]: I0319 13:55:51.974533 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-metrics/0.log"
Mar 19 13:55:51.989208 master-0 kubenswrapper[31830]: I0319 13:55:51.989166 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-readyz/0.log"
Mar 19 13:55:52.006830 master-0 kubenswrapper[31830]: I0319 13:55:52.005842 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-rev/0.log"
Mar 19 13:55:52.023920 master-0 kubenswrapper[31830]: I0319 13:55:52.023878 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/setup/0.log"
Mar 19 13:55:52.039358 master-0 kubenswrapper[31830]: I0319 13:55:52.039314 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-ensure-env-vars/0.log"
Mar 19 13:55:52.061713 master-0 kubenswrapper[31830]: I0319 13:55:52.061656 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-resources-copy/0.log"
Mar 19 13:55:52.109983 master-0 kubenswrapper[31830]: I0319 13:55:52.109929 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_11f83dfb-da04-483f-b281-ebdb39f3ab27/installer/0.log"
Mar 19 13:55:52.159766 master-0 kubenswrapper[31830]: I0319 13:55:52.159723 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_8b48817c-05cd-430b-9b1f-9cc037f1ca77/installer/0.log"
Mar 19 13:55:53.145100 master-0 kubenswrapper[31830]: I0319 13:55:53.145044 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_cluster-image-registry-operator-5549dc66cb-g6sn6_82b98dca-59f9-42be-94ca-4a2a2b6fea0f/cluster-image-registry-operator/0.log"
Mar 19 13:55:53.162616 master-0 kubenswrapper[31830]: I0319 13:55:53.162022 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-8p7qr_6241ae9b-177b-4d97-9366-479855d8464f/node-ca/0.log"
Mar 19 13:55:53.945949 master-0 kubenswrapper[31830]: I0319 13:55:53.945899 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-btppx_b80027fd-7b39-477a-a337-ff9bb08e7eeb/ingress-operator/5.log"
Mar 19 13:55:53.952387 master-0 kubenswrapper[31830]: I0319 13:55:53.952340 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-btppx_b80027fd-7b39-477a-a337-ff9bb08e7eeb/ingress-operator/6.log"
Mar 19 13:55:53.965917 master-0 kubenswrapper[31830]: I0319 13:55:53.965875 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-btppx_b80027fd-7b39-477a-a337-ff9bb08e7eeb/kube-rbac-proxy/0.log"
Mar 19 13:55:54.789047 master-0 kubenswrapper[31830]: I0319 13:55:54.789015 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-w8jqs_36e5fec9-7fb5-4460-8bb4-4b9e36fae978/serve-healthcheck-canary/0.log"
kubenswrapper[31830]: I0319 13:55:55.486831 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-68bf6ff9d6-djdmh_4264e82c-387f-4aa6-9ef6-b7beb61e098c/insights-operator/0.log" Mar 19 13:55:57.586466 master-0 kubenswrapper[31830]: I0319 13:55:57.586337 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_3ae07d43-4069-4d70-9960-0fd6b158fa76/alertmanager/0.log" Mar 19 13:55:57.606831 master-0 kubenswrapper[31830]: I0319 13:55:57.606716 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_3ae07d43-4069-4d70-9960-0fd6b158fa76/config-reloader/0.log" Mar 19 13:55:57.623131 master-0 kubenswrapper[31830]: I0319 13:55:57.622926 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_3ae07d43-4069-4d70-9960-0fd6b158fa76/kube-rbac-proxy-web/0.log" Mar 19 13:55:57.639614 master-0 kubenswrapper[31830]: I0319 13:55:57.638717 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_3ae07d43-4069-4d70-9960-0fd6b158fa76/kube-rbac-proxy/0.log" Mar 19 13:55:57.654874 master-0 kubenswrapper[31830]: I0319 13:55:57.654781 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_3ae07d43-4069-4d70-9960-0fd6b158fa76/kube-rbac-proxy-metric/0.log" Mar 19 13:55:57.666890 master-0 kubenswrapper[31830]: I0319 13:55:57.666840 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_3ae07d43-4069-4d70-9960-0fd6b158fa76/prom-label-proxy/0.log" Mar 19 13:55:57.687029 master-0 kubenswrapper[31830]: I0319 13:55:57.686958 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_3ae07d43-4069-4d70-9960-0fd6b158fa76/init-config-reloader/0.log" Mar 19 13:55:57.796776 master-0 kubenswrapper[31830]: I0319 13:55:57.796725 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cmbhs5_0dda4422-e7ac-48a5-8e06-5ebab86395ab/extract/0.log" Mar 19 13:55:57.801111 master-0 kubenswrapper[31830]: I0319 13:55:57.801054 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_cluster-monitoring-operator-58845fbb57-92c5d_7241bf11-192e-47db-9d80-2324938ed34c/cluster-monitoring-operator/0.log" Mar 19 13:55:57.802989 master-0 kubenswrapper[31830]: I0319 13:55:57.802957 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cmbhs5_0dda4422-e7ac-48a5-8e06-5ebab86395ab/util/0.log" Mar 19 13:55:57.812724 master-0 kubenswrapper[31830]: I0319 13:55:57.812679 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cmbhs5_0dda4422-e7ac-48a5-8e06-5ebab86395ab/pull/0.log" Mar 19 13:55:57.819732 master-0 kubenswrapper[31830]: I0319 13:55:57.819688 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7bbc969446-bnf7q_bb1000ab-4419-43ce-b1b7-8f43413e017f/kube-state-metrics/0.log" Mar 19 13:55:57.826511 master-0 kubenswrapper[31830]: I0319 13:55:57.826445 31830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59bc569d95-x6nhk_528a7681-3153-4efc-9a5b-538929555c6d/manager/0.log" Mar 19 13:55:57.833910 master-0 kubenswrapper[31830]: I0319 13:55:57.833850 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7bbc969446-bnf7q_bb1000ab-4419-43ce-b1b7-8f43413e017f/kube-rbac-proxy-main/0.log" Mar 19 13:55:57.856381 master-0 kubenswrapper[31830]: I0319 13:55:57.856028 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7bbc969446-bnf7q_bb1000ab-4419-43ce-b1b7-8f43413e017f/kube-rbac-proxy-self/0.log" Mar 19 13:55:57.872494 master-0 kubenswrapper[31830]: I0319 13:55:57.872322 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_metrics-server-c8c668d4-qqj8z_6aa1f8f0-265e-4a58-b02c-45967a85db0e/metrics-server/0.log" Mar 19 13:55:57.894663 master-0 kubenswrapper[31830]: I0319 13:55:57.894599 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_monitoring-plugin-6675579648-kj9b2_0f7377b4-649e-496a-af31-69e2ebfccb36/monitoring-plugin/0.log" Mar 19 13:55:57.915016 master-0 kubenswrapper[31830]: I0319 13:55:57.914917 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-lpndz_a9d191d1-631d-4091-af8b-382283c18a5a/node-exporter/0.log" Mar 19 13:55:57.935578 master-0 kubenswrapper[31830]: I0319 13:55:57.935539 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-lpndz_a9d191d1-631d-4091-af8b-382283c18a5a/kube-rbac-proxy/0.log" Mar 19 13:55:57.952837 master-0 kubenswrapper[31830]: I0319 13:55:57.952780 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-lpndz_a9d191d1-631d-4091-af8b-382283c18a5a/init-textfile/0.log" Mar 19 13:55:57.976660 master-0 kubenswrapper[31830]: I0319 13:55:57.976563 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-5dc6c74576-k464h_de39c80c-acfa-4bc1-a844-95b170169b44/kube-rbac-proxy-main/0.log" Mar 19 13:55:57.989391 master-0 kubenswrapper[31830]: I0319 13:55:57.989204 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-5dc6c74576-k464h_de39c80c-acfa-4bc1-a844-95b170169b44/kube-rbac-proxy-self/0.log" Mar 19 13:55:58.022234 master-0 kubenswrapper[31830]: I0319 13:55:58.020943 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-5dc6c74576-k464h_de39c80c-acfa-4bc1-a844-95b170169b44/openshift-state-metrics/0.log" Mar 19 13:55:58.066781 master-0 kubenswrapper[31830]: I0319 13:55:58.066108 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_d6814e91-dba6-44c2-80a5-6ee9429a3643/prometheus/0.log" Mar 19 13:55:58.080419 master-0 kubenswrapper[31830]: I0319 13:55:58.080355 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_d6814e91-dba6-44c2-80a5-6ee9429a3643/config-reloader/0.log" Mar 19 13:55:58.099062 master-0 kubenswrapper[31830]: I0319 13:55:58.099016 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_d6814e91-dba6-44c2-80a5-6ee9429a3643/thanos-sidecar/0.log" Mar 19 13:55:58.114056 master-0 kubenswrapper[31830]: I0319 13:55:58.113362 31830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_d6814e91-dba6-44c2-80a5-6ee9429a3643/kube-rbac-proxy-web/0.log" Mar 19 13:55:58.148805 master-0 kubenswrapper[31830]: I0319 13:55:58.147647 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_d6814e91-dba6-44c2-80a5-6ee9429a3643/kube-rbac-proxy/0.log" Mar 19 13:55:58.182349 master-0 kubenswrapper[31830]: I0319 13:55:58.180876 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_d6814e91-dba6-44c2-80a5-6ee9429a3643/kube-rbac-proxy-thanos/0.log" Mar 19 13:55:58.202816 master-0 kubenswrapper[31830]: I0319 13:55:58.202297 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_d6814e91-dba6-44c2-80a5-6ee9429a3643/init-config-reloader/0.log" Mar 19 13:55:58.236949 master-0 kubenswrapper[31830]: I0319 13:55:58.236899 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-6c8df6d4b-qsrjj_86884445-e29b-492b-8810-b63b938b9170/prometheus-operator/0.log" Mar 19 13:55:58.257945 master-0 kubenswrapper[31830]: I0319 13:55:58.257892 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-6c8df6d4b-qsrjj_86884445-e29b-492b-8810-b63b938b9170/kube-rbac-proxy/0.log" Mar 19 13:55:58.274449 master-0 kubenswrapper[31830]: I0319 13:55:58.274398 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-admission-webhook-69c6b55594-z8xf6_882fd952-1914-47be-96bf-cac6341ca877/prometheus-operator-admission-webhook/0.log" Mar 19 13:55:58.327846 master-0 kubenswrapper[31830]: I0319 13:55:58.327742 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-6975d7769d-nvxfv_7c80f8d0-ee9b-4a4d-ba92-e241b2552e58/telemeter-client/0.log" Mar 19 13:55:58.352971 master-0 kubenswrapper[31830]: I0319 13:55:58.352930 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-6975d7769d-nvxfv_7c80f8d0-ee9b-4a4d-ba92-e241b2552e58/reload/0.log" Mar 19 13:55:58.379969 master-0 kubenswrapper[31830]: I0319 13:55:58.379861 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-6975d7769d-nvxfv_7c80f8d0-ee9b-4a4d-ba92-e241b2552e58/kube-rbac-proxy/0.log" Mar 19 13:55:58.423210 master-0 kubenswrapper[31830]: I0319 13:55:58.422732 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5ff76c69fd-pt6vq_3d3b5c49-51a9-465a-b6e9-b0107612c311/thanos-query/0.log" Mar 19 13:55:58.429356 master-0 kubenswrapper[31830]: I0319 13:55:58.428568 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d58dc466-cbcqj_ec2e9575-5f21-44a5-a34c-f076f726a1d2/manager/0.log" Mar 19 13:55:58.447511 master-0 kubenswrapper[31830]: I0319 13:55:58.447453 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5ff76c69fd-pt6vq_3d3b5c49-51a9-465a-b6e9-b0107612c311/kube-rbac-proxy-web/0.log" Mar 19 13:55:58.454340 master-0 kubenswrapper[31830]: I0319 13:55:58.454286 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-588d4d986b-4zgd2_9962d57a-2869-4044-a24e-65338d28f6c3/manager/0.log" Mar 19 13:55:58.474231 master-0 kubenswrapper[31830]: I0319 13:55:58.474146 31830 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5ff76c69fd-pt6vq_3d3b5c49-51a9-465a-b6e9-b0107612c311/kube-rbac-proxy/0.log" Mar 19 13:55:58.497662 master-0 kubenswrapper[31830]: I0319 13:55:58.497595 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5ff76c69fd-pt6vq_3d3b5c49-51a9-465a-b6e9-b0107612c311/prom-label-proxy/0.log" Mar 19 13:55:58.529625 master-0 kubenswrapper[31830]: I0319 13:55:58.529581 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5ff76c69fd-pt6vq_3d3b5c49-51a9-465a-b6e9-b0107612c311/kube-rbac-proxy-rules/0.log" Mar 19 13:55:58.545240 master-0 kubenswrapper[31830]: I0319 13:55:58.545095 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-79df6bcc97-cbj6m_0bf9354e-75bc-4f4d-b665-f23bf828bfa8/manager/0.log" Mar 19 13:55:58.552087 master-0 kubenswrapper[31830]: I0319 13:55:58.551989 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5ff76c69fd-pt6vq_3d3b5c49-51a9-465a-b6e9-b0107612c311/kube-rbac-proxy-metrics/0.log" Mar 19 13:55:58.561610 master-0 kubenswrapper[31830]: I0319 13:55:58.561573 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-67dd5f86f5-wq7gk_2ca9358c-cf3c-4965-a617-08dcd5e916c4/manager/0.log" Mar 19 13:55:58.572736 master-0 kubenswrapper[31830]: I0319 13:55:58.572695 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-8464cc45fb-d2qll_333d933c-7a84-455c-80c8-d5795ba1058d/manager/0.log" Mar 19 13:55:58.762253 master-0 kubenswrapper[31830]: I0319 13:55:58.762202 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7dd6bb94c9-6kkfv_1f0b9a13-7862-4829-a97d-56034487da2e/manager/0.log" Mar 19 13:55:58.782825 master-0 kubenswrapper[31830]: I0319 13:55:58.782343 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6f787dddc9-lpsb9_cc913cd6-6365-4019-a201-f4ed756e7238/manager/0.log" Mar 19 13:55:58.863426 master-0 kubenswrapper[31830]: I0319 13:55:58.863365 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-768b96df4c-tc6m5_a30da668-d209-4afb-a612-79302fb7942e/manager/0.log" Mar 19 13:55:58.888934 master-0 kubenswrapper[31830]: I0319 13:55:58.888780 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-55f864c847-vrk79_ac5b2ff6-6088-45fd-9b33-6d20a3ad9e59/manager/0.log" Mar 19 13:55:59.089102 master-0 kubenswrapper[31830]: I0319 13:55:59.088317 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67ccfc9778-xslmv_b5cf325f-5ed3-416a-b7cf-c95cc198afff/manager/0.log" Mar 19 13:55:59.163446 master-0 kubenswrapper[31830]: I0319 13:55:59.163388 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-767865f676-px9d7_467e2f90-bbbf-4d88-9b56-9ed6a353b45f/manager/0.log" Mar 19 13:55:59.302583 master-0 kubenswrapper[31830]: I0319 13:55:59.302534 31830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5d488d59fb-9vdlb_82330c7e-a21c-42e0-9f7c-ddc6e7269f0c/manager/0.log" Mar 19 13:55:59.328319 master-0 kubenswrapper[31830]: I0319 13:55:59.327968 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5b9f45d989-22wxs_da8e07b7-2ac3-454b-a30a-51b242c86b6a/manager/0.log" Mar 19 13:55:59.360530 master-0 kubenswrapper[31830]: I0319 13:55:59.360394 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-r5pn2_b8442707-1048-49e7-883b-9dfc0c48eb15/controller/0.log" Mar 19 13:55:59.361901 master-0 kubenswrapper[31830]: I0319 13:55:59.361775 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-74c4796899m7flr_49025043-9018-47ec-8930-e6580af6aeb2/manager/0.log" Mar 19 13:55:59.366118 master-0 kubenswrapper[31830]: I0319 13:55:59.366088 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-r5pn2_b8442707-1048-49e7-883b-9dfc0c48eb15/kube-rbac-proxy/0.log" Mar 19 13:55:59.382815 master-0 kubenswrapper[31830]: I0319 13:55:59.382360 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hx5pt_450fdf42-489c-4403-9c52-03c51471160c/controller/0.log" Mar 19 13:55:59.590766 master-0 kubenswrapper[31830]: I0319 13:55:59.590710 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-b85c4d696-mlg8z_c7ef7174-3939-4606-a689-d29f50fd7790/operator/0.log" Mar 19 13:56:00.621190 master-0 kubenswrapper[31830]: I0319 13:56:00.621115 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-86bd8996f6-8j8qk_45a81c5f-fb70-4b84-8c91-bc55830c36cd/manager/0.log" Mar 19 13:56:00.759562 master-0 kubenswrapper[31830]: I0319 13:56:00.759520 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-gxqv7_4bdab8cb-e11a-4b5d-9e1e-3cf37ce23ab8/registry-server/0.log" Mar 19 13:56:00.868666 master-0 kubenswrapper[31830]: I0319 13:56:00.868603 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-884679f54-4jhnc_ebd2199d-6888-4d1a-8e5d-b951062bdc18/manager/0.log" Mar 19 13:56:00.917184 master-0 kubenswrapper[31830]: I0319 13:56:00.915159 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5784578c99-9ldx8_9897197c-6347-48f3-bce4-f2e70d2241af/manager/0.log" Mar 19 13:56:00.939532 master-0 kubenswrapper[31830]: I0319 13:56:00.939482 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-zt7k5_24b5e2be-28d1-44bc-a999-d68572529f9a/operator/0.log" Mar 19 13:56:00.983622 master-0 kubenswrapper[31830]: I0319 13:56:00.983565 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-c674c5965-85lzd_1e3ac87a-41fb-4d68-8531-01685bc8f17c/manager/0.log" Mar 19 13:56:01.001819 master-0 kubenswrapper[31830]: I0319 13:56:01.001622 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-d6b694c5-ztnkm_8dac5751-ffc3-4927-9cb4-362538cffc88/manager/0.log" Mar 19 13:56:01.017069 master-0 kubenswrapper[31830]: 
I0319 13:56:01.016868 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5c5cb9c4d7-bqfjg_462afca1-50bf-43ba-bcdf-b7d71f9504d5/manager/0.log" Mar 19 13:56:01.040827 master-0 kubenswrapper[31830]: I0319 13:56:01.040203 31830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6c4d75f7f9-gx66s_558a5b2d-e0d2-4a17-ab12-f4e3da3c522a/manager/0.log"